Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-12 03:31:46 +00:00)
llama : remove redundant loop when constructing ubatch (#9574)
ggml-alloc : fix list of allocated tensors with GGML_ALLOCATOR_DEBUG (#9573)
Update CUDA graph on scale change plus clear nodes/params (#9550)