Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-09-22 13:06:19 +00:00)
llama: remove redundant loop when constructing ubatch (#9574)
ggml-alloc : fix list of allocated tensors with GGML_ALLOCATOR_DEBUG (#9573)
Update CUDA graph on scale change plus clear nodes/params (#9550)