root/llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-12 03:31:46 +00:00.
Actions
Workflows: build.yml, close-issue.yml, docker.yml, editorconfig.yml, gguf-publish.yml, labeler.yml, python-check-requirements.yml, python-lint.yml, python-type-check.yml, server.yml
| Commit | Run | Trigger | Branch | Date (UTC) | Duration |
|---|---|---|---|---|---|
| convert : add --print-supported-models option (#11172) | #1680 | Scheduled | master | 2025-01-12 04:12:42 | 0s |
| llama: add support for QRWKV6 model architecture (#11001) | #1669 | Scheduled | master | 2025-01-11 04:12:42 | 0s |
| fix: add missing msg in static_assert (#11143) | #1655 | Scheduled | master | 2025-01-10 04:12:42 | 0s |
| rpc : code cleanup (#11107) | #1644 | Scheduled | master | 2025-01-09 04:12:42 | 0s |
| llama-run : fix context size (#11094) | #1642 | Scheduled | master | 2025-01-08 04:12:42 | 0s |
| Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074) | #1635 | Scheduled | master | 2025-01-07 04:12:42 | 0s |
| Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074) | #1633 | Scheduled | master | 2025-01-06 04:12:32 | 0s |
| common : disable KV cache shifting automatically for unsupported models (#11053) | #1621 | Scheduled | master | 2025-01-05 04:12:32 | 0s |
| server: bench: minor fixes (#10765) | #1618 | Scheduled | master | 2025-01-04 04:12:32 | 0s |
| ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027) | #1605 | Scheduled | master | 2025-01-03 04:12:32 | 0s |
| ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027) | #1603 | Scheduled | master | 2025-01-02 04:12:32 | 0s |
| vulkan: optimize mul_mat for small values of N (#10991) | #1594 | Scheduled | master | 2025-01-01 04:12:32 | 0s |
| vulkan: im2col and matmul optimizations for stable diffusion (#10942) | #1592 | Scheduled | master | 2024-12-31 04:12:32 | 0s |
| server: added more docs for response_fields field (#10995) | #1590 | Scheduled | master | 2024-12-30 04:12:32 | 0s |
| vulkan: multi-row k quants (#10846) | #1588 | Scheduled | master | 2024-12-29 04:12:32 | 0s |
| vulkan: multi-row k quants (#10846) | #1585 | Scheduled | master | 2024-12-28 04:12:32 | 0s |
| ggml : more perfo with llamafile tinyblas on x86_64 (#10714) | #1581 | Scheduled | master | 2024-12-27 04:12:32 | 0s |
| ggml : more perfo with llamafile tinyblas on x86_64 (#10714) | #1579 | Scheduled | master | 2024-12-26 04:12:32 | 0s |
| ggml : fix const usage in SSE path (#10962) | #1568 | Scheduled | master | 2024-12-25 04:12:32 | 0s |
| llama : support InfiniAI Megrez 3b (#10893) | #1557 | Scheduled | master | 2024-12-24 04:12:32 | 0s |
| convert : add BertForMaskedLM (#10919) | #1550 | Scheduled | master | 2024-12-23 04:12:32 | 0s |
| ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0() (#10874) | #1545 | Scheduled | master | 2024-12-22 04:12:32 | 0s |
| clip : disable GPU support (#10896) | #1536 | Scheduled | master | 2024-12-21 04:12:32 | 0s |
| ggml : fix arm build (#10890) | #1527 | Scheduled | master | 2024-12-20 04:12:32 | 0s |
| Use model->gguf_kv for loading the template instead of using the C API. (#10868) | #1512 | Scheduled | master | 2024-12-19 04:12:32 | 0s |
| rwkv6: add wkv6 support for Vulkan backend (#10829) | #1499 | Scheduled | master | 2024-12-18 04:12:32 | 0s |
| llava : Allow locally downloaded models for QwenVL (#10833) | #1495 | Scheduled | master | 2024-12-17 04:12:32 | 0s |
| nix: allow to override rocm gpu targets (#10794) | #1482 | Scheduled | master | 2024-12-16 04:12:32 | 0s |
| Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (#10693) | #1475 | Scheduled | master | 2024-12-15 04:12:32 | 0s |
| contrib : add ngxson as codeowner (#10804) | #1470 | Scheduled | master | 2024-12-14 04:12:49 | 0s |
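Every run above is a "Scheduled" trigger firing once a day at roughly 04:12 UTC, so each row simply records whatever commit was at the tip of master at that moment. A minimal sketch of such a daily cron trigger in Gitea/GitHub Actions workflow syntax is shown below; the workflow name, cron expression, and job body are illustrative assumptions, not the contents of the repository's actual workflow files listed above.

```yaml
# Hypothetical sketch of a daily scheduled workflow, consistent with
# the run timestamps above. Not taken from build.yml, close-issue.yml,
# or any other actual llama.cpp workflow file.
name: nightly
on:
  schedule:
    - cron: "12 4 * * *"   # fire daily at 04:12 UTC

jobs:
  nightly:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Scheduled runs always check out the current tip of the default
      # branch, which is why duplicate commit titles appear on quiet days.
      - run: echo "running against $(git rev-parse HEAD)"
```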