root / llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-12 03:31:46 +00:00
Actions
Workflows: build.yml, close-issue.yml, docker.yml, editorconfig.yml, gguf-publish.yml, labeler.yml, python-check-requirements.yml, python-lint.yml, python-type-check.yml, server.yml
Run    Trigger    Branch  Started (UTC)               Dur  Commit
#1679  Scheduled  master  2025-01-12 00:42:42 +00:00  0s   convert : add --print-supported-models option (#11172)
#1665  Scheduled  master  2025-01-11 00:42:42 +00:00  0s   doc: add cuda guide for fedora (#11135)
#1650  Scheduled  master  2025-01-10 00:42:42 +00:00  0s   ci : use actions from ggml-org (#11140)
#1643  Scheduled  master  2025-01-09 00:42:42 +00:00  0s   rpc : code cleanup (#11107)
#1641  Scheduled  master  2025-01-08 00:42:42 +00:00  0s   llama : remove unused headers (#11109)
#1634  Scheduled  master  2025-01-07 00:42:42 +00:00  0s   Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074)
#1632  Scheduled  master  2025-01-06 00:42:32 +00:00  0s   Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074)
#1620  Scheduled  master  2025-01-05 00:42:32 +00:00  0s   common : disable KV cache shifting automatically for unsupported models (#11053)
#1617  Scheduled  master  2025-01-04 00:42:32 +00:00  0s   server: bench: minor fixes (#10765)
#1604  Scheduled  master  2025-01-03 00:42:32 +00:00  0s   ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)
#1602  Scheduled  master  2025-01-02 00:42:32 +00:00  0s   ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)
#1593  Scheduled  master  2025-01-01 00:42:32 +00:00  0s   vulkan: optimize mul_mat for small values of N (#10991)
#1591  Scheduled  master  2024-12-31 00:42:32 +00:00  0s   vulkan: im2col and matmul optimizations for stable diffusion (#10942)
#1589  Scheduled  master  2024-12-30 00:42:32 +00:00  0s   server: added more docs for response_fields field (#10995)
#1587  Scheduled  master  2024-12-29 00:42:32 +00:00  0s   vulkan: multi-row k quants (#10846)
#1584  Scheduled  master  2024-12-28 00:42:32 +00:00  0s   vulkan: multi-row k quants (#10846)
#1580  Scheduled  master  2024-12-27 00:42:32 +00:00  0s   ggml : more perfo with llamafile tinyblas on x86_64 (#10714)
#1573  Scheduled  master  2024-12-26 00:42:32 +00:00  0s   ggml : more perfo with llamafile tinyblas on x86_64 (#10714)
#1567  Scheduled  master  2024-12-25 00:42:32 +00:00  0s   server : fix missing model id in /model endpoint (#10957)
#1551  Scheduled  master  2024-12-24 00:42:32 +00:00  0s   vulkan: build fixes for 32b (#10927)
#1549  Scheduled  master  2024-12-23 00:42:32 +00:00  0s   convert : add BertForMaskedLM (#10919)
#1544  Scheduled  master  2024-12-22 00:42:32 +00:00  0s   SYCL: Migrate away from deprecated ggml_tensor->backend (#10840)
#1535  Scheduled  master  2024-12-21 00:42:32 +00:00  0s   clip : disable GPU support (#10896)
#1526  Scheduled  master  2024-12-20 00:42:32 +00:00  0s   ggml : fix arm build (#10890)
#1511  Scheduled  master  2024-12-19 00:42:32 +00:00  0s   Use model->gguf_kv for loading the template instead of using the C API. (#10868)
#1498  Scheduled  master  2024-12-18 00:42:32 +00:00  0s   rwkv6: add wkv6 support for Vulkan backend (#10829)
#1494  Scheduled  master  2024-12-17 00:42:32 +00:00  0s   llava : Allow locally downloaded models for QwenVL (#10833)
#1481  Scheduled  master  2024-12-16 00:42:32 +00:00  0s   nix: allow to override rocm gpu targets (#10794)
#1474  Scheduled  master  2024-12-15 00:42:32 +00:00  0s   Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (#10693)
#1469  Scheduled  master  2024-12-14 00:42:49 +00:00  0s   contrib : add ngxson as codeowner (#10804)
Page 1 of 5