# root/llama.cpp — Actions

Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-24 10:24:35 +00:00.
## Workflows

- build.yml
- close-issue.yml
- docker.yml
- editorconfig.yml
- gguf-publish.yml
- labeler.yml
- python-check-requirements.yml
- python-lint.yml
- python-type-check.yml
- server.yml
## Workflow runs
| Workflow run | Commit message | Trigger | Branch/Tag | Date (UTC) | Duration |
| --- | --- | --- | --- | --- | --- |
| python-check-requirements.yml #1508 | llama : add Falcon3 support (#10864) | Commit 382bc7f2e8 pushed by root | b4341 | 2024-12-19 02:44:32 | 0s |
| python-type-check.yml #1507 | Revert "llama : add Falcon3 support (#10864)" | Commit e10dc009b5 pushed by root | revert-10864-falcon3_integration | 2024-12-19 02:44:32 | 0s |
| python-check-requirements.yml #1506 | Revert "llama : add Falcon3 support (#10864)" | Commit e10dc009b5 pushed by root | revert-10864-falcon3_integration | 2024-12-19 02:44:32 | 0s |
| python-type-check.yml #1505 | Use model->gguf_kv for loading the template instead of using the C API. (#10868) | Commit d62b532c52 pushed by root | master | 2024-12-18 07:24:36 | 0s |
| python-lint.yml #1504 | Use model->gguf_kv for loading the template instead of using the C API. (#10868) | Commit d62b532c52 pushed by root | master | 2024-12-18 07:24:36 | 0s |
| python-check-requirements.yml #1503 | Use model->gguf_kv for loading the template instead of using the C API. (#10868) | Commit d62b532c52 pushed by root | master | 2024-12-18 07:24:36 | 0s |
| python-type-check.yml #1502 | server : return tokens ids only if requested | Commit 8bcfc5551e pushed by root | gg/server-content-tokens | 2024-12-19 02:44:32 | 0s |
| python-type-check.yml #1501 | tts : extend python example to generate spectrogram | Commit 265a5eac5a pushed by root | gg/tts-add-outetts | 2024-12-18 15:34:37 | 0s |
| python-check-requirements.yml #1500 | tts : extend python example to generate spectrogram | Commit 265a5eac5a pushed by root | gg/tts-add-outetts | 2024-12-18 20:44:32 | 0s |
| docker.yml #1499 | rwkv6: add wkv6 support for Vulkan backend (#10829) | Scheduled | master | 2024-12-18 04:12:32 | 0s |
| close-issue.yml #1498 | rwkv6: add wkv6 support for Vulkan backend (#10829) | Scheduled | master | 2024-12-18 00:42:32 | 0s |
| python-type-check.yml #1497 | tts : outetts-voc -> wavtokenizer-dec | Commit 985d59f5e5 pushed by root | gg/tts-add-outetts | 2024-12-17 14:44:32 | 0s |
| python-check-requirements.yml #1496 | tts : outetts-voc -> wavtokenizer-dec | Commit 985d59f5e5 pushed by root | gg/tts-add-outetts | 2024-12-17 14:44:32 | 0s |
| docker.yml #1495 | llava : Allow locally downloaded models for QwenVL (#10833) | Scheduled | master | 2024-12-17 04:12:32 | 0s |
| close-issue.yml #1494 | llava : Allow locally downloaded models for QwenVL (#10833) | Scheduled | master | 2024-12-17 00:42:32 | 0s |
| python-type-check.yml #1493 | llama : add Deepseek MoE v1 & GigaChat models (#10827) | Commit a0974156f3 pushed by root | b4333 | 2024-12-17 02:44:32 | 0s |
| python-check-requirements.yml #1492 | llama : add Deepseek MoE v1 & GigaChat models (#10827) | Commit a0974156f3 pushed by root | b4333 | 2024-12-17 02:44:32 | 0s |
| python-type-check.yml #1491 | llava : Allow locally downloaded models for QwenVL (#10833) | Commit 4ddd199f6f pushed by root | master | 2024-12-17 02:44:32 | 0s |
| python-lint.yml #1490 | llava : Allow locally downloaded models for QwenVL (#10833) | Commit 4ddd199f6f pushed by root | master | 2024-12-17 02:44:32 | 0s |
| python-check-requirements.yml #1489 | llava : Allow locally downloaded models for QwenVL (#10833) | Commit 4ddd199f6f pushed by root | master | 2024-12-17 02:44:32 | 0s |
| gguf-publish.yml #1488 | gguf-py : bump to v0.13.0 | Commit b5ae1ddff9 pushed by root | gguf-v0.13.0 | 2024-12-16 14:44:32 | 0s |
| python-type-check.yml #1487 | server: Fix `has_next_line` in JSON response (#10818) | Commit 89d604f2c8 pushed by root | gguf-v0.12.0 | 2024-12-16 14:44:32 | 0s |
| gguf-publish.yml #1486 | server: Fix `has_next_line` in JSON response (#10818) | Commit 89d604f2c8 pushed by root | gguf-v0.12.0 | 2024-12-16 14:44:32 | 0s |
| python-type-check.yml #1485 | server: Fix `has_next_line` in JSON response (#10818) | Commit 89d604f2c8 pushed by root | b4329 | 2024-12-16 08:44:32 | 0s |
| python-type-check.yml #1484 | server: Fix `has_next_line` in JSON response (#10818) | Commit 89d604f2c8 pushed by root | master | 2024-12-15 22:14:36 | 0s |
| python-lint.yml #1483 | server: Fix `has_next_line` in JSON response (#10818) | Commit 89d604f2c8 pushed by root | master | 2024-12-15 22:14:36 | 0s |
| docker.yml #1482 | nix: allow to override rocm gpu targets (#10794) | Scheduled | master | 2024-12-16 04:12:32 | 0s |
| close-issue.yml #1481 | nix: allow to override rocm gpu targets (#10794) | Scheduled | master | 2024-12-16 00:42:32 | 0s |
| python-type-check.yml #1480 | llama : add Qwen2VL support + multimodal RoPE (#10361) | Commit ba1cb19cdd pushed by root | b4327 | 2024-12-15 14:44:32 | 0s |
| python-check-requirements.yml #1479 | llama : add Qwen2VL support + multimodal RoPE (#10361) | Commit ba1cb19cdd pushed by root | b4327 | 2024-12-15 14:44:32 | 0s |