| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| batched | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| batched-bench | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| batched.swift | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| benchmark | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| convert-llama2c-to-ggml | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cvector-generator | Add support for sqrt on CUDA (#7953) | 2024-06-17 00:23:04 +02:00 |
| embedding | llama : allow pooled embeddings on any model (#7477) | 2024-06-21 08:38:22 +03:00 |
| eval-callback | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| export-lora | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| finetune | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gbnf-validator | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gguf | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gguf-split | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gritlm | llama : allow pooled embeddings on any model (#7477) | 2024-06-21 08:38:22 +03:00 |
| imatrix | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| infill | Only use FIM middle token if it exists (#7648) | 2024-06-18 22:19:45 +10:00 |
| jeopardy | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | llama-bench : fix RPC indication (#7936) | 2024-06-14 16:47:41 +03:00 |
| llama.android | android : module (#7502) | 2024-05-25 11:11:33 +03:00 |
| llama.swiftui | swiftui : enable stream updating (#7754) | 2024-06-21 08:30:58 +03:00 |
| llava | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| lookahead | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| lookup | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| main | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| main-cmake-pkg | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| parallel | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| passkey | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| perplexity | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| quantize | Update llama-quantize ppl/file size output from LLaMA-v1 to Llama-3 values (#8058) | 2024-06-22 15:16:10 +02:00 |
| quantize-stats | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| retrieval | llama : allow pooled embeddings on any model (#7477) | 2024-06-21 08:38:22 +03:00 |
| rpc | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| save-load-state | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| server | server : fix smart slot selection (#8020) | 2024-06-20 09:57:10 +10:00 |
| simple | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| speculative | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| sycl | [SYCL] Fix windows build and inference (#8003) | 2024-06-20 21:19:05 +08:00 |
| tokenize | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| train-text-from-scratch | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| base-translate.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-vicuna.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| CMakeLists.txt | Add cvector-generator example (#7514) | 2024-06-15 18:53:40 +02:00 |
| convert-legacy-llama.py | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| json_schema_to_grammar.py | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| json-schema-pydantic-example.py | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama.vim | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| pydantic_models_to_grammar.py | grammars: x{min,max} repetition operator (#6640) | 2024-06-06 10:07:06 +01:00 |
| pydantic-models-to-grammar-examples.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00 |
| reason-act.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex-to-grammar.py | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| server-embd.py | server : refactor (#5882) | 2024-03-07 11:41:53 +02:00 |
| server-llama2-13B.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| ts-type-to-grammar.sh | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |