Name | Last commit | Date
baby-llama | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00
batched | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00
batched-bench | llama : cleanup unused mmq flags (#5772) | 2024-03-01 13:39:06 +02:00
batched.swift | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
beam-search | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
benchmark | 2-bit quantizations (#4897) | 2024-01-14 09:45:56 +02:00
convert-llama2c-to-ggml | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00
embedding | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
export-lora | ci : add an option to fail on compile warning (#3952) | 2024-02-17 23:03:14 +02:00
finetune | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00
gguf | gguf : simplify example dependencies | 2023-12-21 23:08:14 +02:00
imatrix | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
infill | convert : automatically fall back to HfVocab if tokenizer.model doesn't exist (#5821) | 2024-03-02 12:27:26 -05:00
jeopardy | parallel : add option to load external prompt file (#3416) | 2023-10-06 16:16:38 +03:00
llama-bench | Support multiple GPUs (split mode) on SYCL backend (#5806) | 2024-03-02 19:49:30 +08:00
llama.android | ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711) | 2024-02-25 20:43:00 +02:00
llama.swiftui | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
llava | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00
lookahead | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
lookup | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
main | main : support special tokens as reverse/anti prompt (#5847) | 2024-03-04 09:57:20 +02:00
main-cmake-pkg | main-cmake-pkg : fix build issue (#4665) | 2023-12-29 16:18:20 +02:00
parallel | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
passkey | llama : fix defrag bugs + add parameter (#5735) | 2024-02-27 14:35:51 +02:00
perplexity | ci : fix wikitext url + compile warnings (#5569) | 2024-02-18 22:39:30 +02:00
quantize | IQ4_XS: a 4.25 bpw quantization (#5747) | 2024-02-27 16:34:24 +02:00
quantize-stats | refactor : switch to emplace_back to avoid extra object (#5291) | 2024-02-03 13:23:37 +02:00
save-load-state | llama : minimize size used for state save/load (#4820) | 2024-01-13 18:29:43 +02:00
server | server : init http requests thread pool with --parallel if set (#5836) | 2024-03-03 09:48:36 +02:00
simple | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
speculative | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
sycl | Support multiple GPUs (split mode) on SYCL backend (#5806) | 2024-03-02 19:49:30 +08:00
tokenize | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
train-text-from-scratch | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00
alpaca.sh | alpaca.sh : update model file name (#2074) | 2023-07-06 19:17:50 +03:00
base-translate.sh | examples : improve base-translate.sh script (#4783) | 2024-01-06 11:40:24 +02:00
chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00
chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00
chat-persistent.sh | llama : fix session saving/loading (#3400) | 2023-10-03 21:04:01 +03:00
chat-vicuna.sh | examples : add chat-vicuna.sh (#1854) | 2023-06-15 21:05:53 +03:00
chat.sh | main : log file (#2748) | 2023-08-30 09:29:32 +03:00
CMakeLists.txt | gguf : add python reader example (#5216) | 2024-02-13 19:56:38 +02:00
gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00
json-schema-to-grammar.py | examples : support minItems/maxItems in JSON grammar converter (#5039) | 2024-02-19 16:14:07 +02:00
llama2-13b.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00
llama2.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00
llama.vim | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00
llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00
make-ggml.py | make-ggml.py : compatibility with more models and GGUF (#3290) | 2023-09-27 19:25:12 +03:00
Miku.sh | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2023-07-21 11:13:18 +03:00
pydantic_models_to_grammar.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00
pydantic-models-to-grammar-examples.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00
reason-act.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00
server-llama2-13B.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00