| Name | Last commit message | Last commit date |
|---|---|---|
| baby-llama | build : enable more non-default compiler warnings (#3200) | 2023-09-28 17:41:44 -04:00 |
| batched | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2023-10-24 16:48:37 +03:00 |
| batched-bench | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| batched.swift | speculative : add tree-based sampling example (#3624) | 2023-10-18 16:21:57 +03:00 |
| beam-search | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| benchmark | benchmark-matmult : do not use integer abs() on a float (#3277) | 2023-09-20 12:06:08 -04:00 |
| convert-llama2c-to-ggml | gguf : support big endian platform (#3552) | 2023-10-20 14:19:40 +03:00 |
| embedding | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00 |
| export-lora | train : finetune LORA (#2632) | 2023-09-28 21:40:11 +03:00 |
| finetune | finetune : add -ngl parameter (#3762) | 2023-11-01 13:49:04 +02:00 |
| gguf | check C++ code with -Wmissing-declarations (#3184) | 2023-09-15 15:38:27 -04:00 |
| infill | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| jeopardy | parallel : add option to load external prompt file (#3416) | 2023-10-06 16:16:38 +03:00 |
| llama-bench | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| llava | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| main | Merge commit 'e16b9fa4baa8a09c6619b116159830e898050942' into nomic-vulkan | 2023-11-23 17:22:04 -05:00 |
| main-cmake-pkg | cmake : add missed dependencies (#3763) | 2023-10-24 20:48:45 +03:00 |
| metal | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| parallel | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| perplexity | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| quantize | ggml : quantization refactoring (#3833) | 2023-10-29 18:32:28 +02:00 |
| quantize-stats | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00 |
| save-load-state | save-load-state : fix example + add ci test (#3655) | 2023-10-17 19:12:46 +03:00 |
| server | server : re-enable completion and embedded at the same time (#3876) | 2023-11-01 11:28:28 +02:00 |
| simple | simple : fix batch handling (#3803) | 2023-10-27 08:37:41 -06:00 |
| speculative | llama : add option for greedy sampling with probs (#3813) | 2023-10-28 14:23:11 +03:00 |
| train-text-from-scratch | train-text-from-scratch : fix assert failure in ggml-alloc (#3618) | 2023-10-17 20:00:58 +03:00 |
| alpaca.sh | alpaca.sh : update model file name (#2074) | 2023-07-06 19:17:50 +03:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00 |
| chat-persistent.sh | llama : fix session saving/loading (#3400) | 2023-10-03 21:04:01 +03:00 |
| chat-vicuna.sh | examples : add chat-vicuna.sh (#1854) | 2023-06-15 21:05:53 +03:00 |
| chat.sh | main : log file (#2748) | 2023-08-30 09:29:32 +03:00 |
| CMakeLists.txt | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00 |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00 |
| json-schema-to-grammar.py | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| llama2-13b.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00 |
| llama2.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00 |
| llama.vim | vim : streaming and more (#2495) | 2023-08-08 14:44:48 +03:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00 |
| make-ggml.py | make-ggml.py : compatibility with more models and GGUF (#3290) | 2023-09-27 19:25:12 +03:00 |
| Miku.sh | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2023-07-21 11:13:18 +03:00 |
| reason-act.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| server-llama2-13B.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |