Georgi Gerganov
1e7a0092dd
Merge branch 'master' into gguf
...
ggml-ci
2023-08-21 16:28:30 +03:00
Kawrakow
cb1c0727bd
HellaSwag: split token evaluation into batches if needed ( #2681 )
...
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-08-21 11:11:31 +03:00
Kawrakow
5e9ff54a67
More efficient Hellaswag implementation ( #2677 )
...
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-08-20 16:44:46 +03:00
klosax
28b8c265eb
cmpnct_gpt2bpe.hpp : cleanup
2023-08-19 18:26:51 +02:00
klosax
c0a1269b7f
Update examples/server/README.md
...
Co-authored-by: slaren <slarengh@gmail.com>
2023-08-19 15:27:37 +02:00
klosax
6a2e520095
cmpnct_gpt2bpe.hpp : remove non-general stuff
2023-08-19 13:19:02 +02:00
klosax
8945d47f52
gptneox-main.cpp : fixes
2023-08-19 12:09:24 +02:00
klosax
781bf2481f
falcon-main.cpp : fixes
2023-08-19 12:08:17 +02:00
klosax
dadf098b5a
cmpnct_gpt2bpe.hpp : fixes
2023-08-19 12:06:22 +02:00
klosax
1d80eea574
falcon-main.cpp : fix for falcon 40b
2023-08-19 01:03:37 +02:00
klosax
d5e976c12b
falcon-main.cpp : falcon inference example
2023-08-19 00:02:18 +02:00
Georgi Gerganov
1f0bccb279
server : better default prompt ( #2646 )
2023-08-19 05:45:36 +08:00
Jhen-Jie Hong
f63564adfa
server : update xxd usage for older versions compatibility ( #2649 )
...
* server : update xxd usage for older versions compatibility
* remove unused $func
2023-08-19 05:41:32 +08:00
Georgi Gerganov
5d2656d670
llama : avoid hardcoded special tokens
2023-08-18 17:29:20 +03:00
Georgi Gerganov
2d6c2c757c
llama : remove C++ API + reorganize common source in /common dir
2023-08-18 16:22:48 +03:00
Georgi Gerganov
38016ed9ec
Merge branch 'master' into gguf
2023-08-18 15:21:48 +03:00
slaren
097e121e2f
llama : add benchmark example ( #2626 )
...
* llama : add benchmark example
* add to examples CMakeLists.txt
* fix msvc build
* add missing include
* add Bessel's correction to stdev calculation
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* improve markdown formatting
* add missing include
* print warning if NDEBUG is not defined
* remove n_prompt and n_gen from the matrix, use each value separately instead
* better checks for non-optimized builds
* llama.cpp : fix MEM_REQ_SCRATCH0 reusing the value of n_ctx of the first call
* fix json formatting
* add sql output
* add basic cpu and gpu info (linux/cuda only)
* markdown: also show values that differ from the default
* markdown: add build id
* cleanup
* improve formatting
* formatting
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2023-08-18 12:44:58 +02:00
Georgi Gerganov
e9b12c332e
perplexity : more meaningful ETA number - 2 decimal points
2023-08-18 12:48:55 +03:00
Georgi Gerganov
856afff746
Merge branch 'master' into gguf
2023-08-18 12:38:05 +03:00
staviq
10151bee2e
server : support for saving templates in browser LocalStorage ( #2486 )
...
* support for templates in browser LocalStorage
* sync accepted #2409 fix from upstream
* convert autosave invocation to useEffect
* Apply suggestions from code review
Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
* Regen index.html.cpp, suggested from code review
---------
Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
2023-08-18 07:34:01 +08:00
klosax
78e1e57862
quantize-stats.cpp : .bin --> .gguf
2023-08-17 19:18:24 +02:00
klosax
fb11dd3f92
common.h : .bin --> .gguf
2023-08-17 19:16:35 +02:00
Georgi Gerganov
93f285bdf1
gptneox : move as a WIP example
2023-08-17 19:49:45 +03:00
Georgi Gerganov
dd9e2fc988
ci : update ".bin" to ".gguf" extension
...
ggml-ci
2023-08-17 19:32:14 +03:00
Georgi Gerganov
6d66ef96eb
Merge branch 'master' into gguf
2023-08-17 19:04:59 +03:00
Kerfuffle
8dae7ce684
Add --cfg-negative-prompt-file option for examples ( #2591 )
...
Add --cfg-negative-prompt-file option for examples
2023-08-17 07:29:44 -06:00
M. Yusuf Sarıgöz
42f8fe1927
examples/gguf : no need to keep q option for quantization any more
2023-08-17 08:56:42 +03:00
Georgi Gerganov
88b5769487
gguf : deduplicate ( #2629 )
...
* gguf : better type names
* dedup : CPU + Metal is working
* ggml : fix warnings about unused results
* llama.cpp : fix line feed and compiler warning
* llama : fix strncpy warning + note token_to_str does not write null
* llama : restore the original load/save session implementation
Will migrate this to GGUF in the future
* convert-llama-h5-to-gguf.py : support alt ctx param name
* ggml : assert when using ggml_mul with non-F32 src1
* examples : dedup simple
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
2023-08-16 19:25:29 +03:00
Georgi Gerganov
758ff1bbb5
llama : refactor model loading code ( #2620 )
...
* llama : style formatting + remove helper methods
* llama : fix quantization using gguf tool
* llama : simplify gguf_file_saver
* llama : fix method names
* llama : simplify write_header()
* llama : no need to pass full file loader to the file saver
just gguf_ctx
* llama : gguf_file_saver write I32
* llama : refactor tensor names (#2622 )
* gguf: update tensor names searched in quantization
* gguf : define tensor names as constants
* gguf : initial write API (not tested yet)
* gguf : write to file API (not tested)
* gguf : initial write API ready + example
* gguf : fix header write
* gguf : fixes + simplify example + add ggml_nbytes_pad()
* gguf : minor
* llama : replace gguf_file_saver with new gguf write API
* gguf : streaming support when writing files
* gguf : remove obsolete write methods
* gguf : remove obsolete gguf_get_arr_xxx API
* llama : simplify gguf_file_loader
* llama : move hparams and vocab from gguf_file_loader to llama_model_loader
* llama : merge gguf-util.h in llama.cpp
* llama : reorder definitions in .cpp to match .h
* llama : minor simplifications
* llama : refactor llama_model_loader (WIP)
wip : remove ggml_ctx from llama_model_loader
wip : merge gguf_file_loader in llama_model_loader
* llama : fix shape prints
* llama : fix Windows build + fix norm_rms_eps key
* llama : throw error on missing KV pairs in model meta data
* llama : improve printing + log meta data
* llama : switch print order of meta data
---------
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
2023-08-16 14:34:03 +03:00
Jhen-Jie Hong
3ebb00935f
server : add missing /json-schema-to-grammar.mjs ( #2616 )
...
fixes #2611
2023-08-15 06:14:14 +08:00
Georgi Gerganov
7494c78428
llama : sync gguf-llama with llama ( #2613 )
...
* llama : sync gguf-llama with llama
* tests : fix build + warnings (test-tokenizer-1 still fails)
* tests : fix wstring_convert
* convert : fix layer names
* llama : sync gguf-llama.cpp
* convert : update HF converter to new tokenizer voodoo magics
2023-08-14 21:33:33 +03:00
goerch
afc4ca2889
convert : update convert-new.py with tokenizer fixes ( #2614 )
...
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
2023-08-14 20:20:04 +03:00
goerch
ec1b100720
llama : tokenizer fixes ( #2549 )
...
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
2023-08-14 19:30:28 +03:00
Georgi Gerganov
8af3a99ff1
Merge branch 'master' into gguf
2023-08-14 16:39:18 +03:00
Cheng Shao
d75561df20
server : add --numa support ( #2524 )
2023-08-14 16:36:42 +03:00
Georgi Gerganov
f00780b2ee
llama : sync gguf-llama.cpp with latest llama.cpp ( #2608 )
...
* llama : sync gguf-llama.cpp with latest llama.cpp
* minor : indentation + assert
* llama : refactor gguf_buffer and gguf_ctx_buffer
* llama : minor
2023-08-14 16:28:44 +03:00
Georgi Gerganov
0c19ae70d5
simple : minor style changes
2023-08-14 12:58:12 +03:00
Jhen-Jie Hong
2feb8934eb
server : fix default grammar by using empty string in the UI ( #2604 )
2023-08-14 16:20:17 +08:00
Jhen-Jie Hong
5517d6e692
server : implement json-schema-to-grammar.mjs & add grammar param in the UI ( #2588 )
...
* server : implement json-schema-to-grammar.mjs following the python impl
* server : add grammar support in chat.mjs
* server : implement grammar param in the UI
* server : generate .hpp
* server : remove trailing whitespaces
* server : generate .hpp
* server : fix sort of prop pairs
* server : optimize regex & iteration
2023-08-14 15:16:54 +08:00
Georgi Gerganov
56a1f32072
Merge branch 'master' into gguf
2023-08-14 10:14:05 +03:00
M. Yusuf Sarıgöz
202eab04d3
gguf : quantization is working
2023-08-12 16:39:05 +03:00
M. Yusuf Sarıgöz
1fc3d30b71
gguf : start implementing quantization (WIP)
2023-08-12 16:09:47 +03:00
M. Yusuf Sarıgöz
b2571af255
gguf : start implementing quantization (WIP)
2023-08-12 14:28:17 +03:00
byte-6174
b19edd54d5
Adding support for llama2.c models ( #2559 )
2023-08-12 01:17:25 +02:00
Equim
53dc399472
server: fixed wrong variable name in timing json ( #2579 )
...
* server: fixed wrong variable name in timing json
* remove redundant entry
2023-08-12 00:35:14 +02:00
DannyDaemonic
9ca4abed89
Handle ENABLE_VIRTUAL_TERMINAL_PROCESSING more gracefully on earlier versions of Windows.
2023-08-10 13:11:36 -07:00
Christian Demsar
e59fcb2bc1
Add --n-predict -2 for stopping generation on full context ( #2565 )
2023-08-10 16:28:27 +02:00
M. Yusuf Sarıgöz
1c4d8bf981
gguf : start implementing libllama in GGUF (WIP)
2023-08-10 16:52:08 +03:00
Martin Krasser
1638757767
Fix grammar-based sampling issue in server ( #2566 )
2023-08-10 13:16:38 +03:00
Martin Krasser
f5bfea0580
Allow passing grammar to completion endpoint ( #2532 )
...
* Allow passing grammar to completion endpoint
2023-08-08 16:29:19 +03:00