klosax
278ada9572
gguf.py : bytesarray for gpt2bpe tokenizer
2023-08-04 04:07:57 +02:00
klosax
fb0b243705
Makefile : remove gptneox-common
2023-08-04 04:02:10 +02:00
klosax
5d98989cf6
gpt2 bpe tokenizer (handles merges and unicode)
2023-08-04 03:58:44 +02:00
klosax
e6f19ba240
gptneox-main.cpp : gpt2 bpe tokenizer
2023-08-04 03:56:37 +02:00
klosax
2922280a1a
convert-gptneox-h5-to-gguf.py : gpt2bpe tokenizer
2023-08-04 03:55:23 +02:00
klosax
6691aa8797
Delete gptneox-common.h
2023-08-04 03:52:01 +02:00
klosax
23abbe8e00
Delete gptneox-common.cpp
2023-08-04 03:51:43 +02:00
Evan Jones
8183159cf3
examples : generate JSON according to schema (#1887)
* examples : add JSON schema grammars
* complete JSON grammar
* ensure primitive types can be used as root of schema
* support integer type and adjust usage text
2023-08-02 22:05:44 -04:00
Johannes Gäßler
468ea24fb4
CUDA: faster non k-quant mul_mat_q kernels (#2483)
2023-08-02 18:04:04 +02:00
Johannes Gäßler
4f6b60c776
CUDA: Fix models with output size != 32000 (#2480)
2023-08-02 16:48:10 +02:00
klosax
c5ba5efda2
convert-llama-h5-to-gguf.py : special tokens
2023-08-02 11:26:07 +02:00
klosax
e1e9b28547
convert-llama-h5-to-gguf.py : accumulate kv / ti + special tokens
2023-08-02 11:15:33 +02:00
ldwang
220d931864
readme : add Aquila-7B model series to supported models (#2487)
* support bpe tokenizer in convert
* support bpe tokenizer in convert, fix
* Add Aquila-7B models in README.md
* Up Aquila-7B models in README.md
---------
Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
2023-08-02 11:21:11 +03:00
M. Yusuf Sarıgöz
c3a65c4bbe
gguf-util.h : update note
2023-08-02 11:16:23 +03:00
M. Yusuf Sarıgöz
cf365fbc20
gguf : gguf counterpart of llama-util.h
2023-08-02 11:13:56 +03:00
Eve
81844fbcfd
tests : Fix compilation warnings (Linux/GCC) (#2451)
* fix hellaswag print format, cast away warning in test-double-float
* c++11 cannot use designated initializers
* add static to test-grad0.c internal functions
* use memcpy in test-double-float.c
* port c tests to c++
* use initializer list for ggml_init_params
2023-08-02 11:06:19 +03:00
Yiming Cui
a312193e18
readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)
* add support for chinese llama-2 / alpaca-2
* remove white spaces
2023-08-02 09:18:31 +03:00
klosax
1b4f9c8eb9
convert-gptneox-h5-to-gguf.py : accumulate kv and ti + special tokens
2023-08-01 23:40:50 +02:00
klosax
49380a23a3
gguf.py : accumulate kv and tensor info data + special tokens
2023-08-01 23:37:48 +02:00
klosax
ff1cb02397
constants.py : special tokens
2023-08-01 23:17:21 +02:00
Bono Lv
c574bddb36
fix a typo in examples/server/README.md (#2478)
2023-08-01 14:54:28 +02:00
klosax
36a36c32a3
Update gptneox-main.cpp
2023-08-01 14:44:28 +02:00
klosax
c77fabb1f9
gptneox-main.cpp : special tokens
2023-08-01 14:32:53 +02:00
klosax
e7a741695c
convert-gptneox-h5-to-gguf.py : Special tokens
2023-08-01 14:30:00 +02:00
ebraminio
86aeb27734
server : Support dark mode (#2414)
* server : Support dark mode
So it respects user system light / dark settings.
* Update index.html.hpp by running ./deps.sh
2023-08-01 10:56:23 +02:00
Matteo Boschini
1873ff586b
metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)
* Added gqa8 kernel to allow llama-2-70B on metal
* Update ggml-metal.m
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
* Extend kernel_mul_mat_f16_f32 to handle gqa broadcast
* Added ne03==ne13 assertion
---------
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-08-01 10:43:12 +03:00
klosax
da4900e835
Update convert-llama-h5-to-gguf.py
2023-07-31 23:04:03 +02:00
M. Yusuf Sarıgöz
f3de876a12
fix : update convert-llama-h5-to-gguf.py
2023-07-31 23:58:29 +03:00
Johannes Gäßler
49e7cb5bb1
CUDA: fixed LLAMA_FAST compilation option (#2473)
2023-07-31 21:02:19 +02:00
Johannes Gäßler
b772bba42e
CUDA: fixed cmake F16 option (#2471)
2023-07-31 19:52:22 +02:00
M. Yusuf Sarıgöz
bb42aefaeb
gguf : mmap tensor data example
2023-07-31 17:46:12 +03:00
Johannes Gäßler
0728c5a8b9
CUDA: mmq CLI option, fixed mmq build issues (#2453)
2023-07-31 15:44:35 +02:00
M. Yusuf Sarıgöz
b26f5b2e43
gguf : fix typo in function call
2023-07-31 16:23:54 +03:00
Johannes Gäßler
1215ed7d5c
CUDA: Implemented row flattening for non-glm RoPE (#2468)
2023-07-31 14:32:30 +02:00
Johannes Gäßler
2dbf518911
CUDA: fewer memory bank conflicts for mul_mat_q (#2458)
2023-07-31 13:18:51 +02:00
slaren
9d2382b3e4
Fix Metal backend broken from the allocator changes (#2455)
* fix Metal backend broken from the allocator changes
2023-07-31 11:02:53 +02:00
M. Yusuf Sarıgöz
7aa0a0e7f7
gguf : support custom alignment value
2023-07-31 09:59:36 +03:00
klosax
6b3a7b9f4f
Update convert-llama-h5-to-gguf.py
2023-07-31 03:02:00 +02:00
klosax
4f5b6224be
Update convert-gptneox-h5-to-gguf.py
2023-07-31 03:00:20 +02:00
klosax
2a0914673c
Update convert-gptneox-h5-to-gguf.py
2023-07-30 17:31:11 +02:00
klosax
068a8e0fbe
Update convert-llama-h5-to-gguf.py
2023-07-30 17:29:56 +02:00
klosax
30c4ea47e6
add gptneox gguf example
2023-07-30 16:59:26 +02:00
klosax
2fabc176ce
Update convert-llama-h5-to-gguf.py
2023-07-30 16:28:08 +02:00
slaren
a113689571
ggml : add graph tensor allocator (#2411)
* ggml : add graph tensor allocator
* ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset
* ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
2023-07-30 15:58:01 +02:00
klosax
f175b05872
Makefile : add gptneox gguf example
2023-07-30 15:08:37 +02:00
klosax
e9192b0135
add gptneox gguf example
2023-07-30 15:05:37 +02:00
klosax
4ed98bf1ab
Update convert-llama-h5-to-gguf.py
2023-07-30 15:01:47 +02:00
klosax
b19c11750b
ggml.c : add gguf_get_arr_n
2023-07-30 14:58:50 +02:00
klosax
b4676ee447
ggml.h : increase GGML_MAX_NAME to 64
2023-07-30 14:51:37 +02:00
klosax
ccd81a751b
gguf.py : add layer norm eps and merges
2023-07-30 14:48:14 +02:00