Commit Graph

1260 Commits

Author SHA1 Message Date
klosax
17800cd80f
convert-llama-h5-to-gguf.py : load model in parts to save memory 2023-08-13 12:20:02 +02:00
klosax
e3d1f07eb1
convert-gptneox-h5-to-gguf.py : load model in parts to save memory 2023-08-13 12:18:34 +02:00
klosax
9bf5a7efcb
Update gguf_tensor_map.py 2023-08-13 01:27:38 +02:00
Johannes Gäßler
f64d44a9b9
CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time (#2590) 2023-08-13 00:24:45 +02:00
klosax
c7bd8c147c
gptneox-main.cpp : n_layer --> n_block 2023-08-13 00:03:32 +02:00
klosax
e91a2224e4
convert-llama-h5-to-gguf.py : n_layer --> n_block 2023-08-13 00:02:44 +02:00
klosax
489616e126
convert-gptneox-h5-to-gguf.py : n_layer --> n_block 2023-08-13 00:02:04 +02:00
klosax
d2ce9cfe8d
gguf.py : n_layer --> n_block 2023-08-13 00:01:20 +02:00
klosax
8b5f0c5067
constants.py : n_layer --> n_block 2023-08-13 00:00:32 +02:00
klosax
5e58ffa1ed
gptneox-main.cpp : n_layer --> n_block 2023-08-12 23:50:58 +02:00
klosax
e606ffeaee
convert-llama-h5-to-gguf.py : simplify nbytes 2023-08-12 22:30:35 +02:00
klosax
f8218477b3
convert-gptneox-h5-to-gguf.py : simplify nbytes 2023-08-12 22:29:35 +02:00
klosax
4cef57c81a
convert-llama-h5-to-gguf.py : no need to convert tensors twice 2023-08-12 21:50:24 +02:00
klosax
8f09157ec9
convert-gptneox-h5-to-gguf.py : no need to convert tensors twice 2023-08-12 21:48:58 +02:00
klosax
5d81a715d4
gguf.py : no need to convert tensors twice 2023-08-12 21:45:45 +02:00
M. Yusuf Sarıgöz
60d540831b
gguf : proper closing of file 2023-08-12 21:42:31 +03:00
M. Yusuf Sarıgöz
202eab04d3
gguf : quantization is working 2023-08-12 16:39:05 +03:00
M. Yusuf Sarıgöz
1fc3d30b71
gguf : start implementing quantization (WIP) 2023-08-12 16:09:47 +03:00
M. Yusuf Sarıgöz
fa7c39540c
gguf : start implementing quantization (WIP) 2023-08-12 15:55:58 +03:00
M. Yusuf Sarıgöz
b2571af255
gguf : start implementing quantization (WIP) 2023-08-12 14:28:17 +03:00
M. Yusuf Sarıgöz
c4f02b4f74
gguf : start implementing quantization (WIP) 2023-08-12 12:01:17 +03:00
M. Yusuf Sarıgöz
0e1a3c7e7d
gguf : start implementing quantization (WIP) 2023-08-12 11:32:34 +03:00
M. Yusuf Sarıgöz
4fa017a1f9
gguf : start implementing quantization (WIP) 2023-08-12 10:40:56 +03:00
M. Yusuf Sarıgöz
186c496fdf
Merge branch 'gguf' of https://github.com/ggerganov/llama.cpp into gguf 2023-08-12 07:25:10 +03:00
M. Yusuf Sarıgöz
2f52008b20
gguf : rm references to old file magics 2023-08-12 07:24:46 +03:00
byte-6174
b19edd54d5
Adding support for llama2.c models (#2559) 2023-08-12 01:17:25 +02:00
Equim
53dc399472
server: fixed wrong variable name in timing json (#2579)
* server: fixed wrong variable name in timing json

* remove redundant entry
2023-08-12 00:35:14 +02:00
klosax
e76c59d524
Update gptneox-main.cpp 2023-08-11 23:09:49 +02:00
klosax
2a5ac7af44
Update gguf_tensor_map.py 2023-08-11 23:08:48 +02:00
M. Yusuf Sarıgöz
e732423280
gguf : get rid of n_mult, read n_ff from file 2023-08-11 23:50:38 +03:00
M. Yusuf Sarıgöz
f44bbd3d88
gguf : rm redundant method 2023-08-11 21:00:51 +03:00
M. Yusuf Sarıgöz
7009cf581c
gguf : shorter name for member variable 2023-08-11 20:43:02 +03:00
M. Yusuf Sarıgöz
61919c1a8f
gguf : rm references to old file formats 2023-08-11 20:36:11 +03:00
M. Yusuf Sarıgöz
d09fd10713
gguf : write metadata in gguf_file_saver 2023-08-11 20:07:43 +03:00
M. Yusuf Sarıgöz
781b9ec3f5
gguf : write metadata in gguf_file_saver (WIP) 2023-08-11 18:01:26 +03:00
M. Yusuf Sarıgöz
28abfc90fa
gguf : write metadata in gguf_file_saver (WIP) 2023-08-11 13:27:58 +03:00
M. Yusuf Sarıgöz
e3a4960953
gguf : add gguf_get_kv_type 2023-08-11 13:03:23 +03:00
M. Yusuf Sarıgöz
eb8ca6996f
gguf : add gguf_get_kv_type 2023-08-11 12:24:08 +03:00
M. Yusuf Sarıgöz
b2440f1943
gguf : start implementing gguf_file_saver (WIP) 2023-08-11 11:29:50 +03:00
M. Yusuf Sarıgöz
a356b0e228
gguf : start implementing gguf_file_saver (WIP) 2023-08-11 10:50:02 +03:00
M. Yusuf Sarıgöz
e7d346c37c
gguf : start implementing gguf_file_saver (WIP) 2023-08-11 09:52:01 +03:00
DannyDaemonic
9ca4abed89
Handle ENABLE_VIRTUAL_TERMINAL_PROCESSING more gracefully on earlier versions of Windows. 2023-08-10 13:11:36 -07:00
M. Yusuf Sarıgöz
f316b94c7c
gguf : rm deprecated function 2023-08-10 20:20:22 +03:00
M. Yusuf Sarıgöz
cfb8e35b73
gguf : inference with 7B model working (WIP) 2023-08-10 19:56:56 +03:00
M. Yusuf Sarıgöz
42cc04d11d
gguf : calculate n_mult 2023-08-10 18:49:08 +03:00
M. Yusuf Sarıgöz
22de6c5c4c
upd .gitignore 2023-08-10 18:09:49 +03:00
M. Yusuf Sarıgöz
4c0f64e302
rm binary committed by mistake 2023-08-10 18:07:41 +03:00
M. Yusuf Sarıgöz
4f865181aa
gguf : start implementing libllama in GGUF (WIP) 2023-08-10 17:49:31 +03:00
Christian Demsar
e59fcb2bc1
Add --n-predict -2 for stopping generation on full context (#2565) 2023-08-10 16:28:27 +02:00
M. Yusuf Sarıgöz
1c4d8bf981
gguf : start implementing libllama in GGUF (WIP) 2023-08-10 16:52:08 +03:00