Commit Graph

36 Commits

Author SHA1 Message Date
M. Yusuf Sarıgöz
22c61c5b45 gguf : style fixes in simple conversion script 2023-08-17 19:05:43 +03:00
M. Yusuf Sarıgöz
5f97a48fc1 gguf : single pass for writing tensors + refactoring writer 2023-08-17 16:57:50 +03:00
M. Yusuf Sarıgöz
dce07c3121 gguf : single pass for writing tensors + refactoring writer 2023-08-17 16:48:49 +03:00
M. Yusuf Sarıgöz
f31e9230ad gguf : single pass for writing tensors + refactoring writer 2023-08-17 15:19:30 +03:00
Georgi Gerganov
c8ee87f141 gguf.py : merge all files in gguf.py 2023-08-16 19:55:49 +03:00
Georgi Gerganov
88b5769487 gguf : deduplicate (#2629)
* gguf : better type names

* dedup : CPU + Metal is working

* ggml : fix warnings about unused results

* llama.cpp : fix line feed and compiler warning

* llama : fix strncpy warning + note token_to_str does not write null

* llama : restore the original load/save session implementation

Will migrate this to GGUF in the future

* convert-llama-h5-to-gguf.py : support alt ctx param name

* ggml : assert when using ggml_mul with non-F32 src1

* examples : dedup simple

---------

Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
2023-08-16 19:25:29 +03:00
Georgi Gerganov
758ff1bbb5 llama : refactor model loading code (#2620)
* llama : style formatting + remove helper methods

* llama : fix quantization using gguf tool

* llama : simplify gguf_file_saver

* llama : fix method names

* llama : simplify write_header()

* llama : no need to pass full file loader to the file saver

just gguf_ctx

* llama : gguf_file_saver write I32

* llama : refactor tensor names (#2622)

* gguf: update tensor names searched in quantization

* gguf : define tensor names as constants

* gguf : initial write API (not tested yet)

* gguf : write to file API (not tested)

* gguf : initial write API ready + example

* gguf : fix header write

* gguf : fixes + simplify example + add ggml_nbytes_pad()

* gguf : minor

* llama : replace gguf_file_saver with new gguf write API

* gguf : streaming support when writing files

* gguf : remove obsolete write methods

* gguf : remove obsolete gguf_get_arr_xxx API

* llama : simplify gguf_file_loader

* llama : move hparams and vocab from gguf_file_loader to llama_model_loader

* llama : merge gguf-util.h in llama.cpp

* llama : reorder definitions in .cpp to match .h

* llama : minor simplifications

* llama : refactor llama_model_loader (WIP)

wip : remove ggml_ctx from llama_model_loader

wip : merge gguf_file_loader in llama_model_loader

* llama : fix shape prints

* llama : fix Windows build + fix norm_rms_eps key

* llama : throw error on missing KV pairs in model meta data

* llama : improve printing + log meta data

* llama : switch print order of meta data

---------

Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
2023-08-16 14:34:03 +03:00
klosax
ea5615a03a convert-llama-h5-to-gguf.py : clarify the reverse permute 2023-08-16 11:23:15 +02:00
klosax
66756c82af convert-llama-h5-to-gguf.py : add tensor data layout 2023-08-15 19:54:33 +02:00
klosax
2dd5d2c92c convert-llama-h5-to-gguf.py : add 70b gqa support 2023-08-15 00:43:10 +02:00
klosax
7ec125b1dc convert-llama-h5-to-gguf.py : add token types 2023-08-14 22:06:33 +02:00
Georgi Gerganov
7494c78428 llama : sync gguf-llama with llama (#2613)
* llama : sync gguf-llama with llama

* tests : fix build + warnings (test-tokenizer-1 still fails)

* tests : fix wstring_convert

* convert : fix layer names

* llama : sync gguf-llama.cpp

* convert : update HF converter to new tokenizer voodoo magic
2023-08-14 21:33:33 +03:00
Georgi Gerganov
0c19ae70d5 simple : minor style changes 2023-08-14 12:58:12 +03:00
klosax
a7d226f871 convert-llama-h5-to-gguf.py : fixes 2023-08-14 11:14:24 +02:00
M. Yusuf Sarıgöz
24f48833ab fix conflicts 2023-08-13 16:55:42 +03:00
M. Yusuf Sarıgöz
bf2dad3100 convert : rm quantization version 2023-08-13 14:38:53 +03:00
M. Yusuf Sarıgöz
1d60468eee fix conflicts 2023-08-13 13:35:40 +03:00
M. Yusuf Sarıgöz
91d4bfd536 convert : write more metadata for LLaMA 2023-08-13 13:29:46 +03:00
klosax
17800cd80f convert-llama-h5-to-gguf.py : load model in parts to save memory 2023-08-13 12:20:02 +02:00
klosax
e91a2224e4 convert-llama-h5-to-gguf.py : n_layer --> n_block 2023-08-13 00:02:44 +02:00
klosax
e606ffeaee convert-llama-h5-to-gguf.py : simplify nbytes 2023-08-12 22:30:35 +02:00
klosax
4cef57c81a convert-llama-h5-to-gguf.py : no need to convert tensors twice 2023-08-12 21:50:24 +02:00
klosax
7d5f4522dd convert-llama-h5-to-gguf.py : map tensor names 2023-08-09 00:52:16 +02:00
klosax
c5ba5efda2 convert-llama-h5-to-gguf.py : special tokens 2023-08-02 11:26:07 +02:00
klosax
e1e9b28547 convert-llama-h5-to-gguf.py : accumulate kv / ti + special tokens 2023-08-02 11:15:33 +02:00
klosax
da4900e835 Update convert-llama-h5-to-gguf.py 2023-07-31 23:04:03 +02:00
M. Yusuf Sarıgöz
f3de876a12 fix : update convert-llama-h5-to-gguf.py 2023-07-31 23:58:29 +03:00
klosax
6b3a7b9f4f Update convert-llama-h5-to-gguf.py 2023-07-31 03:02:00 +02:00
klosax
068a8e0fbe Update convert-llama-h5-to-gguf.py 2023-07-30 17:29:56 +02:00
klosax
2fabc176ce Update convert-llama-h5-to-gguf.py 2023-07-30 16:28:08 +02:00
klosax
4ed98bf1ab Update convert-llama-h5-to-gguf.py 2023-07-30 15:01:47 +02:00
M. Yusuf Sarıgöz
87c34e4dd4 gguf : update convert-llama-h5-to-gguf.py 2023-07-30 01:09:22 +03:00
klosax
06c3e4a1a7 Update convert-llama-h5-to-gguf.py 2023-07-29 21:38:01 +02:00
klosax
8ad7cd49fb Update convert-llama-h5-to-gguf.py 2023-07-29 16:47:00 +02:00
M. Yusuf Sarıgöz
0317c41d98 gguf : upd gguf conversion script 2023-07-29 13:31:07 +03:00
klosax
999431c4b6 quick and dirty conversion example 2023-07-29 11:20:05 +02:00