Commit Graph

4004 Commits

Author SHA1 Message Date
Xuan Son Nguyen
7bb36ccf91
gguf : enforce that tensor names are unique (#6905)
* do not allow adding duplicated tensor names

* no duplicated tensors while reading gguf

* typo

* throw exception inside llama_model_loader

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-28 17:36:18 +02:00
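To illustrate the uniqueness check this commit introduces, here is a minimal C++ sketch; `tensor_info` and `add_tensor` are hypothetical names, not the actual gguf API:

```cpp
#include <stdexcept>
#include <string>
#include <unordered_set>
#include <vector>

// Sketch: reject duplicate tensor names while collecting tensor metadata,
// mirroring the check this commit adds on both the write and read paths.
struct tensor_info {
    std::string name;
};

void add_tensor(std::vector<tensor_info> & tensors,
                std::unordered_set<std::string> & seen,
                const std::string & name) {
    // insert() returns {iterator, false} when the name is already present
    if (!seen.insert(name).second) {
        throw std::runtime_error("duplicated tensor name: " + name);
    }
    tensors.push_back({name});
}
```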
Neo Zhang
ce023f6f2f
add device version in device list (#6959)
Co-authored-by: arthw <>
2024-04-28 22:40:31 +08:00
github-actions[bot]
6e472f58e4 flake.lock: Update
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv+l0FhvfDMiyCmhoRbNB+0SeInZkbk=' (2024-04-19)
  → 'github:NixOS/nixpkgs/7bb2ccd8cdc44c91edba16c48d2c8f331fb3d856?narHash=sha256-Drmja/f5MRHZCskS6mvzFqxEaZMeciScCTFxWVLqWEY=' (2024-04-25)
2024-04-28 11:12:50 +00:00
mgroeber9110
4dba7e8114
Replace "alternative" boolean operator in conditional compilation directive (#6949) 2024-04-27 21:02:06 +02:00
Pierrick Hymbert
b7368332e2
ci: server: tests python env on github container ubuntu latest / fix n_predict (#6935)
* ci: server: fix python env

* ci: server: fix server tests after #6638

* ci: server: fix Windows not building the PR branch
2024-04-27 17:50:48 +02:00
agray3
928e0b7013
Reset schedule earlier to allow overlap with ggml graph computation on device (#6933)
* Reset schedule earlier to allow overlap with graph computation on device
2024-04-26 20:08:30 +02:00
Pierrick Hymbert
0c4d489e29
quantize: add imatrix and dataset metadata in GGUF (#6658)
* imatrix: save the dataset file used in the output file

* llama: support kv overrides of type string

* common: factorize KV Overrides parsing between common and server

* quantize: add imatrix n entries and dataset KV metadata
quantize: factorize KV Overrides parsing between common and server (#6656)

* llama: remove kv override str_value initialization as it does not compile on some toolchain

* quantize: add imatrix m_last_call as `quantize.imatrix.chunks_count`

* quantize: add imatrix filename in KV

* llama: add llama_model_kv_override_free

* common: add llama_model_kv_override_free
common: free kv override if used after model loading

* llama: finally move the string KV override value to the stack

* llama : minor

* no need to add a NUL to the std::vector; std::string can be initialized from a pair of iterators.

Co-authored-by: slaren <slarengh@gmail.com>

* kv override: ensure string termination

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
2024-04-26 20:06:33 +02:00
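The last two bullets concern making string overrides safe; here is a minimal sketch of a stack-allocated string KV override with guaranteed termination. The `kv_override` struct and its buffer sizes are hypothetical, not the actual llama.cpp layout:

```cpp
#include <cstring>

// Sketch: a string KV override held on the stack with guaranteed NUL
// termination, in the spirit of the commit's last two fixes.
struct kv_override {
    char key[128];
    char val_str[128];
};

void set_str_override(kv_override & ov, const char * key, const char * val) {
    strncpy(ov.key, key, sizeof(ov.key) - 1);
    ov.key[sizeof(ov.key) - 1] = '\0'; // ensure string termination
    strncpy(ov.val_str, val, sizeof(ov.val_str) - 1);
    ov.val_str[sizeof(ov.val_str) - 1] = '\0';
}
```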
slaren
017e6999b5
add basic tensor data validation function (#6884)
* add basic tensor data validation function

* add --check-tensors command line argument

tensor validation is disabled by default and can be enabled by adding
`--check-tensors` to the command line arguments.

quantize always validates tensors.
2024-04-26 18:39:58 +02:00
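For a sense of what "basic tensor data validation" means, here is a minimal sketch that checks an F32 buffer for non-finite values; the real function also covers quantized types, and `validate_f32_data` is a hypothetical name:

```cpp
#include <cmath>
#include <cstddef>

// Sketch: report whether every value in a float buffer is finite.
// A NaN or +/-Inf strongly suggests corrupted tensor data.
bool validate_f32_data(const float * data, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (!std::isfinite(data[i])) {
            return false;
        }
    }
    return true;
}
```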
slaren
e2764cd7ca
gguf : fix mismatch between alloc and free functions (#6929) 2024-04-26 18:07:42 +03:00
Justine Tunney
4b1c3c98b4
llamafile : use 64-bit integers in sgemm (#6928) 2024-04-26 17:05:33 +03:00
Pierrick Hymbert
bbe3c6e761
ci: server: fix python installation (#6925) 2024-04-26 12:27:25 +02:00
Pierrick Hymbert
7f5ff558ee
server: stop generation at n_ctx_train if n_predict is not set (#6638)
* server: cap n_predict to n_ctx_train if not set

* server: fix infinite loop

* server: infinite loop, move in process_token
server: infinite loop: set stop limit to true

* minor: spaces

* minor: spaces

* server: include prompt tokens in the EOS limit
2024-04-26 12:15:30 +02:00
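The capping logic can be summarized with a small sketch; `effective_n_predict` is a hypothetical helper, and the real EOS limit's treatment of prompt tokens is omitted:

```cpp
#include <cstdint>

// Sketch: when the client leaves n_predict unset (commonly -1 meaning
// "infinite"), cap generation at the model's training context size so a
// request cannot loop forever.
int32_t effective_n_predict(int32_t n_predict, int32_t n_ctx_train) {
    if (n_predict < 0 || n_predict > n_ctx_train) {
        return n_ctx_train;
    }
    return n_predict;
}
```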
Pierrick Hymbert
9e4e077ec5
ci: server: fix python installation (#6922) 2024-04-26 11:11:51 +02:00
Georgi Gerganov
83b72cb086
Merge pull request from GHSA-p5mv-gjc5-mwqv
* always use calloc

clamp n_kv on failure to read a kv

* ggml : alternative ctx->header.n_kv update

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-26 10:41:53 +03:00
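The hardening pattern described here can be sketched as follows; `gguf_kv_example` is a placeholder, not the real gguf struct:

```cpp
#include <cstdint>
#include <cstdlib>

struct gguf_kv_example { uint64_t key_len; /* ... */ };

int main() {
    uint64_t n_kv = 1000; // count read from an untrusted file header
    // calloc zero-initializes, so a partially-read array holds no garbage
    gguf_kv_example * kv = (gguf_kv_example *) std::calloc(n_kv, sizeof(*kv));
    if (!kv) return 1;
    for (uint64_t i = 0; i < n_kv; i++) {
        bool ok = false; // stands in for reading kv[i] from the file
        if (!ok) {
            n_kv = i; // clamp: only the first i entries are valid
            break;
        }
    }
    std::free(kv);
    return 0;
}
```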
Pierrick Hymbert
d4a9afc100
ci: server: fix python installation (#6918) 2024-04-26 09:27:49 +02:00
Pierrick Hymbert
7d641c26ac
ci: fix concurrency for pull_request_target (#6917) 2024-04-26 09:26:59 +02:00
Pierrick Hymbert
5790c8dac1
bench: server add stop word for PHI-2 (#6916) 2024-04-26 09:26:16 +02:00
vik
46e12c4692
llava : add support for moondream vision language model (#6899)
* add support for moondream vision language model

This required making the following changes to the CLIP model:

1. Support for patch embedding bias.
2. Make class embedding and pre-layernorm optional.
3. Add support for post-layernorm.

* Update examples/llava/clip.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-25 22:38:31 +03:00
Georgi Gerganov
dba497e0c1
cmake : restore LLAMA_LLAMAFILE_DEFAULT 2024-04-25 21:37:27 +03:00
Georgi Gerganov
fa0b4ad252
cmake : remove obsolete ANDROID check 2024-04-25 18:59:51 +03:00
slaren
d6e1d44f16
llama : synchronize before get/set session data (#6911) 2024-04-25 17:59:03 +02:00
Georgi Gerganov
853d06ffe2
ci : tmp disable slow tests 2024-04-25 17:06:27 +03:00
BarfingLemurs
3fe0596c18
readme : update model list (#6908)
* Update README.md

* missing space

* llama3 !
2024-04-25 16:52:28 +03:00
slaren
0ead1f1072
llama : check that all the tensor data is in the model file (#6885)
* llama : check that all the tensor data is in the model file

* also check for unsigned overflow
2024-04-25 15:23:47 +02:00
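The overflow-aware bounds check can be illustrated with a short sketch; `tensor_in_file` is a hypothetical helper:

```cpp
#include <cstdint>

// Sketch: verify a tensor's data range lies inside the file, guarding
// against unsigned wrap-around when offs + size overflows.
bool tensor_in_file(uint64_t offs, uint64_t size, uint64_t file_size) {
    if (offs + size < offs) {
        return false; // unsigned overflow
    }
    return offs + size <= file_size;
}
```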
Georgi Gerganov
51543729ff
ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906) 2024-04-25 15:48:25 +03:00
Daniel Bevenius
4ab99d8d47
clip : rename lerp function to avoid conflict (#6894)
This commit renames the lerp (linear interpolation) function in clip.cpp
to avoid a conflict with the lerp function in the <cmath> standard C++
library when using C++20.

The motivation for this change is to enable projects that use C++20 to
compile clip.cpp without having to resort to patching it. The lerp
function was added to <cmath> in C++20 (202002L), which is why this is
not causing any issue at the moment, as llama.cpp currently uses
C++11/C++17.

I realize that llama.cpp uses either C++11 (or C++17 in the case of
SYCL), but wanted to ask if this would be an acceptable change just the
same.

Refs: https://en.cppreference.com/w/cpp/numeric/lerp

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-25 15:38:14 +03:00
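The conflict is easy to reproduce; here is a minimal example (not taken from clip.cpp) of why an unqualified lerp breaks under C++20:

```cpp
// Compile with -std=c++20: std::lerp from <cmath> collides with a
// file-local lerp once a using-directive makes std names visible.
#include <cmath>
using namespace std;

static float lerp(float a, float b, float t) {
    return a + t * (b - a);
}

int main() {
    // error: call to 'lerp' is ambiguous (::lerp vs std::lerp)
    // float x = lerp(0.0f, 1.0f, 0.5f);
    float x = ::lerp(0.0f, 1.0f, 0.5f); // qualifying (or renaming) resolves it
    return x > 0.0f ? 0 : 1;
}
```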
Georgi Gerganov
54770413c4
ggml : fix MIN / MAX macros (#6904)
ggml-ci
2024-04-25 15:12:28 +03:00
Georgi Gerganov
aa750c1ede
tests : minor bash stuff (#6902)
* tests : minor bash stuff

ggml-ci

* llama : fix build

ggml-ci

* tests : fix CUR_DIR -> ROOT_DIR

ggml-ci

* tests : fix fname

ggml-ci
2024-04-25 14:27:20 +03:00
jiez
1966eb2615
quantize : add '--keep-split' to quantize model into shards (#6688)
* Implement '--keep-split' to quantize model into several shards

* Add test script

* Update examples/quantize/quantize.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Split model correctly even if tensor id is out-of-order

* Update llama_model_quantize_params

* Fix preci failures

---------

Co-authored-by: z5269887 <z5269887@unsw.edu.au>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-25 13:29:35 +03:00
Johannes Gäßler
784e11dea1
README: add graphic for matrix multiplication (#6881) 2024-04-24 21:29:13 +02:00
Douglas Hanley
b4e4b8a935
llama : add llama_get_pooling_type function (#6862)
* add llama_get_pooling_type function

* fix argument name, move with ctx funcs
2024-04-24 16:10:07 +03:00
mgroeber9110
3fe847b574
server : do not apply Markdown formatting in code sections (#6850) 2024-04-24 13:54:24 +03:00
Kyle Mistele
37246b1031
common : revert showing control tokens by default for server (#6860)
* fix: revert showing control tokens by default

* feat: revert changes to default behavior of llama_token_to_piece; provide overridden declaration to receive "bool special" param to toggle showing control tokens

* feat: use the overridden declaration of llama_token_to_piece from common/common.cpp to specify "false" so that control tokens are not shown in chat completion responses

* common : simplify

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 13:15:29 +03:00
Johannes Gäßler
28103f4832
Server: fix seed for multiple slots (#6835)
* Server: add tests for consistent results

* sampling: separate rng per sampling context
2024-04-24 11:08:36 +02:00
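The per-context RNG idea in the second bullet can be sketched like this; `sampling_context` here is a toy stand-in for the real sampling state:

```cpp
#include <cstdint>
#include <random>

// Sketch: each server slot owns its own generator, so seeding or drawing
// from one slot cannot perturb the sampling sequence of another.
struct sampling_context {
    std::mt19937 rng;
    explicit sampling_context(uint32_t seed) : rng(seed) {}
};

int main() {
    sampling_context slot0(42);
    sampling_context slot1(42);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    // identical seeds yield identical, reproducible draws per slot
    float a = dist(slot0.rng);
    float b = dist(slot1.rng);
    return a == b ? 0 : 1;
}
```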
Georgi Gerganov
c0d1b3e03e
ggml : move 32-bit arm compat in ggml-impl.h (#6865)
ggml-ci
2024-04-24 12:00:07 +03:00
Tristan Druyen
abd3314064
llama : add phi 3 chat template (#6857)
* Add phi 3 chat template & tests

* test : fix chat template result

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 11:52:37 +03:00
Junyang Lin
3fec68be4e
convert : add support of codeqwen due to tokenizer (#6707)
* add support of codeqwen due to tokenizer

* override load_hparams

* fix typo

* fix load_params

* convert : fix whitespace

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 10:16:21 +03:00
liuwei-git
c8297c6af5
llama : add phi3 support (#6852)
* add explicit phi3 support

* add explicit phi3 support

* remove unused code

* convert : add BOS token

* llama : match EOT token <|end|>

* llama : minor / style

* llama : tabs -> spaces

* convert : fix lint checks

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 10:00:37 +03:00
Anas Ahouzi
4e96a812b3
[SYCL] Windows default build instructions without the -DLLAMA_SYCL_F16 flag activated (#6767)
* Fix FP32/FP16 build instructions

* Fix typo

* Recommended build instruction

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>

* Recommended build instruction

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>

* Recommended build instruction

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>

* Add comments in Intel GPU linux

---------

Co-authored-by: Anas Ahouzi <112881240+aahouzi-intel@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2024-04-23 08:53:18 +08:00
Justine Tunney
192090bae4
llamafile : improve sgemm.cpp (#6796)
* llamafile : improve sgemm.cpp

- Re-enable by default
- Fix issue described in #6716
- Make code more abstract, elegant, and maintainable
- Faster handling of weirdly shaped `m` and `n` edge cases

* Address review comments

* Help clang produce fma instructions

* Address review comments
2024-04-22 22:00:36 +03:00
Dave Airlie
e931888d50
ggml : fix calloc argument ordering. (#6820)
Latest gcc complains here:
/home/airlied/devel/llama.cpp/ggml-alloc.c: In function ‘ggml_gallocr_new_n’:
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
  374 |     ggml_gallocr_t galloc = (ggml_gallocr_t)calloc(sizeof(struct ggml_gallocr), 1);
      |                                                           ^~~~~~
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: note: earlier argument should specify number of elements, later size of each element

and a bunch more.

calloc is specified to take nmemb first, then size, so realign the code.

In a couple of places there was a `* x, 1`, so I fixed those to use calloc properly.
2024-04-22 16:05:06 +02:00
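The corrected calling convention, as a minimal compilable sketch (`ggml_example` is a placeholder struct, not the real ggml type):

```cpp
#include <cstdlib>

struct ggml_example { int dummy; };

int main() {
    // calloc(nmemb, size): element count first, element size second.
    // Transposed arguments allocate the same number of bytes, but newer
    // gcc flags them with -Wcalloc-transposed-args.
    ggml_example * ok = (ggml_example *) std::calloc(1, sizeof(ggml_example));
    // ggml_example * bad = (ggml_example *) std::calloc(sizeof(ggml_example), 1); // warns
    std::free(ok);
    return 0;
}
```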
Georgi Gerganov
8960fe86ae
llama : fix typo in <|im_end|> token text (#6745) 2024-04-22 15:41:11 +03:00
Pierrick Hymbert
c0956b09ba
ci: fix job are cancelling each other (#6781) 2024-04-22 13:22:54 +02:00
github-actions[bot]
e9b4a1bf68 flake.lock: Update
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/1042fd8b148a9105f3c0aca3a6177fd1d9360ba5?narHash=sha256-3sbWO1mbpWsLepZGbWaMovSO7ndZeFqDSdX0hZ9nVyw=' (2024-04-10)
  → 'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv+l0FhvfDMiyCmhoRbNB+0SeInZkbk=' (2024-04-19)
2024-04-22 10:42:43 +00:00
Olivier Chafik
5cf5e7d490
build: generate hex dump of server assets during build (#6661)
* `build`: generate hex dumps of server assets on the fly

* build: workaround lack of -n on gnu xxd

* build: don't use xxd in cmake

* build: don't call xxd from build.zig

* build: more idiomatic hexing

* build: don't use xxd in Makefile (od hackery instead)

* build: avoid exceeding max cmd line limit in makefile hex dump

* build: hex dump assets at cmake build time (not config time)
2024-04-21 18:48:53 +01:00
Georgi Gerganov
40f74e4d73
llama : add option to render special/control tokens (#6807)
* make : fix common dep on llama.h

* llama : add option to render special tokens

* readme : add API change notice

ggml-ci

* swift : fix build
2024-04-21 18:36:45 +03:00
Georgi Gerganov
b9cc76d87e
ggml : fix ggml_backend_cpu_supports_op() for CPY (#0) 2024-04-21 16:48:50 +03:00
Wouter
7dbdba5690
llama : add llama-3 chat template (#6751)
* Added llama-3 chat template

* Update llama.cpp

Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>

* Update llama.cpp

Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>

* Update tests/test-chat-template.cpp

Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>

* Added EOS stop sequence according to https://github.com/ggerganov/llama.cpp/pull/6751#issuecomment-2065602862

* Removed adding of BOS token before first message

* Removed bos token from expected output from llama-3

* Update tests/test-chat-template.cpp

Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>

* Update tests/test-chat-template.cpp

Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>

* Added <|end_of_text|> as another stop token

* Reverted last change of adding the end_of_text stop word for llama 3

---------

Co-authored-by: Wouter Tichelaar <tichelaarw@spar.net>
Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-21 16:03:39 +03:00
pmysl
c1386c936e
gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761) 2024-04-21 15:49:30 +03:00
Jan Boon
e8d35f47cb
doc : add link to falcon (#6789) 2024-04-21 15:35:40 +03:00