Commit Graph

2627 Commits

Author SHA1 Message Date
Mark Fairbairn
855f54402e
Change Windows AMD example to release build to make inference much faster. (#6525) 2024-04-07 20:52:19 +02:00
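For context, "release build" here is a compile-time choice; a minimal sketch of what the updated README example presumably boils down to (the exact flags in the README may differ):

```sh
# Configure an optimized (Release) build for the HIP/AMD backend.
# Debug builds can make inference dramatically slower.
cmake -B build -DLLAMA_HIPBLAS=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build
```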
Georgi Gerganov
b909236c0b
flake.lock: Update (#6517)
Flake lock file updates:

• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/f7b3c975cf067e56e7cda6cb098ebe3fb4d74ca2' (2024-03-01)
  → 'github:hercules-ci/flake-parts/9126214d0a59633752a136528f5f3b9aa8565b7d' (2024-04-01)
• Updated input 'flake-parts/nixpkgs-lib':
    'github:NixOS/nixpkgs/1536926ef5621b09bba54035ae2bb6d806d72ac8?dir=lib' (2024-02-29)
  → 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089?dir=lib' (2024-03-29)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089' (2024-03-29)
  → 'github:NixOS/nixpkgs/fd281bd6b7d3e32ddfa399853946f782553163b5' (2024-04-03)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-04-07 11:25:30 -07:00
DAN™
e0717e751e
Add GritLM as a supported model. (#6513) 2024-04-07 19:33:59 +02:00
Georgi Gerganov
c37247796b
sync : ggml 2024-04-07 17:05:51 +03:00
Slava Primenko
f77261a7c5
ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)
`cudaHostRegisterReadOnly` parameter was only introduced in CUDA 11.1

See this issue for more details:
https://github.com/ggerganov/whisper.cpp/issues/2007
2024-04-07 17:05:40 +03:00
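As background for the fix above: `cudaHostRegisterReadOnly` only exists in CUDA 11.1 and later, so any use of it has to be version-guarded. A minimal sketch of the pattern (not the exact ggml-cuda code):

```cpp
#include <cuda_runtime.h>
#include <cstddef>

// Register a host buffer for fast device access. The read-only hint is only
// available since CUDA 11.1 (CUDART_VERSION 11010), so guard it for older toolkits.
static cudaError_t register_host_buffer(void * ptr, size_t size) {
    unsigned int flags = cudaHostRegisterPortable;
#if CUDART_VERSION >= 11010
    flags |= cudaHostRegisterReadOnly; // buffer is only read by the device
#endif
    return cudaHostRegister(ptr, size, flags);
}
```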
Georgi Gerganov
43e8995e75
scripts : sync ggml-cuda folder 2024-04-07 16:08:12 +03:00
limitedAtonement
9472bce308
Run make to build the project (#6457) 2024-04-07 13:05:40 +02:00
Neo Zhang Jianyu
d4f220a5cc
support/fix OPs GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_S, GGML_TYPE_IQ2_XXS, GGML_TYPE_IQ2_XS, GGML_TYPE_IQ2_S, GGML_TYPE_IQ1_S, GGML_TYPE_IQ1_M (#6521) 2024-04-07 10:55:59 +08:00
Georgi Gerganov
54ea0698fb
sync : ggml 2024-04-06 18:27:46 +03:00
Daniel Bevenius
b66aec675c
backend : fix typo in scheduler documentation (ggml/781)
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-06 17:42:26 +03:00
Clint Herron
57dd02c44b
Tests: Added integration tests for GBNF parser (#6472)
* Added integration tests for GBNF parser to validate correctness of parsing, as well as correctness of string matching. Intended for use to pin behavior while working on performance improvements.

* Fixing whitespace errors and cleaning error message alert to be clearer.

* Removing hacky include to llama.cpp from grammar integration test now that needed functions are available via internal API.

* Comment cleanup.

* Reorganizing tests for readability.

* Cleaning up debug message to make a bit more sense.
2024-04-06 10:31:33 -04:00
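For readers unfamiliar with GBNF: these grammars constrain what the model may emit, and the integration tests pin down both parsing and string matching. An illustrative grammar (not taken from the test suite):

```
# The model's output must be exactly "yes" or "no".
root ::= "yes" | "no"
```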
Pierrick Hymbert
75cd4c7729
ci: bench: support SSE and fix prompt processing time / server: add tokens usage in stream OAI response (#6495)
* ci: bench: support sse and fix prompt processing time
server: add tokens usage in stream mode

* ci: bench: README.md EOL

* ci: bench: remove total pp and tg as it is not accurate

* ci: bench: fix case when there is no token generated

* ci: bench: change to the 95th percentile for pp and tg as it is closer to what the server exports in metrics

* ci: bench: fix finish reason rate
2024-04-06 05:40:47 +02:00
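The usage data added to streamed responses follows the OpenAI-style shape; a sketch of what a final stream chunk might carry (field placement here is an assumption, not copied from the server code):

```json
{
  "choices": [{ "index": 0, "delta": {}, "finish_reason": "stop" }],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 34,
    "total_tokens": 46
  }
}
```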
Brian
a8bd14d557
gguf.py : add licence and version to gguf writer (#6504) 2024-04-05 21:41:38 +03:00
Hoang Nguyen
d0f5deebf8
readme : update UI list (#6503)
* Add MindMac to UI list

* Update proprietary description

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-05 21:39:43 +03:00
Ting Sun
87e21bbacd
bench : make n_batch and n_ubatch configurable in Batched bench (#6500)
* bench: make n_batch and n_ubatch configurable

* bench: update doc for batched bench
2024-04-05 21:34:53 +03:00
Ouadie EL FAROUKI
1b496a745c
[SYCL] Fixed minor bug when enabling FP16 for non-Intel targets (#6464)
* moved INTEL_MKL guard from gemm_impl to gemm (wrapper)

* Update ggml-sycl.cpp

Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com>

---------

Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com>
2024-04-05 19:05:06 +05:30
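The fix above hoists a compile-time guard out of the inner GEMM so that non-Intel FP16 builds still get a working code path. A schematic sketch with hypothetical names (`gemm_impl_mkl` and `gemm_impl_generic` stand in for the real SYCL functions):

```cpp
// Hypothetical stand-ins for the real SYCL kernels.
template <typename T> void gemm_impl_mkl    (const T *, const T *, T *, int, int, int);
template <typename T> void gemm_impl_generic(const T *, const T *, T *, int, int, int);

// Before the fix the #ifdef sat inside the implementation, leaving non-Intel
// targets without a usable FP16 path; the wrapper now does the dispatch.
template <typename T>
void gemm(const T * a, const T * b, T * c, int m, int n, int k) {
#ifdef INTEL_MKL
    gemm_impl_mkl(a, b, c, m, n, k);      // Intel-only fast path
#else
    gemm_impl_generic(a, b, c, m, n, k);  // portable fallback
#endif
}
```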
alexpinel
a307375c02
readme : add Dot to UI list (#6487) 2024-04-04 13:22:50 -04:00
Jun Jie
b660a5729e
readme : fix typo (#6481) 2024-04-04 13:16:37 -04:00
Ed Lepedus
0a1d889e27
server: add cURL support to server Dockerfiles (#6474)
* server: add cURL support to `full.Dockerfile`

* server: add cURL support to `full-cuda.Dockerfile` and `server-cuda.Dockerfile`

* server: add cURL support to `full-rocm.Dockerfile` and `server-rocm.Dockerfile`

* server: add cURL support to `server-intel.Dockerfile`

* server: add cURL support to `server-vulkan.Dockerfile`

* fix typo in `server-vulkan.Dockerfile`

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-04 18:31:22 +02:00
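The pattern these Dockerfile changes share: install the libcurl development package and switch on the build flag so the server can fetch models from URLs. A representative sketch (the package name assumes a Debian-based image; the real Dockerfiles may differ):

```dockerfile
# Install libcurl headers so llama.cpp can be built with URL-download support.
RUN apt-get update && apt-get install -y libcurl4-openssl-dev

# Enable cURL support at build time.
ENV LLAMA_CURL=1
```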
Minsoo Cheong
7dda1b727e
ci: exempt master branch workflows from getting cancelled (#6486)
* ci: exempt master branch workflows from getting cancelled

* apply to bench.yml
2024-04-04 18:30:53 +02:00
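The standard GitHub Actions way to express this is a conditional `cancel-in-progress`; a sketch of the likely concurrency block (the exact group key is an assumption):

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  # Cancel superseded runs on branches, but never on master.
  cancel-in-progress: ${{ github.ref != 'refs/heads/master' }}
```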
Ewout ter Hoeven
c666ba26c3
build CI: Name artifacts (#6482)
Name the artifacts in the build CI, so that they get uploaded with separate names, instead of all being put into the same `artifact` ZIP.

It might be possible to further simplify the packing step (in future PRs).
2024-04-04 17:08:55 +02:00
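Giving each upload step an explicit `name` is what keeps the artifacts separate; a sketch (the `name` value is illustrative, not the one used in the workflow):

```yaml
- uses: actions/upload-artifact@v4
  with:
    # Without an explicit name, every job's files land in one "artifact" ZIP.
    name: llama-bin-${{ matrix.build }}
    path: build/bin/
```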
Shakhar Dasgupta
2e66913e5f
server: allow penalizing repetition of newlines on server webpage (#6431) 2024-04-04 17:03:00 +02:00
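The server's completion API exposes this through the `penalize_nl` field; a sketch of a request that enables it (other fields and defaults per the server README):

```json
{
  "prompt": "Write a haiku.",
  "n_predict": 64,
  "penalize_nl": true
}
```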
Pierrick Hymbert
8120efee1d
ci: bench fix concurrency for workflow trigger dispatch with sha1 (#6478) 2024-04-04 16:59:04 +02:00
limitedAtonement
a74401f0e5
Correct README link (#6458)
README is called README.md.
2024-04-04 16:30:02 +02:00
Pierrick Hymbert
7a2c92637a
ci: bench: add more ftype, fix triggers and bot comment (#6466)
* ci: bench: change trigger path to not spawn on each PR

* ci: bench: add more file type for phi-2: q8_0 and f16.
- do not show the comment by default

* ci: bench: add seed parameter in k6 script

* ci: bench: artefact name perf job

* Add iteration in the commit status, reduce again the autocomment

* ci: bench: add per slot metric in the commit status

* Fix trailing spaces
2024-04-04 12:57:58 +03:00
Daniel Bevenius
4bcd6b959c
common: remove duplicate check for curl (#6471)
This commit removes one of the two identical checks for curl being NULL
in llama_load_model_from_url.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-04 09:49:21 +02:00
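The shape of the fix, sketched with simplified code rather than the actual function body:

```cpp
#include <curl/curl.h>
#include <cstdio>

// After the fix: a single NULL check following curl_easy_init.
static CURL * init_curl_handle() {
    CURL * curl = curl_easy_init();
    if (!curl) {
        fprintf(stderr, "%s: error initializing libcurl\n", __func__);
        return nullptr;
    }
    // The removed code repeated the identical `if (!curl)` test right here;
    // nothing in between could change `curl`, so the second check was dead.
    return curl;
}
```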
Clint Herron
9b84ae1806
examples : add GBNF validator program (#5948)
* Revising GBNF validator program to be much simpler.

* Changing from streams to using cstdio

* Adding final newline character.
2024-04-04 10:44:28 +03:00
Georgi Gerganov
4399f13fb9
server : remove obsolete --memory-f32 option 2024-04-04 09:34:58 +03:00
Xiao-Yong Jin
1a43c7254e
server : add option to disable KV offload (#6468) 2024-04-04 09:33:48 +03:00
Clint Herron
72d73af651
convert : fix for lint error complaining of bare except (#6470) 2024-04-04 09:32:53 +03:00
Fattire
5fb1574c81
A few small fixes to server's README docs (#6428)
* Typo fix to server's README.md

Fix minor typo ("tonen") in server README.

* server readme grammar/style fixes.

Quickly went through this file to look for inconsistencies in
presentation of defaults, flag options, and looked for typos
and grammar issues.

Not perfect, but hopefully improved.

* Update README.md

Remove an extra space before newline.
2024-04-03 22:22:57 +02:00
JH23X
60cdf40cc3
server : handle exception on wrong type in request (#6452)
Co-authored-by: Jonas Holzner <jonas.holzner.external@hensoldt.net>
2024-04-03 21:09:52 +03:00
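The usual pattern with nlohmann::json (which the server uses) is to catch the typed exception and fail gracefully instead of crashing. A minimal sketch, not the server's actual handler:

```cpp
#include <nlohmann/json.hpp>
#include <string>

using json = nlohmann::json;

// Read an int field defensively: if the client sent e.g. a string where a
// number was expected, report failure instead of propagating the exception.
static int get_int_field(const json & body, const std::string & key, int fallback, bool & ok) {
    ok = true;
    try {
        return body.at(key).get<int>();
    } catch (const json::exception &) { // covers type_error and out_of_range
        ok = false;
        return fallback;
    }
}
```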
bryanSwk
bb43cf7e9d
llama : add SEA-LION support (#6448)
* initial commit for sealion support

* add sealion support

* minor fix

* q/k ln and pos_embd only if required

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* minor : clear whitespaces

---------

Co-authored-by: bryan <bryansiow@aisingapore.org>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-03 21:05:10 +03:00
Ewout ter Hoeven
9f62c0173d
ci : update checkout, setup-python and upload-artifact to latest (#6456)
* CI: Update actions/checkout to v4

* CI: Update actions/setup-python to v5

* CI: Update actions/upload-artifact to v4
2024-04-03 21:01:13 +03:00
Ed Lepedus
5d4f12e462
server: add cURL support to server.Dockerfile (#6461) 2024-04-03 19:56:37 +02:00
Francisco Melo
154d4ee39c
readme : add feature-rich rust bindings (#6465) 2024-04-03 20:53:37 +03:00
Joyce
e69945d953
security : create policy (#6354)
* Create SECURITY.md

Signed-off-by: Joyce <joycebrum@google.com>

* Fix: link on SECURITY.md

Signed-off-by: Joyce <joycebrum@google.com>

* Fix: link on SECURITY.md

Signed-off-by: Joyce <joycebrum@google.com>

* minor

* fix

* fix

---------

Signed-off-by: Joyce <joycebrum@google.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-03 20:48:07 +03:00
Abhishek Gopinath K
db214fa578
Missing tokenizer.model error during gguf conversion (#6443)
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-04-03 11:42:52 -04:00
kaizau
1ff4d9f3d6
Add OpenChat, Alpaca, Vicuna chat templates (#6397)
* Add openchat chat template

* Add chat template test for openchat

* Add chat template for vicuna

* Add chat template for orca-vicuna

* Add EOS for vicuna templates

* Combine vicuna chat templates

* Add tests for openchat and vicuna chat templates

* Add chat template for alpaca

* Add separate template name for vicuna-orca

* Remove alpaca, match deepseek with jinja output

* Regenerate chat template test with add_generation_prompt

* Separate deepseek bos from system message

* Match openchat template with jinja output

* Remove BOS token from templates, unprefix openchat
2024-04-03 17:24:31 +02:00
Georgi Gerganov
076b08649e
readme : update hot topics 2024-04-03 16:11:15 +03:00
slaren
08a0c02060
ggml : mul_mat_id use the same tensor for all the experts (#6387)
* ggml : update mul_mat_id to use the same tensor for all the experts

* update cuda

* minor

* update metal

* update test-backend-ops

* fix cuda

* Update ggml-metal.m

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* update convert.py

* update convert-hf-to-gguf.py

* update convert.py for mixtral hf models

* Update convert-hf-to-gguf.py

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* cuda : support non-pow-2 number of experts

* allow quantize to work for split and merged experts models in the same way

* cleanup + disable mmap automatically with split tensors models

* update imatrix

* test-backend-ops : test qwen argsort

* update grok model loading

* llama : add merged experts tensors to the grok tensor map

* minor

* gguf : bump version

* fix quantizing of merged experts

* convert-hf-to-gguf.py : update grok (untested)

* make linter happy

* cuda/argsort : use shared memory instead of pool memory

* convert : fix grok tensor names

* metal : add support for non-pow-2 argsort

* llama : more loader cleanup, better error checking

* cuda : fix warning

* llama : still use mmap for loading old models, but copy the data to a host buffer

* add review note

* llama : remove ffn tensor counting + add sanity check

ggml-ci

* convert : fix handling of n_experts == None

ggml-ci

* imatrix : fix ncall counters

* llama : produce error if imatrix size does not match

* quantize : terminate on errors + trace logs

ggml-ci

* metal : pad shared memory to 16 bytes

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-03 16:07:05 +03:00
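The interface consequence of this change: callers pass one 3D tensor holding all experts plus a tensor of router-selected expert ids, instead of an array of per-expert matrices. A schematic sketch (argument order per my reading of the PR; see ggml.h for the authoritative signature):

```cpp
#include "ggml.h"

// as : all expert weight matrices merged into a single 3D tensor
//      (one expert per slice along the third dimension)
// b  : input activations
// ids: expert indices chosen by the router for each token
static struct ggml_tensor * moe_matmul(struct ggml_context * ctx,
                                       struct ggml_tensor * as,
                                       struct ggml_tensor * b,
                                       struct ggml_tensor * ids) {
    return ggml_mul_mat_id(ctx, as, b, ids);
}
```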
Meng, Hengyu
52604860f9
[SYCL] Disable iqx on Windows as a workaround (#6435)
* disable iqx on Windows as a workaround

* array instead of global_memory
2024-04-03 10:34:40 +08:00
Georgi Gerganov
f87f7b8986
flake.lock: Update (#6402)
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/44d0940ea560dee511026a53f0e2e2cde489b4d4' (2024-03-23)
  → 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089' (2024-03-29)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-04-01 09:05:57 -07:00
Johannes Gäßler
33a5244806
compare-llama-bench.py: fix long hexsha args (#6424) 2024-04-01 13:30:43 +02:00
Pierrick Hymbert
226e819371
ci: server: verify deps are coherent with the commit (#6409)
* ci: server: verify deps are coherent with the commit

* ci: server: change the ref to build as now it's a pull event target
2024-04-01 12:36:40 +02:00
Georgi Gerganov
c50a82ce0f
readme : update hot topics 2024-03-31 11:56:30 +03:00
Pierrick Hymbert
37e7854c10
ci: bench: fix Resource not accessible by integration on PR event (#6393) 2024-03-30 12:36:07 +02:00
Mohammadreza Hendiani
c342d070c6
Fedora build update (#6388)
* fixed deprecated address

* fixed deprecated address

* fixed deprecated address

* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions

* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions

* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions

* reverted back to only the MIT license
2024-03-29 22:59:56 +01:00
Xuan Son Nguyen
f7fc5f6c6f
split: allow --split-max-size option (#6343)
* split by max size

* clean up arg parse

* split: ok

* add dry run option

* error on 0 tensors

* be positive

* remove next_metadata_size
2024-03-29 22:34:44 +01:00
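A hypothetical invocation of the new option (flag spellings inferred from the PR description; verify with `--help`):

```sh
# Preview the shard layout first, then split into files of at most 2 GiB each.
./gguf-split --split --split-max-size 2G --dry-run model.gguf model-out
./gguf-split --split --split-max-size 2G model.gguf model-out
```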
0cc4m
ba0c7c70ab
Vulkan k-quant mmq and ggml-backend offload functionality (#6155)
* Fix Vulkan no kv offload incoherence

* Add k-quant mul mat mat shaders

* Rework working buffer allocation, reduces vram use noticeably

Clean up cpu assist code, replaced with ggml-backend offload function

* Default to all dedicated GPUs

* Add fallback for integrated GPUs if no dedicated GPUs are found

* Add debug info which device is allocating memory

* Fix Intel dequant issue

Fix validation issue

* Fix Vulkan GGML_OP_GET_ROWS implementation

* Clean up merge artifacts

* Remove Vulkan warning
2024-03-29 17:29:21 +01:00