llama.cpp/ggml-cuda
slaren 08a0c02060
ggml : mul_mat_id use the same tensor for all the experts (#6387)
* ggml : update mul_mat_id to use the same tensor for all the experts

* update cuda

* minor

* update metal

* update test-backend-ops

* fix cuda

* Update ggml-metal.m

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* update convert.py

* update convert-hf-to-gguf.py

* update convert.py for mixtral hf models

* Update convert-hf-to-gguf.py

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* cuda : support non-pow-2 number of experts

* allow quantize to work for split and merged experts models in the same way

* cleanup + disable mmap automatically with split tensors models

* update imatrix

* test-backend-ops : test qwen argsort

* update grok model loading

* llama : add merged experts tensors to the grok tensor map

* minor

* gguf : bump version

* fix quantizing of merged experts

* convert-hf-to-gguf.py : update grok (untested)

* make linter happy

* cuda/argsort : use shared memory instead of pool memory

* convert : fix grok tensor names

* metal : add support for non-pow-2 argsort

* llama : more loader cleanup, better error checking

* cuda : fix warning

* llama : still use mmap for loading old models, but copy the data to a host buffer

* add review note

* llama : remove ffn tensor counting + add sanity check

ggml-ci

* convert : fix handling of n_experts == None

ggml-ci

* imatrix : fix ncall counters

* llama : produce error if imatrix size does not match

* quantize : terminate on errors + trace logs

ggml-ci

* metal : pad shared memory to 16 bytes

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-03 16:07:05 +03:00
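The core change this commit describes — using one tensor for all experts instead of one tensor per expert — amounts to stacking the per-expert weight matrices along a new leading dimension at conversion time. A minimal sketch of that merging step, assuming simple 2D per-expert weights (this is an illustration, not the actual convert.py code):

```python
import numpy as np

def merge_expert_tensors(expert_weights):
    """Stack per-expert 2D weight matrices of shape [n_out, n_in] into a
    single 3D tensor of shape [n_expert, n_out, n_in], so one tensor
    holds all experts instead of one tensor per expert."""
    shape = expert_weights[0].shape
    assert all(w.shape == shape for w in expert_weights), "experts must match in shape"
    return np.stack(expert_weights, axis=0)

# example: 4 experts with 8x16 weights -> one (4, 8, 16) tensor
experts = [np.random.rand(8, 16).astype(np.float32) for _ in range(4)]
merged = merge_expert_tensors(experts)
print(merged.shape)  # (4, 8, 16)
```

With this layout, `mul_mat_id` can index expert `e` as a plain slice `merged[e]`, which is also why the commit can disable mmap and copy old split-tensor models into a host buffer: the merged tensor must be contiguous.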
acc.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
acc.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
alibi.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
alibi.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
arange.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
arange.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
argsort.cu ggml : mul_mat_id use the same tensor for all the experts (#6387) 2024-04-03 16:07:05 +03:00
argsort.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
binbcast.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
binbcast.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
clamp.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
clamp.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
common.cuh sync : ggml (#6351) 2024-03-29 17:45:46 +02:00
concat.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
concat.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
convert.cu IQ1_M: 1.75 bpw quantization (#6302) 2024-03-26 15:21:27 +01:00
convert.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
cpy.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
cpy.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
dequantize.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
diagmask.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
diagmask.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
dmmv.cu sync : ggml (#6351) 2024-03-29 17:45:46 +02:00
dmmv.cuh sync : ggml (#6351) 2024-03-29 17:45:46 +02:00
getrows.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
getrows.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
im2col.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
im2col.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
mmq.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
mmq.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
mmvq.cu IQ1_M: 1.75 bpw quantization (#6302) 2024-03-26 15:21:27 +01:00
mmvq.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
norm.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
norm.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
pad.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
pad.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
pool2d.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
pool2d.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
quantize.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
quantize.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
rope.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
rope.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
scale.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
scale.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
softmax.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
softmax.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
sumrows.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
sumrows.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
tsembd.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
tsembd.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
unary.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
unary.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
upscale.cu cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
upscale.cuh cuda : refactor into multiple files (#6269) 2024-03-25 13:50:23 +01:00
vecdotq.cuh IQ1_M: 1.75 bpw quantization (#6302) 2024-03-26 15:21:27 +01:00
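Two bullets in the commit above concern argsort with a non-power-of-two number of experts (the CUDA and Metal kernels, the latter now backed by shared memory rather than pool memory). GPU argsort kernels commonly use a bitonic sorting network, which requires a power-of-two length; a standard way to handle arbitrary lengths is to pad the key array to the next power of two with sentinels that always sort last. A hedged CPU-side sketch of that padding trick — not the actual CUDA/Metal kernel code:

```python
def bitonic_argsort(values):
    """Argsort an arbitrary-length sequence with a bitonic sorting network.

    Bitonic networks only work on power-of-two lengths, so the keys are
    padded to the next power of two with +inf sentinels; the sentinel
    slots sort to the tail and are dropped from the result. This mirrors
    the padding approach used to support non-pow-2 sizes on GPUs."""
    n = len(values)
    if n == 0:
        return []
    size = 1 << (n - 1).bit_length()            # next power of two >= n
    keys = list(values) + [float("inf")] * (size - n)
    idx = list(range(size))                     # padded indices carry inf keys
    k = 2
    while k <= size:                            # size of bitonic subsequences
        j = k // 2
        while j >= 1:                           # compare-exchange distance
            for i in range(size):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (keys[idx[i]] > keys[idx[partner]]) == ascending:
                        idx[i], idx[partner] = idx[partner], idx[i]
            j //= 2
        k *= 2
    return idx[:n]                              # sentinel slots sit at the tail

print(bitonic_argsort([3, 1, 2]))  # [1, 2, 0]
```

On a GPU the `for i in range(size)` loop is what runs in parallel, one thread per slot, with the keys staged in shared memory; the fixed compare-exchange pattern is exactly what makes the network suitable for a kernel.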