llama.cpp/ggml/src/ggml-cuda
Latest commit c35e586ea5 by R0CKSTAR:
musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526)
* mtgpu: add mp_21 support

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* mtgpu: disable flash attention on qy1 (MTT S80); disable q3_k and mul_mat_batched_cublas

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* mtgpu: enable unified memory

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* mtgpu: map cublasOperation_t to mublasOperation_t (sync code to latest)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-09-22 16:55:49 +02:00
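For context on two of the bullets above (mapping cuBLAS symbols onto muBLAS so the same CUDA sources build for MUSA, and disabling Flash Attention on QY1), here is a minimal sketch of the approach. It is not the verbatim contents of vendors/musa.h or common.cuh; the MUSA_CC_QY1 constant and the helper name are illustrative assumptions.

```cpp
// Minimal sketch, assuming the MUSA SDK headers are available. The idea:
// alias the CUDA/cuBLAS API onto MUSA/muBLAS via the vendor header so the
// same .cu sources compile for both vendors.
#ifdef GGML_USE_MUSA
#include <musa_runtime.h>
#include <mublas.h>

// Type/enum mapping, as in the "map cublasOperation_t to mublasOperation_t"
// bullet of the commit above.
#define cublasOperation_t mublasOperation_t
#define CUBLAS_OP_N       MUBLAS_OP_N
#define CUBLAS_OP_T       MUBLAS_OP_T

// Runtime calls map the same way.
#define cudaMalloc        musaMalloc
#define cudaMemcpyAsync   musaMemcpyAsync
#endif

// Hypothetical compute-capability tag for QY1 (MTT S80); the real constant
// and the exact capability check live in common.cuh and the fattn sources.
#define MUSA_CC_QY1 210

// Per the commit, Flash Attention is skipped on QY1; the backend falls back
// to the regular attention path for that device.
static inline bool flash_attn_available(int cc) {
    return cc != MUSA_CC_QY1;
}
```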
Name                    Last commit message                                      Last commit date
template-instances CUDA: MMQ code deduplication + iquant support (#8495) 2024-07-20 22:25:26 +02:00
vendors musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526) 2024-09-22 16:55:49 +02:00
acc.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
acc.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
arange.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
arange.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
argsort.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
argsort.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
binbcast.cu ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
binbcast.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
clamp.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
clamp.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
common.cuh musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526) 2024-09-22 16:55:49 +02:00
concat.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
concat.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
conv-transpose-1d.cu feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) 2024-07-08 12:23:00 +03:00
conv-transpose-1d.cuh feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) 2024-07-08 12:23:00 +03:00
convert.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
convert.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
cpy.cu cuda : fix defrag with quantized KV (#9319) 2024-09-05 11:13:11 +02:00
cpy.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
cross-entropy-loss.cu ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
cross-entropy-loss.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
dequantize.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
diagmask.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
diagmask.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
dmmv.cu cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (#8800) 2024-08-01 15:26:22 +02:00
dmmv.cuh cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (#8800) 2024-08-01 15:26:22 +02:00
fattn-common.cuh CPU/CUDA: Gemma 2 FlashAttention support (#8542) 2024-08-24 21:34:59 +02:00
fattn-tile-f16.cu CPU/CUDA: Gemma 2 FlashAttention support (#8542) 2024-08-24 21:34:59 +02:00
fattn-tile-f16.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-tile-f32.cu musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526) 2024-09-22 16:55:49 +02:00
fattn-tile-f32.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-vec-f16.cuh CPU/CUDA: Gemma 2 FlashAttention support (#8542) 2024-08-24 21:34:59 +02:00
fattn-vec-f32.cuh CPU/CUDA: Gemma 2 FlashAttention support (#8542) 2024-08-24 21:34:59 +02:00
fattn-wmma-f16.cuh CPU/CUDA: Gemma 2 FlashAttention support (#8542) 2024-08-24 21:34:59 +02:00
fattn.cu CUDA: enable Gemma FA for HIP/Pascal (#9581) 2024-09-22 09:34:52 +02:00
fattn.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
getrows.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
getrows.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
im2col.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
im2col.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
mma.cuh CUDA: optimize and refactor MMQ (#8416) 2024-07-11 16:47:47 +02:00
mmq.cu CUDA: fix --split-mode row race condition (#9413) 2024-09-11 10:22:40 +02:00
mmq.cuh CUDA: fix --split-mode row race condition (#9413) 2024-09-11 10:22:40 +02:00
mmvq.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
mmvq.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
norm.cu ggml : add epsilon as a parameter for group_norm (#8818) 2024-08-06 10:26:46 +03:00
norm.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
opt-step-adamw.cu ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
opt-step-adamw.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
out-prod.cu ggml : fix builds (#0) 2024-09-20 21:15:05 +03:00
out-prod.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
pad.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pad.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pool2d.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pool2d.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
quantize.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
quantize.cuh CUDA: optimize and refactor MMQ (#8416) 2024-07-11 16:47:47 +02:00
rope.cu ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
rope.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
rwkv-wkv.cu RWKV v6: RWKV_WKV op CUDA implementation (#9454) 2024-09-22 04:29:12 +02:00
rwkv-wkv.cuh RWKV v6: RWKV_WKV op CUDA implementation (#9454) 2024-09-22 04:29:12 +02:00
scale.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
scale.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
softmax.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
softmax.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
sum.cu CUDA: fix sum.cu compilation for CUDA < 11.7 (#9562) 2024-09-20 18:35:35 +02:00
sum.cuh tests: add gradient tests for all backends (ggml/932) 2024-09-08 11:05:55 +03:00
sumrows.cu sync : ggml 2024-08-27 22:41:27 +03:00
sumrows.cuh sync : ggml 2024-08-27 22:41:27 +03:00
tsembd.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
tsembd.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
unary.cu RWKV v6: RWKV_WKV op CUDA implementation (#9454) 2024-09-22 04:29:12 +02:00
unary.cuh RWKV v6: RWKV_WKV op CUDA implementation (#9454) 2024-09-22 04:29:12 +02:00
upscale.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
upscale.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
vecdotq.cuh CUDA: MMQ code deduplication + iquant support (#8495) 2024-07-20 22:25:26 +02:00
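The "enable unified memory" bullet in the commit above follows the backend's existing opt-in: when the GGML_CUDA_ENABLE_UNIFIED_MEMORY environment variable is set, device buffers are allocated as managed memory so models larger than VRAM can still run, at reduced speed. A minimal sketch, with a hypothetical helper name rather than ggml's actual allocator:

```cpp
#include <cstdlib>
#include <cuda_runtime.h>

// Sketch of opt-in unified memory; ggml_backend_alloc_sketch is an
// illustrative name, not a real ggml symbol.
static cudaError_t ggml_backend_alloc_sketch(void ** ptr, size_t size) {
    if (std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY") != nullptr) {
        // Managed memory: pages migrate between host and device on demand.
        return cudaMallocManaged(ptr, size);
    }
    return cudaMalloc(ptr, size);  // plain device allocation otherwise
}
```

On MUSA builds the same call sites go through the vendor-header aliases shown earlier, so the managed-memory path works there without further changes.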