root synced and deleted reference refs/pull/9589/merge at root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
root synced commits to gg/metal-fa-f32-qk at root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
root synced new reference gg/metal-fa-f32-qk to root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
root synced commits to gg/perplexity-nl at root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
root synced new reference gg/perplexity-nl to root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
root synced commits to master at root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
c35e586ea5 musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526)
912c331d3d Fix merge error in #9454 (#9589)
Compare 2 commits »
root synced commits to refs/pull/9034/merge at root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
912c331d3d Fix merge error in #9454 (#9589)
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
ecd5d6b65b llama: remove redundant loop when constructing ubatch (#9574)
2a63caaa69 RWKV v6: RWKV_WKV op CUDA implementation (#9454)
Compare 15 commits »
root synced commits to refs/pull/9058/merge at root/llama.cpp from mirror 2024-09-22 21:16:20 +00:00
c35e586ea5 musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526)
912c331d3d Fix merge error in #9454 (#9589)
Compare 3 commits »
root synced commits to refs/pull/9526/merge at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
0fb0b4eab3 mtgpu: map cublasOperation_t to mublasOperation_t (sync code to latest)
a3ad2c9971 mtgpu: enable unified memory
43ff5f36c2 mtgpu: disable flash attention on qy1 (MTT S80); disable q3_k and mul_mat_batched_cublas
e40b33dcad mtgpu: add mp_21 support
Compare 6 commits »
root synced commits to refs/pull/9532/merge at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
Compare 2 commits »
root synced commits to refs/pull/9541/merge at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
Compare 2 commits »
root synced commits to refs/pull/9544/merge at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
Compare 2 commits »
root synced commits to refs/pull/9571/head at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
c4d6f343d4 cuda: add q8_0->f32 cpy operation
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
ecd5d6b65b llama: remove redundant loop when constructing ubatch (#9574)
2a63caaa69 RWKV v6: RWKV_WKV op CUDA implementation (#9454)
d09770cae7 ggml-alloc : fix list of allocated tensors with GGML_ALLOCATOR_DEBUG (#9573)
Compare 5 commits »
root synced commits to refs/pull/9571/merge at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
c4d6f343d4 cuda: add q8_0->f32 cpy operation
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
Compare 3 commits »
root synced commits to refs/pull/9579/merge at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
Compare 2 commits »
root synced commits to refs/tags/b3802 at root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
root synced new reference refs/tags/b3802 to root/llama.cpp from mirror 2024-09-22 13:06:20 +00:00
root synced and deleted reference refs/pull/9581/merge at root/llama.cpp from mirror 2024-09-22 13:06:19 +00:00
root synced commits to master at root/llama.cpp from mirror 2024-09-22 13:06:19 +00:00
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
root synced commits to refs/pull/8837/merge at root/llama.cpp from mirror 2024-09-22 13:06:19 +00:00
a5b57b08ce CUDA: enable Gemma FA for HIP/Pascal (#9581)
ecd5d6b65b llama: remove redundant loop when constructing ubatch (#9574)
2a63caaa69 RWKV v6: RWKV_WKV op CUDA implementation (#9454)
Compare 4 commits »