mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-11-15 07:19:53 +00:00
9b596417af
CUDA: quantized KV support for FA vec

* try CI fix
* fix commented-out kernel variants
* add q8_0 q4_0 tests
* fix nwarps > batch size
* split fattn compile via extern templates
* fix flake8
* fix metal tests
* fix cmake
* make generate_cu_files.py executable
* add autogenerated .cu files
* fix AMD
* error if type_v != FP16 and not flash_attn
* remove obsolete code
9 lines
276 B
Plaintext
// This file has been autogenerated by generate_cu_files.py, do not edit manually.

#include "../fattn-wmma-f16.cuh"

DECL_FATTN_WMMA_F16_CASE(64, 8, half);
DECL_FATTN_WMMA_F16_CASE(96, 8, half);
DECL_FATTN_WMMA_F16_CASE(128, 8, half);
DECL_FATTN_WMMA_F16_CASE(256, 8, half);