Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-14 23:09:53 +00:00
Commit 9b596417af
CUDA: quantized KV support for FA vec
* try CI fix
* fix commented-out kernel variants
* add q8_0 q4_0 tests
* fix nwarps > batch size
* split fattn compile via extern templates
* fix flake8
* fix metal tests
* fix cmake
* make generate_cu_files.py executable
* add autogenerated .cu files
* fix AMD
* error if type_v != FP16 and not flash_attn
* remove obsolete code
11 lines | 367 B | Plaintext
// This file has been autogenerated by generate-variants.py, do not edit manually.

#include "../fattn-wmma-f16.cuh"

DECL_FATTN_WMMA_F16_CASE(64, 16, float);
DECL_FATTN_WMMA_F16_CASE(80, 16, float);
DECL_FATTN_WMMA_F16_CASE(96, 16, float);
DECL_FATTN_WMMA_F16_CASE(112, 16, float);
DECL_FATTN_WMMA_F16_CASE(128, 16, float);
DECL_FATTN_WMMA_F16_CASE(256, 16, float);
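
The commit item "split fattn compile via extern templates" is what this autogenerated file implements: each generated .cu file holds the explicit instantiations for one group of FlashAttention variants, so they can be compiled as separate translation units instead of one huge one. A minimal sketch of that pattern follows, assuming hypothetical names and simplified (empty) parameter lists; fattn_wmma_f16_case and its signature are placeholders for illustration, not the real llama.cpp declarations.

// Sketch of the extern-template split (hypothetical names, simplified signatures).

// ---- what the shared header (fattn-wmma-f16.cuh) provides ----

// The templated kernel launcher is defined once, in the header.
template <int head_size, int cols_per_block, typename KQ_acc_t>
void fattn_wmma_f16_case() {
    // ... launch the WMMA FlashAttention kernel for this variant ...
}

// The macro expands to an explicit instantiation of one variant.
#define DECL_FATTN_WMMA_F16_CASE(head_size, cols_per_block, KQ_acc_t) \
    template void fattn_wmma_f16_case<head_size, cols_per_block, KQ_acc_t>()

// The header prefixes every variant with `extern`, so merely including it
// does not instantiate (and recompile) the heavy kernels in each translation unit.
extern DECL_FATTN_WMMA_F16_CASE(64, 16, float);
extern DECL_FATTN_WMMA_F16_CASE(128, 16, float);

// ---- what each autogenerated .cu file does ----

// Using the macro without `extern` emits the explicit instantiation, so each
// variant is compiled exactly once, in its own file, and the generated files
// can be built in parallel.
DECL_FATTN_WMMA_F16_CASE(64, 16, float);
DECL_FATTN_WMMA_F16_CASE(128, 16, float);

In this sketch the generated file shown above would compile only the six float-accumulator variants for its head sizes; other head sizes, column counts, and accumulator types would live in sibling autogenerated files.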