llama.cpp/common
Kawrakow 76aa30a263
Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183)
* k_cache: be able to use Q5_0

* k_cache: be able to use Q5_1 on CUDA

* k_cache: be able to use Q5_0 on Metal

* k_cache: be able to use Q5_1 on Metal

* k_cache: be able to use IQ4_NL - just CUDA for now

* k_cache: be able to use IQ4_NL on Metal

* k_cache: add newly added supported types to llama-bench and CUDA supports_op

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-03-21 08:27:57 +01:00
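The newly supported K-cache types are selected at run time. A rough sketch, assuming a CUDA or Metal build of llama.cpp and a local model file (the path `models/model.gguf` is hypothetical); `-ctk`/`--cache-type-k` is llama.cpp's flag for the K cache quantization type:

```shell
# Assumes a CUDA or Metal build; models/model.gguf is a placeholder path.
# Select one of the newly supported K cache quantization types:
./main -m models/model.gguf -ctk q5_0   -p "Hello"   # Q5_0 K cache
./main -m models/model.gguf -ctk q5_1   -p "Hello"   # Q5_1 K cache
./main -m models/model.gguf -ctk iq4_nl -p "Hello"   # IQ4_NL (CUDA/Metal)

# llama-bench (which this commit also updates) takes the same type names,
# comma-separated to benchmark several in one run:
./llama-bench -m models/model.gguf -ctk q5_0,q5_1,iq4_nl
```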
base64.hpp llava : expose as a shared library for downstream projects (#3613) 2023-11-07 00:36:23 +03:00
build-info.cpp.in build : link against build info instead of compiling against it (#3879) 2023-11-02 08:50:16 +02:00
CMakeLists.txt common: llama_load_model_from_url using --model-url (#6098) 2024-03-17 19:12:37 +01:00
common.cpp Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183) 2024-03-21 08:27:57 +01:00
common.h common: llama_load_model_from_url using --model-url (#6098) 2024-03-17 19:12:37 +01:00
console.cpp check C++ code with -Wmissing-declarations (#3184) 2023-09-15 15:38:27 -04:00
console.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
grammar-parser.cpp grammar : verify parsed state (#5950) 2024-03-10 17:17:43 +02:00
grammar-parser.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
log.h log : fix MSVC compile errors (#5643) 2024-03-08 11:35:04 +02:00
sampling.cpp grammar : handle missing "root" node (#6004) 2024-03-13 20:10:40 +02:00
sampling.h common : disable repeat penalties by default (#6127) 2024-03-19 10:21:54 +02:00
stb_image.h examples: support LLaVA v1.5 (multimodal model) (#3436) 2023-10-12 18:23:18 +03:00
train.cpp code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
train.h sync : ggml (backend v2) (#3912) 2023-11-13 14:16:23 +02:00