llama.cpp/tests
Latest commit bd2d4e393b by Kawrakow:
1.5 bit quantization (#5453)
* iq1_s: WIP basics

* iq1_s: CUDA is working

* iq1_s: scalar CPU dot product (a generic sketch of such a kernel follows these notes)

* iq1_s: WIP AVX2 dot product - something is not right

* Fix tests

* Fix shadow warnings

* Fix after merge with latest master

* iq1_s: AVX2 finally works

* iq1_s: ARM_NEON dot product. Works, but not very fast

* iq1_s: better grid

* iq1_s: use IQ2_XXS for attn_output

At a cost of 0.04 extra bpw, this gives a big improvement in PPL (a rough bpw calculation follows the commit details below).

* iq1_s: Metal basics

Dequantization works, but not the dot product.

* iq1_s: Metal works, but quite slow

As usual, Apple Silicon does not like the code I write.

* iq1_s: Tests

* iq1_s: slightly faster dot product
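The scalar, AVX2, ARM_NEON and Metal dot products listed above all follow the same basic pattern: unpack a block of quantized weights, accumulate products against the dense activations, and apply the per-block scale. As a rough illustration of that pattern only, here is a minimal scalar sketch over a hypothetical 2-bit block layout; block_q2_demo and vec_dot_demo are made-up names, and this is not the actual IQ1_S format, which packs groups of weights as indices into a fixed grid.

```cpp
// Illustrative sketch only: a generic scalar dot product over a block-quantized
// tensor. The block layout (one scale plus 32 signed 2-bit codes) is a
// hypothetical stand-in, NOT the real IQ1_S layout used in ggml.
#include <cstdint>
#include <cstdio>
#include <cstddef>

struct block_q2_demo {   // hypothetical block covering 32 weights
    float   d;           // per-block scale (fp16 in real ggml types)
    uint8_t qs[8];       // 32 x 2-bit codes, four codes per byte
};

// x: quantized weights, y: dense fp32 activations, n: number of weights (multiple of 32)
static float vec_dot_demo(size_t n, const block_q2_demo * x, const float * y) {
    float sum = 0.0f;
    for (size_t ib = 0; ib < n/32; ++ib) {
        float partial = 0.0f;
        for (int j = 0; j < 32; ++j) {
            const int code = (x[ib].qs[j/4] >> (2*(j%4))) & 3; // unpack 2-bit code
            partial += (code - 2) * y[32*ib + j];              // map code to {-2,-1,0,1}
        }
        sum += x[ib].d * partial;  // apply the per-block scale once per block
    }
    return sum;
}

int main() {
    block_q2_demo x[1] = {};
    float y[32];
    x[0].d = 0.5f;
    for (int j = 0; j < 32; ++j) {
        y[j] = 1.0f;
        x[0].qs[j/4] |= (uint8_t) ((j % 4) << (2*(j%4))); // codes 0,1,2,3 repeating
    }
    std::printf("dot = %f\n", vec_dot_demo(32, x, y));    // -8.0 for this toy input
    return 0;
}
```

The SIMD versions mentioned in the commit notes compute the same thing, just unpacking and accumulating many codes per instruction.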

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-02-18 18:16:55 +02:00
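To make the "use IQ2_XXS for attn_output" trade-off above concrete, here is a back-of-the-envelope sketch. It assumes the commonly quoted block sizes of 1.5625 bpw for IQ1_S and 2.0625 bpw for IQ2_XXS, and a hypothetical 8% share of the model's weights in the attn_output tensors, chosen so that the arithmetic reproduces the quoted 0.04 bpw; the real share varies by model.

```cpp
// Back-of-the-envelope sketch (not taken from the PR): average bpw when only the
// attn_output tensors use IQ2_XXS and everything else uses IQ1_S.
#include <cstdio>

int main() {
    const double bpw_iq1_s         = 1.5625;  // ~1.5-bit grid indices + per-block scales
    const double bpw_iq2_xxs       = 2.0625;
    const double attn_output_share = 0.08;    // assumed fraction of total weights (hypothetical)

    const double all_iq1 = bpw_iq1_s;
    const double mixed   = (1.0 - attn_output_share) * bpw_iq1_s
                         + attn_output_share * bpw_iq2_xxs;

    std::printf("all IQ1_S        : %.4f bpw\n", all_iq1);
    std::printf("IQ2_XXS attn_out : %.4f bpw (+%.4f bpw)\n", mixed, mixed - all_iq1);
    return 0;
}
```

The commit note above makes exactly this kind of trade: a small but sensitive tensor gets a slightly larger type while the bulk of the weights stay at the minimal size.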
Name | Last commit message | Last commit date
.gitignore | tests : .gitignore obj files | 2024-02-08 09:46:47 +02:00
CMakeLists.txt | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00
get-model.cpp | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00
get-model.h | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00
test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
test-backend-ops.cpp | 1.5 bit quantization (#5453) | 2024-02-18 18:16:55 +02:00
test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00
test-double-float.cpp | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2023-10-30 19:19:15 +02:00
test-grad0.cpp | cuda : improve cuda pool efficiency using virtual memory (#4606) | 2023-12-24 14:34:22 +01:00
test-grammar-parser.cpp | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00
test-llama-grammar.cpp | refactor : switch to emplace_back to avoid extra object (#5291) | 2024-02-03 13:23:37 +02:00
test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
test-opt.cpp | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00
test-quantize-fns.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00
test-quantize-perf.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00
test-rope.cpp | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2023-09-28 19:04:36 +03:00
test-sampling.cpp | sampling: fix top_k <= 0 (#5388) | 2024-02-08 09:46:30 +01:00
test-tokenizer-0-falcon.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
test-tokenizer-0-falcon.py | ci : add flake8 to github actions (python linting) (#4129) | 2023-11-20 11:35:47 +01:00
test-tokenizer-0-llama.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
test-tokenizer-0-llama.py | ci : add flake8 to github actions (python linting) (#4129) | 2023-11-20 11:35:47 +01:00
test-tokenizer-1-bpe.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00
test-tokenizer-1-llama.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00