llama.cpp/tests
Kawrakow 55c1b2a3bb
IQ1_M: 1.75 bpw quantization (#6302)
* iq1_m: basics

* iq1_m: basics-2

* iq1_m: CUDA dequantize works

On the very first shot I get PPL = 9.76 for LLaMA-v2-7B.

* iq1_m: separate shifts for each group of 8 in a block

We get
PPL(LLaMA-v2-7B ) = 9.2810
PPL(LLaMA-v2-13B) = 6.8105

Not bad, but slightly higher than
  sqrt(PPL(IQ1_S) * PPL(IQ2_XXS)),
which is what we would expect given that IQ1_M is
halfway between IQ1_S and IQ2_XXS in terms of bpw.
From this, we would expect
 PPL = 9.14 for LLaMA-v2-7B
 PPL = 6.63 for LLaMA-v2-13B
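
Spelled out, the expectation comes from assuming that log-perplexity is
roughly linear in bits per weight (the implicit interpolation here); since
IQ1_M sits halfway between IQ1_S and IQ2_XXS in bpw,

  \log \mathrm{PPL}(\mathrm{IQ1\_M}) \approx \tfrac{1}{2}\bigl(\log \mathrm{PPL}(\mathrm{IQ1\_S}) + \log \mathrm{PPL}(\mathrm{IQ2\_XXS})\bigr)
  \;\Longrightarrow\;
  \mathrm{PPL}(\mathrm{IQ1\_M}) \approx \sqrt{\mathrm{PPL}(\mathrm{IQ1\_S}) \cdot \mathrm{PPL}(\mathrm{IQ2\_XXS})}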

* iq1_m: go to 3-bit scales

There is a slight increase in PPL, but the 0.0625 bpw reduction
in size is totally worth it.
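
Where the 0.0625 comes from (assuming one block scale per group of 16
weights, which is what the figure implies): dropping each scale from 4 bits
to 3 bits saves

  \frac{1\ \text{bit per scale}}{16\ \text{weights per scale}} = 0.0625\ \text{bpw}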

We now have
PPL(LLaMA-v2-7B ) = 9.4469 at 1.96 bpw
PPL(LLaMA-v2-13B) = 6.8717 at 1.93 bpw
PPL(LLaMA-v2-70B) = 4.8568 at 1.85 bpw
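
Putting the pieces described so far together, here is a purely illustrative
scalar dequantization sketch. It is not the actual block_iq1_m layout or
code; the struct, the constant names, the shift magnitude kDelta, and the
one-scale-per-16-weights grouping are assumptions made only to show how the
super-block scale, the 3-bit block scales, and the per-group-of-8 shifts
combine:

    #include <cstdint>

    // Illustrative constants only; the real format packs everything into bit fields.
    constexpr int   kBlockSize = 32;      // weights covered by this toy block
    constexpr int   kGroupSize = 8;       // each group of 8 has its own shift (per the log above)
    constexpr float kDelta     = 0.0625f; // assumed shift magnitude, for illustration

    struct ToyIQ1MBlock {
        int8_t  values[kBlockSize];                   // in {-1, 0, +1}, looked up from the IQ1 grid
        uint8_t shift_sign[kBlockSize / kGroupSize];  // 0 -> +kDelta, 1 -> -kDelta, one per group of 8
        uint8_t scale3[kBlockSize / 16];              // 3-bit scale (0..7) per 16 weights
    };

    // Dequantize one toy block: x_i ~= d * scale * (value_i + shift of its group).
    static void toy_dequantize_iq1m(const ToyIQ1MBlock & blk, float d, float * out) {
        for (int i = 0; i < kBlockSize; ++i) {
            const float delta = blk.shift_sign[i / kGroupSize] ? -kDelta : kDelta;
            const float scale = (float) blk.scale3[i / 16];
            out[i] = d * scale * (blk.values[i] + delta);
        }
    }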

* iq1_m: scalar dot product

* iq1_m: AVX2 dot product

* iq1_m: very slightly faster AVX2 dot product

* iq1_m: ARM_NEON dot product

Works, but very slow (10.5 t/s)

* iq1_m: Metal - dequantize works, dot product does not

* iq1_m: Metal now works

About the same performance as iq1_s.

* iq1_m: minor

* iq1_m: checking pure iq1_m quantization

It is pretty bad: PPL(LLaMA-v2-7B) = 34, even with output.weight quantized
with Q4_K.

* iq1_m: slightly faster ARM_NEON dot product

10.5 t/s -> 11.65 t/s

* iq1_m: faster ARM_NEON dot product

11.65 t/s -> 14.9 t/s

* iq1_m: another minor ARM_NEON dot product improvement

14.9 -> 15.0 t/s

* iq1_m: small PPL improvement via super-block scale adjustment

After quantizing the block scales, redo the super-block scale fit (a sketch
of the idea follows the PPL numbers below).

PPL(LLaMA-v2-7B ) = 9.3346
PPL(LLaMA-v2-13B) = 6.8419
PPL(LLaMA-v2-70B) = 4.8294
PPL(Mistral-7B  ) = 8.1624
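
The refit mentioned above is a standard weighted least-squares update. A
minimal sketch of the idea, not the actual quantize_row_iq1_m code: the
weights w would come from the importance matrix, and q already folds in the
per-group shift and the 3-bit block scale, so only the super-block scale d
is re-solved:

    #include <cstddef>

    // Best d minimizing sum_i w[i] * (x[i] - d*q[i])^2 has a closed form.
    static float refit_superblock_scale(const float * x, const float * q,
                                        const float * w, size_t n) {
        double sum_xq = 0.0, sum_qq = 0.0;
        for (size_t i = 0; i < n; ++i) {
            sum_xq += (double) w[i] * x[i] * q[i];
            sum_qq += (double) w[i] * q[i] * q[i];
        }
        return sum_qq > 0.0 ? (float) (sum_xq / sum_qq) : 0.0f;
    }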

* iq1_m: adapt to CUDA refactoring

* iq1_m: remove unused variable

We have progressed to warnings being errors.

* iq1_m: add to backend-ops tests

* iq1_m: fix Windows ARM

* iq1_m: use common definition of iq1m_scale_t

* cuda: assert -> NO_DEVICE_CODE

* iq1_M: PR comments

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-03-26 15:21:27 +01:00
.gitignore tests : gitignore ggml-common.h 2024-03-09 14:17:11 +02:00
CMakeLists.txt json-schema-to-grammar improvements (+ added to server) (#5978) 2024-03-21 11:50:43 +00:00
get-model.cpp ci : add model tests + script wrapper (#4586) 2024-01-26 14:18:00 +02:00
get-model.h ci : add model tests + script wrapper (#4586) 2024-01-26 14:18:00 +02:00
run-json-schema-to-grammar.mjs json-schema-to-grammar improvements (+ added to server) (#5978) 2024-03-21 11:50:43 +00:00
test-autorelease.cpp ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
test-backend-ops.cpp IQ1_M: 1.75 bpw quantization (#6302) 2024-03-26 15:21:27 +01:00
test-c.c Nomic Vulkan backend (#4456) 2024-01-29 15:50:50 -05:00
test-chat-template.cpp llama : add Orion chat template (#6066) 2024-03-15 10:44:57 +02:00
test-double-float.cpp ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) 2023-10-30 19:19:15 +02:00
test-grad0.cpp cuda : improve cuda pool efficiency using virtual memory (#4606) 2023-12-24 14:34:22 +01:00
test-grammar-parser.cpp ggml, common, examples, tests : fixed type arguments in printf (#5528) 2024-02-18 18:20:12 +02:00
test-json-schema-to-grammar.cpp tests : conditional python & node json schema tests (#6207) 2024-03-22 15:09:07 +02:00
test-llama-grammar.cpp ggml, common, examples, tests : fixed type arguments in printf (#5528) 2024-02-18 18:20:12 +02:00
test-model-load-cancel.cpp ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
test-opt.cpp code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
test-quantize-fns.cpp tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303) 2024-03-25 19:33:15 +02:00
test-quantize-perf.cpp ggml : add mmla kernels for quantized GEMM (#4966) 2024-02-11 15:22:33 +02:00
test-rope.cpp llama : custom attention mask + parallel decoding + no context swaps (#3228) 2023-09-28 19:04:36 +03:00
test-sampling.cpp sampling: fix top_k <= 0 (#5388) 2024-02-08 09:46:30 +01:00
test-tokenizer-0-falcon.cpp ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
test-tokenizer-0-falcon.py ci : add flake8 to github actions (python linting) (#4129) 2023-11-20 11:35:47 +01:00
test-tokenizer-0-llama.cpp ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
test-tokenizer-0-llama.py ci : add flake8 to github actions (python linting) (#4129) 2023-11-20 11:35:47 +01:00
test-tokenizer-1-bpe.cpp llama : refactor unicode stuff (#5992) 2024-03-11 17:47:47 +02:00
test-tokenizer-1-llama.cpp llama : refactor unicode stuff (#5992) 2024-03-11 17:47:47 +02:00