Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-28 12:24:35 +00:00

Commit 55c1b2a3bb
* iq1_m: basics
* iq1_m: basics-2
* iq1_m: CUDA dequantize works
  On the very first shot I get PPL = 9.76 for LLaMA-v2-7B.
* iq1_m: separate shifts for each group of 8 in a block
  We get
      PPL(LLaMA-v2-7B)  = 9.2810
      PPL(LLaMA-v2-13B) = 6.8105
  Not bad, but slightly higher than sqrt(PPL(IQ1_S) * PPL(IQ2_XXS)), which is
  the expected outcome given that IQ1_M is halfway between IQ1_S and IQ2_XXS
  in terms of bpw (a worked form of this expectation follows after the list).
  From this we would expect
      PPL = 9.14 for LLaMA-v2-7B
      PPL = 6.63 for LLaMA-v2-13B
* iq1_m: go to 3-bit scales
  There is a slight increase in PPL, but the 0.0625 bpw reduction in size is
  totally worth it. We now have
      PPL(LLaMA-v2-7B)  = 9.4469 at 1.96 bpw
      PPL(LLaMA-v2-13B) = 6.8717 at 1.93 bpw
      PPL(LLaMA-v2-70B) = 4.8568 at 1.85 bpw
  (An illustrative block layout is sketched after the list.)
* iq1_m: scalar dot product
* iq1_m: AVX2 dot product
* iq1_m: very slightly faster AVX2 dot product
* iq1_m: ARM_NEON dot product
  Works, but very slow (10.5 t/s).
* iq1_m: Metal - dequantize works, dot product does not
* iq1_m: Metal now works
  About the same performance as iq1_s.
* iq1_m: minor
* iq1_m: checking pure iq1_m quantization
  It is pretty bad: PPL(LLaMA-v2-7B) = 34 if we quantize output.weight with Q4_K.
* iq1_m: slightly faster ARM_NEON dot product
  10.5 t/s -> 11.65 t/s
* iq1_m: faster ARM_NEON dot product
  11.65 t/s -> 14.9 t/s
* iq1_m: another minor ARM_NEON dot product improvement
  14.9 -> 15.0 t/s
* iq1_m: small PPL improvement via super-block scale adjustment
  After quantizing the block scales, redo the super-block scale fit (a
  least-squares sketch follows the list).
      PPL(LLaMA-v2-7B)  = 9.3346
      PPL(LLaMA-v2-13B) = 6.8419
      PPL(LLaMA-v2-70B) = 4.8294
      PPL(Mistral-7B)   = 8.1624
* iq1_m: adapt to CUDA refactoring
* iq1_m: remove unused variable
  We have progressed to warnings being errors.
* iq1_m: add to backend-ops tests
* iq1_m: fix Windows ARM
* iq1_m: use common definition of iq1m_scale_t
* cuda: assert -> NO_DEVICE_CODE
  (The pattern is sketched after the list.)
* iq1_M: PR comments

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
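The sqrt expectation above comes from treating log-perplexity as roughly linear in bits per weight over this narrow range, so the bpw midpoint lands at the log-space midpoint, i.e. the geometric mean. Spelled out:

```latex
% Assuming log(PPL) is approximately linear in bpw between the two neighbours:
\log \mathrm{PPL}(\mathrm{IQ1\_M}) \approx
  \tfrac{1}{2}\left(\log \mathrm{PPL}(\mathrm{IQ1\_S}) + \log \mathrm{PPL}(\mathrm{IQ2\_XXS})\right)
\;\Longrightarrow\;
\mathrm{PPL}(\mathrm{IQ1\_M}) \approx
  \sqrt{\mathrm{PPL}(\mathrm{IQ1\_S}) \cdot \mathrm{PPL}(\mathrm{IQ2\_XXS})}
```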
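The per-group-of-8 shifts and 3-bit scales suggest a super-block layout along the following lines. This is an illustrative sketch only; the field names, sizes, and packing are assumptions, not the exact block_iq1_m definition in ggml.

```cuda
#include <stdint.h>

#define QK_K 256                  // k-quant super-block size

// Shared fp16 super-block scale, handled as raw bits (cf. iq1m_scale_t).
typedef uint16_t iq1m_scale_t;

typedef struct {
    uint8_t qs[QK_K/8];           // low bits of the codebook indices
    uint8_t qh[QK_K/16];          // high index bits plus a shift bit per group of 8
    uint8_t scales[QK_K/32];      // packed 3-bit group scales; the leftover bits
                                  // carry the shared fp16 super-block scale
} block_iq1_m;
```

At QK_K/8 + QK_K/16 + QK_K/32 = 56 bytes per 256 weights this comes to 1.75 bpw at the block level; the model-level bpw figures above are slightly higher because not all tensors use this type.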
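For the super-block scale adjustment, here is a minimal sketch of what redoing the fit could look like, assuming a weighted least-squares objective; the function and parameter names (x, q, w, n) are made up for illustration.

```cuda
// After the 3-bit group scales are quantized and folded into the integer
// weights q, refit the single super-block scale d minimizing
//     sum_i w_i * (x_i - d*q_i)^2
// where x are the original weights and w the importance weights.
static float refit_superblock_scale(const float * x, const int8_t * q,
                                    const float * w, int n) {
    double sumqx = 0.0, sumq2 = 0.0;
    for (int i = 0; i < n; ++i) {
        sumqx += (double)w[i]*x[i]*q[i];
        sumq2 += (double)w[i]*q[i]*q[i];
    }
    // Closed-form least-squares solution; guard against an all-zero block.
    return sumq2 > 0.0 ? (float)(sumqx/sumq2) : 0.0f;
}
```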
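The assert -> NO_DEVICE_CODE item swaps device-side asserts in architecture-gated stubs for the NO_DEVICE_CODE macro from common.cuh. A hedged sketch of the pattern; the function name, signature, and the MIN_CC_DP4A gate are placeholders here, not a quote of the actual kernel:

```cuda
#include "common.cuh"

static __device__ __forceinline__ float vec_dot_iq1_m_stub(
        const void * vbq, const int * q8, const int iqs) {
#if __CUDA_ARCH__ >= MIN_CC_DP4A    // only build the body where dp4a exists
    // ... the real IQ1_M dot product would go here ...
    return 0.0f;
#else
    GGML_UNUSED(vbq); GGML_UNUSED(q8); GGML_UNUSED(iqs);
    NO_DEVICE_CODE;                 // previously: assert(false)
    return 0.0f;                    // unreachable
#endif
}
```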
Files:

acc.cu
acc.cuh
alibi.cu
alibi.cuh
arange.cu
arange.cuh
argsort.cu
argsort.cuh
binbcast.cu
binbcast.cuh
clamp.cu
clamp.cuh
common.cuh
concat.cu
concat.cuh
convert.cu
convert.cuh
cpy.cu
cpy.cuh
dequantize.cuh
diagmask.cu
diagmask.cuh
dmmv.cu
dmmv.cuh
getrows.cu
getrows.cuh
im2col.cu
im2col.cuh
mmq.cu
mmq.cuh
mmvq.cu
mmvq.cuh
norm.cu
norm.cuh
pad.cu
pad.cuh
pool2d.cu
pool2d.cuh
quantize.cu
quantize.cuh
rope.cu
rope.cuh
scale.cu
scale.cuh
softmax.cu
softmax.cuh
sumrows.cu
sumrows.cuh
tsembd.cu
tsembd.cuh
unary.cu
unary.cuh
upscale.cu
upscale.cuh
vecdotq.cuh