llama.cpp/src
Georgi Gerganov 73cf442e7b
llama : fix Gemma-2 Query scaling factors (#8473)
* 9B - query_pre_attn_scalar = 256 not 224

See 03e657582d

Gemma-2 9B should use query_pre_attn_scalar = 256, not 224 (the value of self.config.hidden_size // self.config.num_attention_heads)

* llama : fix Gemma-2 Query scaling factor

ggml-ci

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>
2024-07-14 14:05:09 +03:00
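
The fix concerns the scale applied to the query vectors before the attention soft-max, 1/sqrt(query_pre_attn_scalar): for the 9B that scalar is the head dimension (256), giving 1/sqrt(256) = 0.0625 rather than 1/sqrt(224) ≈ 0.0668. Below is a minimal sketch of the arithmetic only, not the actual llama.cpp code; the hyper-parameter values are assumptions taken from Google's published Gemma-2 configs.

```cpp
// Minimal sketch (not the llama.cpp implementation) of why the two Gemma-2
// sizes need different query scaling factors. Queries are scaled by
// 1/sqrt(query_pre_attn_scalar) before the attention soft-max; that scalar is
// head_dim (256) for the 9B but hidden_size/num_heads (144) for the 27B, so a
// single formula cannot be hard-coded for both. Values below are assumed from
// Google's published Gemma-2 configs.
#include <cmath>
#include <cstdio>

struct gemma2_hparams {
    const char * name;
    int hidden_size;
    int num_attention_heads;
    int head_dim;
    int query_pre_attn_scalar; // value the model was trained with
};

int main() {
    const gemma2_hparams models[] = {
        { "gemma-2-9b",  3584, 16, 256, 256 }, // 256 == head_dim, not 3584/16 = 224
        { "gemma-2-27b", 4608, 32, 128, 144 }, // 144 == 4608/32, not head_dim = 128
    };

    for (const auto & m : models) {
        const float scale = 1.0f / sqrtf((float) m.query_pre_attn_scalar);
        printf("%-12s hidden/heads = %3d, head_dim = %3d, query_pre_attn_scalar = %3d -> Q scale = %.6f\n",
               m.name,
               m.hidden_size / m.num_attention_heads,
               m.head_dim,
               m.query_pre_attn_scalar,
               scale);
    }
    return 0;
}
```

Note that the 27B needs the opposite choice (hidden_size / num_heads = 144 rather than head_dim = 128), which is presumably why the scaling has to be selected per model size rather than derived from one formula.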
Name             | Last commit message                                                                                 | Last commit date
CMakeLists.txt   | tests : add _CRT_SECURE_NO_WARNINGS for WIN32 (#8231)                                               | 2024-07-04 13:53:42 +03:00
llama.cpp        | llama : fix Gemma-2 Query scaling factors (#8473)                                                   | 2024-07-14 14:05:09 +03:00
unicode-data.cpp | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00
unicode-data.h   | llama : reorganize source code + improve CMake (#8006)                                              | 2024-06-26 18:33:02 +03:00
unicode.cpp      | msvc : silence codecvt c++17 deprecation warnings (#8395)                                           | 2024-07-10 14:40:53 +03:00
unicode.h        | llama : reorganize source code + improve CMake (#8006)                                              | 2024-06-26 18:33:02 +03:00