llama.cpp/src
Gabe Goodhart 3d6bf6919f
llama : add IBM Granite MoE architecture (#9438)
* feat(gguf-py): Add granitemoe architecture

This includes the addition of new tensor names for the new MoE layers.
These may not be correct yet, given the hack needed in gguf_writer.py to
double-check the shape length for these layers.

Branch: GraniteMoE

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(convert_hf_to_gguf): Add GraniteMoeModel

GraniteMoe has the same configuration deltas as Granite

Branch: GraniteMoE

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(granitemoe convert): Split the double-sized input layer into gate and up

After a lot of staring and squinting, it's clear that the standard Mixtral
expert implementation is equivalent to the vectorized parallel experts in
Granite. The difference is that in Granite, w1 and w3 are concatenated into
a single tensor, "input_linear". Rather than reimplementing all of the math
on the llama.cpp side, the much simpler route is to split this tensor during
conversion and follow the standard Mixtral path.
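
A minimal sketch of that split, assuming the HF checkpoint stores the per-expert
w1/w3 pair concatenated along the feature dimension as an "input_linear" tensor
of shape (n_experts, 2 * n_ff, n_embd); the helper name and layout here are
illustrative, not the actual conversion code:

    # Hedged sketch: split a merged GraniteMoE expert projection into the
    # Mixtral-style gate/up tensors. The (n_experts, 2 * n_ff, n_embd) layout
    # is an assumption for illustration.
    import torch

    def split_input_linear(input_linear: torch.Tensor, n_ff: int):
        gate_exps = input_linear[..., :n_ff, :]  # first half  -> w1 / ffn_gate_exps
        up_exps   = input_linear[..., n_ff:, :]  # second half -> w3 / ffn_up_exps
        return gate_exps, up_exps

    # toy usage with made-up sizes: 8 experts, n_ff = 1024, n_embd = 2048
    gate, up = split_input_linear(torch.randn(8, 2 * 1024, 2048), n_ff=1024)
    assert gate.shape == up.shape == (8, 1024, 2048)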

Branch: GraniteMoE

Co-Authored-By: alex.brooks@ibm.com

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(granitemoe): Implement granitemoe

GraniteMoE follows the Mixtral architecture (once the input_linear layers
are split into gate_exps/up_exps). The main delta is the addition of the
same four multipliers used in Granite.
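
Where those multipliers enter the forward pass, as a hedged sketch: the toy
blocks below stand in for the real attention / MoE FFN, the scalar names follow
the HF Granite config (embedding_multiplier, attention_multiplier,
residual_multiplier, logits_scaling), and none of this is llama.cpp code:

    # Hedged sketch of the four Granite multipliers in a toy decoder stack.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyGraniteBlock(nn.Module):
        def __init__(self, d, attention_multiplier, residual_multiplier):
            super().__init__()
            self.attention_multiplier = attention_multiplier
            self.residual_multiplier = residual_multiplier
            self.qkv = nn.Linear(d, 3 * d, bias=False)
            self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.SiLU(), nn.Linear(4 * d, d))

        def forward(self, x):
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # attention_multiplier replaces the usual 1/sqrt(head_dim) score scale
            # (single head, no causal mask -- this is only a sketch)
            attn = F.softmax(q @ k.transpose(-2, -1) * self.attention_multiplier, dim=-1) @ v
            # residual_multiplier scales every residual branch before it is added
            x = x + attn * self.residual_multiplier
            x = x + self.ffn(x) * self.residual_multiplier  # MoE experts in the real model
            return x

    class ToyGranite(nn.Module):
        # toy default values below, not a real Granite config
        def __init__(self, vocab=100, d=32, n_layers=2, embedding_multiplier=12.0,
                     attention_multiplier=0.0078, residual_multiplier=0.22,
                     logits_scaling=8.0):
            super().__init__()
            self.embedding_multiplier = embedding_multiplier
            self.logits_scaling = logits_scaling
            self.embed = nn.Embedding(vocab, d)
            self.blocks = nn.ModuleList(
                ToyGraniteBlock(d, attention_multiplier, residual_multiplier)
                for _ in range(n_layers))
            self.lm_head = nn.Linear(d, vocab, bias=False)

        def forward(self, tokens):
            # embedding_multiplier scales the token embeddings
            x = self.embed(tokens) * self.embedding_multiplier
            for block in self.blocks:
                x = block(x)
            # logits_scaling divides the final logits
            return self.lm_head(x) / self.logits_scaling

    logits = ToyGranite()(torch.randint(0, 100, (1, 8)))  # -> (1, 8, vocab)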

Branch: GraniteMoE

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* Typo fix in docstring

Co-Authored-By: ggerganov@gmail.com

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(conversion): Simplify tensor name mapping in conversion

Branch: GraniteMoE

Co-Authored-By: git@compilade.net
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(convert): Remove unused tensor name mappings

Branch: GraniteMoE

Co-Authored-By: git@compilade.net
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(convert): Sanity check on merged FFN tensor sizes
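
A sketch of the kind of check this refers to, assuming the merged tensor's
second-to-last dimension must be exactly twice intermediate_size (the helper
name and its exact placement in convert_hf_to_gguf.py are assumptions):

    # Hedged sketch: confirm the merged input_linear tensor really contains both
    # w1 and w3 before splitting it; "intermediate_size" is the assumed HF config field.
    def check_merged_ffn_size(shape: tuple, intermediate_size: int) -> None:
        n_experts, merged_ff, n_embd = shape
        if merged_ff != 2 * intermediate_size:
            raise ValueError(
                f"expected merged FFN dim 2 * {intermediate_size}, got {merged_ff}")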

Branch: GraniteMoE

Co-Authored-By: git@compilade.net
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Allow "output" layer in granite moe architecture (convert and cpp)

Branch: GraniteMoE

Co-Authored-By: git@compilade.net
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(granite): Add missing 'output' tensor for Granite

This is a fix for the previous `granite` architecture PR. Recent snapshots
include this tensor (`lm_head.weight`) as part of the architecture.
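
Sketch of the mapping this implies on the conversion side ("output.weight" is
the conventional GGUF name for the final projection; the dict is illustrative,
not the actual tensor map):

    # Illustrative only: when a snapshot ships an explicit lm_head.weight, map it
    # to the GGUF "output" tensor instead of relying on tied token embeddings.
    HF_TO_GGUF = {
        "model.embed_tokens.weight": "token_embd.weight",
        "lm_head.weight":            "output.weight",  # now allowed for granite / granitemoe
    }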

Branch: GraniteMoE

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-25 10:06:52 +03:00
CMakeLists.txt llama : move vocab, grammar and sampling into separate files (#8508) 2024-07-23 13:10:17 +03:00
llama-grammar.cpp llama : refactor sampling v2 (#9294) 2024-09-07 15:16:19 +03:00
llama-grammar.h llama : refactor sampling v2 (#9294) 2024-09-07 15:16:19 +03:00
llama-impl.h log : add CONT level for continuing previous log entry (#9610) 2024-09-24 10:15:35 +03:00
llama-sampling.cpp sampling : avoid expensive softmax during greedy sampling (#9605) 2024-09-24 09:03:17 +03:00
llama-sampling.h llama : refactor samplers internal implementation (#9370) 2024-09-08 15:52:07 +02:00
llama-vocab.cpp llama : keep track of all EOG tokens in the vocab (#9609) 2024-09-24 10:16:06 +03:00
llama-vocab.h llama : keep track of all EOG tokens in the vocab (#9609) 2024-09-24 10:16:06 +03:00
llama.cpp llama : add IBM Granite MoE architecture (#9438) 2024-09-25 10:06:52 +03:00
unicode-data.cpp Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) 2024-07-02 12:18:10 -04:00
unicode-data.h llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
unicode.cpp unicode : add <algorithm> (#9508) 2024-09-17 09:51:15 +03:00
unicode.h llama : move vocab, grammar and sampling into separate files (#8508) 2024-07-23 13:10:17 +03:00