llama : fix typo in llama_tensor_get_type comment [no ci] (#8937)

Daniel Bevenius 2024-08-09 08:32:23 +02:00 committed by GitHub
parent daef3ab233
commit 6f6496bb09

@@ -15304,7 +15304,7 @@ static ggml_type llama_tensor_get_type(quantize_state_internal & qs, ggml_type n
     const int n_expert = std::max(1, (int)qs.model.hparams.n_expert);
     auto layer_info = [n_expert] (int i_layer, int n_layer, const char * name) {
         if (n_expert > 1) {
-            // Believe it or not, "experts" in the FFN of Mixtral-8x7B are not consecutive, but iccasionally randomly
+            // Believe it or not, "experts" in the FFN of Mixtral-8x7B are not consecutive, but occasionally randomly
             // sprinkled in the model. Hence, simply dividing i_ffn_down by n_expert does not work
             // for getting the current layer as I initially thought, and we need to resort to parsing the
             // tensor name.
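
The comment above refers to recovering the layer index by parsing the tensor name rather than dividing a running counter by the expert count. As a rough standalone sketch of that idea (not the code from this commit), assuming GGUF-style tensor names of the form `blk.<layer>.<suffix>` such as `blk.12.ffn_down.weight`, the layer index can be pulled out of the name with a simple `sscanf`:

```cpp
#include <cstdio>
#include <utility>

// Illustrative sketch only: recover the layer index from a tensor name,
// assuming names follow the "blk.<layer>.<suffix>" convention.
static std::pair<int, int> layer_info_from_name(int i_layer, int n_layer, const char * name) {
    // With experts present, the tensor counter cannot simply be divided by
    // n_expert to obtain the layer, so parse the "blk.<layer>." prefix instead.
    if (std::sscanf(name, "blk.%d.", &i_layer) != 1 || i_layer < 0 || i_layer >= n_layer) {
        i_layer = -1; // the name did not yield a valid layer index
    }
    return std::make_pair(i_layer, n_layer);
}

int main() {
    const auto info = layer_info_from_name(0, 32, "blk.12.ffn_down.weight");
    std::printf("layer %d of %d\n", info.first, info.second); // prints: layer 12 of 32
    return 0;
}
```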