mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-11-14 23:09:53 +00:00
06943a69f6
* ggml : move rope type enum to ggml.h
This commit moves the `llama_rope_type` enum from `llama.h` to
`ggml.h` and changes its name to `ggml_rope_type`.
The motivation for this change is to address the TODO in `llama.h` and
use the enum in ggml.
Note: This commit does not change the `mode` parameter to be of type
`enum ggml_rope_type`. The name `mode` and its usage suggest that it
might be more generic and possibly used as a bit field for multiple
flags. Further investigation/discussion may be needed to determine
if `mode` should be restricted to RoPE types.
* squash! ggml : move rope type enum to ggml.h
This commit removes GGML_ROPE_TYPE_NONE and GGML_ROPE_TYPE_GLM from
ggml.h, and adds them back to the llama_rope_type enum.
I've kept the assert for GGML_ROPE_TYPE_GLM as I'm not sure if it is
safe to remove it yet.
* squash! ggml : move rope type enum to ggml.h
This commit removes the enum ggml_rope_type from ggml.h and replaces it
with a define (GGML_ROPE_TYPE_NEOX). This define is used in the code to
check if the mode is set to GPT-NeoX. Also the enum llama_rope_type has
been updated to reflect this change.
* squash! ggml : move rope type enum to ggml.h
This commit contains a suggestion to enable the GGML_ROPE_TYPE_NEOX
macro/define to be passed to the shader compiler.
* squash! ggml : move rope type enum to ggml.h
This commit fixes the editorconfig-checker warnings.
* squash! ggml : move rope type enum to ggml.h
Update comment for ggml_rope function.
* Revert "squash! ggml : move rope type enum to ggml.h"
This reverts commit
Changed files:

common.comp
op_add.comp
op_addrow.comp
op_cpy_f16_f16.comp
op_cpy_f16_f32.comp
op_cpy_f32_f16.comp
op_cpy_f32_f32.comp
op_diagmask.comp
op_gelu.comp
op_getrows_f16.comp
op_getrows_f32.comp
op_getrows_q4_0.comp
op_getrows_q4_1.comp
op_getrows_q6_k.comp
op_getrows.comp
op_mul_mat_f16.comp
op_mul_mat_mat_f32.comp
op_mul_mat_q4_0.comp
op_mul_mat_q4_1.comp
op_mul_mat_q6_k.comp
op_mul_mat_q8_0.comp
op_mul_mv_q_n_pre.comp
op_mul_mv_q_n.comp
op_mul.comp
op_norm.comp
op_relu.comp
op_rmsnorm.comp
op_rope_f16.comp
op_rope_f32.comp
op_scale_8.comp
op_scale.comp
op_silu.comp
op_softmax.comp
rope_common.comp