Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-26 03:14:35 +00:00.

Latest commit 0c7b3595b9:
* add control-vector-generator
* calc diff
* add comments
* proof-of-concept stdlib implementation

  Implements PCA and file writing using mostly standard libraries. The output is recognized as a functional control vector, but outputs gibberish.

* param parsing, refactor, comments

  Added basic command-line parameters for the outfile and one positive and one negative prompt. Refactored some messy code in the PCA computation and GGUF exporting. Left a bunch of comments regarding further work needed.

* example template completions

  Implements an example template set built from the positive/negative prompts, like the control vector Python implementation.

* add multi prompts, multi-thread for PCA
* fix mem error
* add debugs
* fix matrix transpose multiplication

  you have got to be kidding me

* preliminary template/multiprompt support

  The model is running out of context and that ought to be fixed (it segfaults), but other than that it looks goodish.

* fix zero output & param parsing, functional templating

  Fixed a bug where the output file had no tensor data/was all zero. Fixed a bug where single-hyphen flags were not being correctly parsed. Implements creation of templated prompts from input (still needs to adapt based on the model).

* fix square_diff matmul index range and CRLF->LF line endings

  Fixed a logic error where square_diff would not multiply all rows. Fixed a formatting error where the provided completions.txt had CRLF line endings.

* add command-line args for num threads, num completions file lines, always reload model

  Refactored a few things and did what the commit message says on the tin.

* code aestheticization
* fix compiler warnings
* in-series multithreading for prompt embedding?

  Added commented-out code to attempt to start implementing multithreading for embedding in main.

* remove unnecessary multithreading
* interim fix memory leak
* translated everything but PCA (I think)
* tentatively translate the rest
* fix ggml errors and make new ones

  At least it compiles and runs.

* fix cb_eval
* temporary commit while I move dev environments

  It finally outputs a functioning control vector - "functioning" in the sense that it can be loaded and it clearly has the right idea, but it makes the model incoherent.

* update debug statements
* pre-tokenize so we can allocate correct memory to ctx_diffs_wrapped
* update comments
* (wip) refactor
* clean up PCA ggml implementation
* fix shape of v_diff_original
* add n_batch for pca
* working version
* remember to copy back the last_eigenvector
* fix n_completions
* bring back n_completions
* default n_pca_batch to 20
* fix macos build
* add to makefile all targets
* use ggml_format_name
* add readme
* fix .editorconfig
* use ggml_backend_tensor_copy
* attempt to fix compile problem on mac
* fix compile warning
* reuse allocr
* move param parser to common
* better error handling
* clean up a bit
* add print_usage
* shorten help msg
* beautify help msg
* escape prompt by default
* change compile target to llama-cvector-generator
* typo
* disable GPU for PCA
* code style

---------

Co-authored-by: Christian Zhou-Zheng <christianzhouzheng@gmail.com>
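The log above traces the cvector-generator pipeline: build templated positive/negative prompt pairs, embed them with the model, take per-layer hidden-state diffs, run PCA on the diffs, and export the result as a GGUF control vector. The three sketches below illustrate those stages; they are reconstructions under stated assumptions, not the PR's actual code.

First, the "example template completions" step: each positive/negative prompt pair is combined with every line of a completions file, so each combination contributes one diff sample. A minimal C++ sketch with hypothetical names (the real templates follow the Python control-vector implementation mentioned in the log):

```cpp
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Pair the i-th positive prompt with the i-th negative prompt, then
// append every completion line to both, yielding one (pos, neg)
// evaluation pair per combination. Names here are illustrative.
static std::vector<std::pair<std::string, std::string>> build_pairs(
        const std::vector<std::string> & positives,
        const std::vector<std::string> & negatives,
        const std::vector<std::string> & completions) {
    std::vector<std::pair<std::string, std::string>> pairs;
    for (size_t i = 0; i < positives.size() && i < negatives.size(); ++i) {
        for (const auto & c : completions) {
            pairs.push_back({ positives[i] + " " + c,
                              negatives[i] + " " + c });
        }
    }
    return pairs;
}

int main() {
    const auto pairs = build_pairs(
        { "Act extremely happy." },
        { "Act extremely sad." },
        { "I feel", "Today is" });
    for (const auto & p : pairs) {
        printf("+ %s\n- %s\n", p.first.c_str(), p.second.c_str());
    }
}
```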
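The "square_diff matmul" and "remember to copy back the last_eigenvector" entries point at the PCA core: with D the matrix stacking positive-minus-negative hidden states (one row per sample), power iteration on A = DᵀD converges to the dominant eigenvector, i.e. the top principal direction, which becomes one layer's control vector. A stdlib-only sketch in the spirit of the log's "proof-of-concept stdlib implementation" (the shipped version runs this through ggml in batches of n_pca_batch with the GPU disabled, and a textbook PCA would also mean-center D first):

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

using Matrix = std::vector<std::vector<float>>;

// A = D^T * D: the "square_diff" step. Note the outer loop runs over
// ALL rows of D; the bug fixed in the log was an index range that
// skipped rows here.
static Matrix square_diff(const Matrix & D) {
    const size_t rows = D.size();
    const size_t cols = D[0].size();
    Matrix A(cols, std::vector<float>(cols, 0.0f));
    for (size_t r = 0; r < rows; ++r) {
        for (size_t i = 0; i < cols; ++i) {
            for (size_t j = 0; j < cols; ++j) {
                A[i][j] += D[r][i] * D[r][j];
            }
        }
    }
    return A;
}

// Repeated v <- normalize(A * v) converges to the dominant eigenvector
// of A, i.e. the top principal direction of the diffs.
static std::vector<float> power_iteration(const Matrix & A, int iters) {
    const size_t n = A.size();
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    std::vector<float> v(n);
    for (auto & x : v) { x = dist(rng); }
    for (int it = 0; it < iters; ++it) {
        std::vector<float> w(n, 0.0f);
        for (size_t i = 0; i < n; ++i) {
            for (size_t j = 0; j < n; ++j) {
                w[i] += A[i][j] * v[j];
            }
        }
        float norm = 0.0f;
        for (const float x : w) { norm += x * x; }
        norm = std::sqrt(norm);
        if (norm == 0.0f) { break; } // degenerate input
        for (size_t i = 0; i < n; ++i) { v[i] = w[i] / norm; }
    }
    return v; // the "last_eigenvector" to copy back per layer
}

int main() {
    // toy diff matrix: 4 samples x 3 dims of (positive - negative) states
    const Matrix D = { {1, 2, 0}, {2, 4, 1}, {1, 1, 0}, {3, 5, 1} };
    const auto dir = power_iteration(square_diff(D), 100);
    for (const float x : dir) { printf("% f", x); }
    printf("\n");
}
```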
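Finally, the export step writes one direction tensor per layer into a GGUF file. The calls below (gguf_init_empty, gguf_set_val_str, gguf_add_tensor, gguf_write_to_file, ggml_format_name) are the real ggml/gguf API that the log references, but the "controlvector" architecture key and the direction.N tensor naming are assumptions inferred from how llama.cpp loads control vectors, so treat this as a sketch of the format rather than the PR's exporter:

```cpp
#include "ggml.h"

#include <cstring>
#include <vector>

// Sketch: write per-layer control-vector directions to a GGUF file.
// Key name and tensor naming are assumptions (see lead-in).
static void export_control_vector(const std::vector<std::vector<float>> & directions,
                                  int n_embd, const char * fname) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead() * (directions.size() + 1)
                          + directions.size() * (size_t) n_embd * sizeof(float)
                          + 1024 * 1024, // slack for alignment padding
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx  = ggml_init(params);
    struct gguf_context * gguf = gguf_init_empty();

    // assumed architecture key for control vectors
    gguf_set_val_str(gguf, "general.architecture", "controlvector");

    for (size_t il = 0; il < directions.size(); ++il) {
        struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd);
        // "use ggml_format_name" from the log; 1-based layer naming is an assumption
        ggml_format_name(t, "direction.%zu", il + 1);
        memcpy(t->data, directions[il].data(), (size_t) n_embd * sizeof(float));
        gguf_add_tensor(gguf, t);
    }

    gguf_write_to_file(gguf, fname, /*only_meta=*/ false);
    gguf_free(gguf);
    ggml_free(ctx);
}

int main() {
    // two dummy 8-dim directions, just to exercise the writer
    std::vector<std::vector<float>> dirs(2, std::vector<float>(8, 0.1f));
    export_control_vector(dirs, 8, "control_vector.gguf");
}
```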
Contents of the examples/ directory at this commit:

baby-llama/
batched/
batched-bench/
batched.swift/
benchmark/
convert-llama2c-to-ggml/
cvector-generator/
embedding/
eval-callback/
export-lora/
finetune/
gbnf-validator/
gguf/
gguf-split/
gritlm/
imatrix/
infill/
jeopardy/
llama-bench/
llama.android/
llama.swiftui/
llava/
lookahead/
lookup/
main/
main-cmake-pkg/
parallel/
passkey/
perplexity/
quantize/
quantize-stats/
retrieval/
rpc/
save-load-state/
server/
simple/
speculative/
sycl/
tokenize/
train-text-from-scratch/
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert-legacy-llama.py
json_schema_to_grammar.py
json-schema-pydantic-example.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic-models-to-grammar-examples.py
reason-act.sh
regex-to-grammar.py
server-embd.py
server-llama2-13B.sh
ts-type-to-grammar.sh