Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-11 21:39:52 +00:00.
Commit 37c746d687:

* enable qwen to llama.cpp
* llama : do not GPU split bias tensors

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Files in the prompts/ directory:

* alpaca.txt
* assistant.txt
* chat-with-baichuan.txt
* chat-with-bob.txt
* chat-with-qwen.txt
* chat-with-vicuna-v0.txt
* chat-with-vicuna-v1.txt
* chat.txt
* dan-modified.txt
* dan.txt
* LLM-questions.txt
* mnemonics.txt
* parallel-questions.txt
* reason-act.txt
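These are plain-text prompt files used by the llama.cpp examples. As a minimal sketch of how one is typically used, the `main` example can read its initial prompt from a file with `-f`; the model path below is a placeholder, and the remaining flags follow the usual interactive-chat invocation:

```sh
# Interactive chat seeded with the Bob prompt.
# The model path is a placeholder; point -m at your own GGUF model file.
./main -m ./models/7B/ggml-model-q4_0.gguf \
  --color -i -r "User:" -n 256 \
  -f prompts/chat-with-bob.txt
```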