mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-12-26 11:24:35 +00:00
4bd0f93e4a
* model: dbrx convert to gguf #6344
* llama: support dbrx #6344
* doc: dbrx: add the model as supported
* scripts: get-wikitext-2 add unzip
* llama: increase maximum experts allowed
* llama: factorize moe graph implementation between grok, mixtral and dbrx

---------

Co-authored-by: Megha Agarwal <16129366+megha95@users.noreply.github.com>
12 lines
247 B
Bash
Executable File
#!/bin/bash

wget https://huggingface.co/datasets/ggml-org/ci/resolve/main/wikitext-2-raw-v1.zip

unzip wikitext-2-raw-v1.zip

echo "Usage:"
echo ""
echo "  ./perplexity -m model.gguf -f wikitext-2-raw/wiki.test.raw [other params]"
echo ""

exit 0
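As a side note, the script above re-downloads and re-extracts on every run. A minimal idempotent variant could guard each step on its output already existing; the sketch below only *prints* the commands it would run (a dry run, not the upstream script), and the `fetch`/`extract` helper names are hypothetical:

```shell
#!/bin/bash
# Hypothetical dry-run sketch of an idempotent get-wikitext-2: skip steps
# whose outputs already exist. Commands are echoed, not executed.

ZIP=wikitext-2-raw-v1.zip
DIR=wikitext-2-raw

fetch() {
  if [ -f "$ZIP" ]; then
    # archive already present: nothing to do
    echo "skip download: $ZIP exists"
  else
    echo "wget https://huggingface.co/datasets/ggml-org/ci/resolve/main/$ZIP"
  fi
}

extract() {
  if [ -d "$DIR" ]; then
    # dataset already extracted: nothing to do
    echo "skip unzip: $DIR exists"
  else
    echo "unzip $ZIP"
  fi
}

fetch
extract
```

Run from a clean directory it prints the `wget` and `unzip` commands; on a second run, after the archive and directory exist, both steps are skipped.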