1c641e6aac
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
* Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
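
For reference, a minimal before/after sketch of the rename described above (model path, prompt, and flags are illustrative, not taken from the commit):

# before this change                                   # after this change
./main -m model.gguf -p "Hello" -n 64                  ./llama-cli -m model.gguf -p "Hello" -n 64
./server -m model.gguf --port 8080                     ./llama-server -m model.gguf --port 8080
./quantize model-f16.gguf model-q4_0.gguf q4_0         ./llama-quantize model-f16.gguf model-q4_0.gguf q4_0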
#!/bin/bash
set -e

# Read the first argument into a variable
arg1="$1"

# Shift the arguments to remove the first one
shift

# Dispatch to the renamed llama-* binaries based on the chosen command
if [[ "$arg1" == '--convert' || "$arg1" == '-c' ]]; then
    python3 ./convert-hf-to-gguf.py "$@"
elif [[ "$arg1" == '--quantize' || "$arg1" == '-q' ]]; then
    ./llama-quantize "$@"
elif [[ "$arg1" == '--run' || "$arg1" == '-r' ]]; then
    ./llama-cli "$@"
elif [[ "$arg1" == '--finetune' || "$arg1" == '-f' ]]; then
    ./llama-finetune "$@"
elif [[ "$arg1" == '--all-in-one' || "$arg1" == '-a' ]]; then
    echo "Quantizing f16 models to q4_0..."
    # After the shift above, $1 is the models directory and $2 the model
    # subdirectory (e.g. "/models/" 7B). Use a quoted glob rather than
    # parsing `ls` output so paths with spaces are handled correctly.
    for i in "$1/$2"/ggml-model-f16.bin*; do
        # ${i/f16/q4_0} substitutes "f16" with "q4_0" in the file name
        if [ -f "${i/f16/q4_0}" ]; then
            echo "Skip model quantization, it already exists: ${i/f16/q4_0}"
        else
            echo "Quantizing $i into ${i/f16/q4_0}..."
            ./llama-quantize "$i" "${i/f16/q4_0}" q4_0
        fi
    done
elif [[ "$arg1" == '--server' || "$arg1" == '-s' ]]; then
    ./llama-server "$@"
else
    echo "Unknown command: $arg1"
    echo "Available commands: "
    echo "  --run (-r): Run a previously converted model"
    echo "              ex: -m /models/7B/ggml-model-q4_0.bin -p \"Building a website can be done in 10 simple steps:\" -n 512"
    echo "  --convert (-c): Convert a Hugging Face model into GGUF"
    echo "                  ex: --outtype f16 \"/models/7B/\" "
    echo "  --quantize (-q): Quantize a converted model to a smaller type"
    echo "                   ex: \"/models/7B/ggml-model-f16.bin\" \"/models/7B/ggml-model-q4_0.bin\" 2"
    echo "  --finetune (-f): Run the finetune command to create a LoRA finetune of the model"
    echo "                   See the finetune documentation for command-line parameters"
    echo "  --all-in-one (-a): Execute --convert & --quantize"
    echo "                     ex: \"/models/\" 7B"
    echo "  --server (-s): Run a model on the server"
    echo "                 ex: -m /models/7B/ggml-model-q4_0.bin -c 2048 -ngl 43 -mg 1 --port 8080"
fi
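
A sketch of typical invocations, assuming this script is baked into a container image as its entrypoint (as with llama.cpp's "full" Docker image; the image tag and host mount path below are illustrative assumptions, and the per-command arguments mirror the script's own help text):

# quantize all f16 models under /models/7B, then run the quantized model
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --all-in-one "/models/" 7B
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --run -m /models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512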