Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-11-13 14:29:52 +00:00)
Commit 9c1ba55733:

* style: format with nixfmt/rfc101-style
* build(nix): Package gguf-py
* build(nix): Refactor to new scope for gguf-py
* build(nix): Exclude gguf-py from devShells
* build(nix): Refactor gguf-py derivation to take in exact deps
* build(nix): Enable pytestCheckHook and pythonImportsCheck for gguf-py
* build(python): Package python scripts with pyproject.toml
* chore: Cleanup
* dev(nix): Break up python/C devShells
* build(python): Relax pytorch version constraint (Nix has an older version)
* chore: Move cmake to nativeBuildInputs for devShell
* fmt: Reconcile formatting with rebase
* style: nix fmt
* cleanup: Remove unnecessary __init__.py
* chore: Suggestions from review
  - Filter out non-source files from llama-scripts flake derivation
  - Clean up unused closure
  - Remove scripts devShell
* revert: Bad changes
* dev: Simplify devShells, restore the -extra devShell
* build(nix): Add pyyaml for gguf-py
* chore: Remove some unused bindings
* dev: Add tiktoken to -extra devShells
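Several bullets above concern packaging gguf-py as its own Nix derivation that takes exact dependencies as arguments, with pytestCheckHook and pythonImportsCheck enabled. The sketch below shows roughly what such a derivation can look like, assuming nixpkgs' buildPythonPackage conventions; the version string, source path, and the numpy/pyyaml dependency list are illustrative assumptions, not the exact contents of this commit.

```nix
# Sketch of a gguf-py derivation; pins and paths are illustrative
# assumptions, not the exact contents of this commit.
{
  lib,
  buildPythonPackage,
  poetry-core,
  pytestCheckHook,
  numpy,
  pyyaml,
}:

buildPythonPackage {
  pname = "gguf";
  version = "0.0.0"; # placeholder; the real version comes from elsewhere
  pyproject = true;  # build from the package's pyproject.toml

  # Assumed location of the gguf-py subtree within the repo.
  src = lib.cleanSource ../gguf-py;

  nativeBuildInputs = [ poetry-core ];
  propagatedBuildInputs = [
    numpy
    pyyaml
  ];

  # Run the test suite and verify the module actually imports.
  nativeCheckInputs = [ pytestCheckHook ];
  pythonImportsCheck = [ "gguf" ];
}
```

Taking the exact dependencies as function arguments, rather than a whole Python package set, is presumably what "take in exact deps" refers to: callPackage can then wire in specific, pinned versions of each input.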
Directory listing:

* nix/
* cloud-v-pipeline
* full-cuda.Dockerfile
* full-rocm.Dockerfile
* full.Dockerfile
* llama-cli-cann.Dockerfile
* llama-cli-cuda.Dockerfile
* llama-cli-intel.Dockerfile
* llama-cli-rocm.Dockerfile
* llama-cli-vulkan.Dockerfile
* llama-cli.Dockerfile
* llama-cpp-cuda.srpm.spec
* llama-cpp.srpm.spec
* llama-server-cuda.Dockerfile
* llama-server-intel.Dockerfile
* llama-server-rocm.Dockerfile
* llama-server-vulkan.Dockerfile
* llama-server.Dockerfile
* tools.sh