llama.cpp/.devops/nix
Latest commit d6bd4d46dd by compilade: llama : support StableLM 2 1.6B (#5052)
* llama : support StableLM 2 1.6B

* convert : fix Qwen's set_vocab wrongly naming all special tokens [PAD{id}]

* convert : refactor Qwen's set_vocab to use it for StableLM 2 too

* nix : add tiktoken to llama-python-extra

* convert : use presence of tokenizer.json to determine StableLM tokenizer loader

  It's a less arbitrary heuristic than the vocab size.
2024-01-22 13:21:52 +02:00
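The last commit point above describes choosing the tokenizer loader by whether tokenizer.json exists in the model directory, instead of guessing from the vocab size. A minimal sketch of that heuristic is below; the function name and the "bpe"/"spm" labels are illustrative assumptions, not the actual identifiers used in llama.cpp's convert script.

```python
from pathlib import Path

def detect_stablelm_tokenizer(model_dir: str) -> str:
    """Hypothetical sketch: select the tokenizer loader from the files
    present in the model directory, rather than from the vocab size."""
    d = Path(model_dir)
    if (d / "tokenizer.json").is_file():
        # A Hugging Face "fast" tokenizer file is present,
        # so load the BPE-style vocab from it.
        return "bpe"
    # Otherwise fall back to a SentencePiece-style loader.
    return "spm"
```

The check is deterministic and self-describing: a model either ships tokenizer.json or it does not, whereas a vocab-size threshold can silently break when a new model happens to share a size with an existing one.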
File                   | Last commit                                    | Date
apps.nix               | flake.nix : rewrite (#4605)                    | 2023-12-29 16:42:26 +02:00
devshells.nix          | flake.nix : rewrite (#4605)                    | 2023-12-29 16:42:26 +02:00
jetson-support.nix     | flake.nix: expose full scope in legacyPackages | 2023-12-31 13:14:58 -08:00
nixpkgs-instances.nix  | flake.nix : rewrite (#4605)                    | 2023-12-29 16:42:26 +02:00
package.nix            | llama : support StableLM 2 1.6B (#5052)        | 2024-01-22 13:21:52 +02:00
scope.nix              | flake.nix : rewrite (#4605)                    | 2023-12-29 16:42:26 +02:00