Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-25 02:44:36 +00:00)
README: add "Supported platforms" + update hot topics
commit 7d86e25bf6 (parent a93120236f)
@@ -5,10 +5,11 @@ Inference of [Facebook's LLaMA](https://github.com/facebookresearch/llama) model
 **Hot topics**
 
 - Running on Windows: https://github.com/ggerganov/llama.cpp/issues/22
+- Fix Tokenizer / Unicode support: https://github.com/ggerganov/llama.cpp/issues/11
 
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
 - Apple silicon first-class citizen - optimized via Arm Neon and Accelerate framework
@@ -22,6 +23,12 @@ Please do not make conclusions about the models based on the results from this i
 For all I know, it can be completely wrong. This project is for educational purposes and is not going to be maintained properly.
 New features will probably be added mostly through community contributions, if any.
+
+Supported platforms:
+
+- [X] Mac OS
+- [X] Linux
+- [ ] Windows (soon)
 
 ---
 
 Here is a typical run using LLaMA-7B: