#ifndef LLAMA_H
#define LLAMA_H

#include "ggml.h"
#ifdef GGML_USE_CUBLAS
#include "ggml-cuda.h"
#define LLAMA_MAX_DEVICES GGML_CUDA_MAX_DEVICES
#else
#define LLAMA_MAX_DEVICES 1
#endif // GGML_USE_CUBLAS

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#ifdef LLAMA_SHARED
#    if defined(_WIN32) && !defined(__MINGW32__)
#        ifdef LLAMA_BUILD
#            define LLAMA_API __declspec(dllexport)
#        else
#            define LLAMA_API __declspec(dllimport)
#        endif
#    else
#        define LLAMA_API __attribute__ ((visibility ("default")))
#    endif
#else
#    define LLAMA_API
#endif

#ifdef __GNUC__
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
#elif defined(_MSC_VER)
#    define DEPRECATED(func, hint) __declspec(deprecated(hint)) func
#else
#    define DEPRECATED(func, hint) func
#endif

#define LLAMA_DEFAULT_SEED 0xFFFFFFFF

#define LLAMA_FILE_MAGIC_GGSN 0x6767736eu // 'ggsn'

#define LLAMA_SESSION_MAGIC   LLAMA_FILE_MAGIC_GGSN
#define LLAMA_SESSION_VERSION 1

#if defined(GGML_USE_CUBLAS) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_METAL)
// Defined when llama.cpp is compiled with support for offloading model layers to GPU.
#define LLAMA_SUPPORTS_GPU_OFFLOAD
#endif

#ifdef __cplusplus
extern "C" {
#endif

//
// C interface
//
// TODO: show sample usage
//
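// Example (illustrative sketch only, not an official sample; assumes a GGML-format
// model file at "model.bin" and omits all error handling):
//
//     llama_backend_init(false /*numa*/);
//
//     struct llama_context_params params = llama_context_default_params();
//     struct llama_model   * model = llama_load_model_from_file("model.bin", params);
//     struct llama_context * ctx   = llama_new_context_with_model(model, params);
//
//     llama_token tokens[64];
//     const int n_tokens = llama_tokenize(ctx, "Hello", tokens, 64, true /*add_bos*/);
//     llama_eval(ctx, tokens, n_tokens, 0 /*n_past*/, 4 /*n_threads*/);
//
//     // the logits for the evaluated tokens can now be read via llama_get_logits()
//     // and passed to the sampling functions declared below
//
//     llama_free(ctx);
//     llama_free_model(model);
//     llama_backend_free();
//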

struct llama_model;
struct llama_context;

typedef int llama_token;

typedef struct llama_token_data {
    llama_token id;    // token id
    float       logit; // log-odds of the token
    float       p;     // probability of the token
} llama_token_data;

typedef struct llama_token_data_array {
    llama_token_data * data;
    size_t size;
    bool sorted;
} llama_token_data_array;

typedef void (*llama_progress_callback)(float progress, void * ctx);

enum llama_log_level {
    LLAMA_LOG_LEVEL_ERROR = 2,
    LLAMA_LOG_LEVEL_WARN  = 3,
    LLAMA_LOG_LEVEL_INFO  = 4
};

// Signature for logging events.
// Note that text includes the newline character at the end for most events.
// If your logging mechanism cannot handle that, check whether the last character is '\n'
// and strip it if present.
// The newline may be absent, e.g. for progress reports where '.' is output repeatedly.
typedef void (*llama_log_callback)(enum llama_log_level level, const char * text, void * user_data);

struct llama_context_params {
    uint32_t seed;         // RNG seed, -1 (LLAMA_DEFAULT_SEED) for random
    int32_t  n_ctx;        // text context
    int32_t  n_batch;      // prompt processing batch size
    int32_t  n_gpu_layers; // number of layers to store in VRAM
    int32_t  main_gpu;     // the GPU that is used for scratch and small tensors

    const float * tensor_split; // how to split layers across multiple GPUs (size: LLAMA_MAX_DEVICES)

    // ref: https://github.com/ggerganov/llama.cpp/pull/2054
    float rope_freq_base;  // RoPE base frequency
    float rope_freq_scale; // RoPE frequency scaling factor

    // called with a progress value between 0 and 1, pass NULL to disable
    llama_progress_callback progress_callback;
    // context pointer passed to the progress callback
    void * progress_callback_user_data;

    // Keep the booleans together to avoid misalignment during copy-by-value.
    bool low_vram;   // if true, reduce VRAM usage at the cost of performance
    bool mul_mat_q;  // if true, use experimental mul_mat_q kernels
    bool f16_kv;     // use fp16 for KV cache
    bool logits_all; // the llama_eval() call computes all logits, not just the last one
    bool vocab_only; // only load the vocabulary, no weights
    bool use_mmap;   // use mmap if possible
    bool use_mlock;  // force system to keep model in RAM
    bool embedding;  // embedding mode only
};

// model file types
enum llama_ftype {
    LLAMA_FTYPE_ALL_F32              = 0,
    LLAMA_FTYPE_MOSTLY_F16           = 1, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_0          = 2, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_1          = 3, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4, // tok_embeddings.weight and output.weight are F16
    // LLAMA_FTYPE_MOSTLY_Q4_2       = 5, // support has been removed
    // LLAMA_FTYPE_MOSTLY_Q4_3       = 6, // support has been removed
    LLAMA_FTYPE_MOSTLY_Q8_0          = 7, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_0          = 8, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_1          = 9, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q2_K          = 10, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q3_K_S        = 11, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q3_K_M        = 12, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q3_K_L        = 13, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_K_S        = 14, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q4_K_M        = 15, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_K_S        = 16, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q5_K_M        = 17, // except 1d tensors
    LLAMA_FTYPE_MOSTLY_Q6_K          = 18, // except 1d tensors
};

// model quantization parameters
typedef struct llama_model_quantize_params {
    int nthread;                 // number of threads to use for quantizing, if <= 0 will use std::thread::hardware_concurrency()
    enum llama_ftype ftype;      // quantize to this llama_ftype
    bool allow_requantize;       // allow quantizing non-f32/f16 tensors
    bool quantize_output_tensor; // quantize output.weight
} llama_model_quantize_params;

// grammar types
struct llama_grammar;

// grammar element type
enum llama_gretype {
    // end of rule definition
    LLAMA_GRETYPE_END            = 0,

    // start of alternate definition for rule
    LLAMA_GRETYPE_ALT            = 1,

    // non-terminal element: reference to rule
    LLAMA_GRETYPE_RULE_REF       = 2,

    // terminal element: character (code point)
    LLAMA_GRETYPE_CHAR           = 3,

    // inverse char(s) ([^a], [^a-b], [^abc])
    LLAMA_GRETYPE_CHAR_NOT       = 4,

    // modifies a preceding LLAMA_GRETYPE_CHAR or LLAMA_GRETYPE_CHAR_ALT to
    // be an inclusive range ([a-z])
    LLAMA_GRETYPE_CHAR_RNG_UPPER = 5,

    // modifies a preceding LLAMA_GRETYPE_CHAR or
    // LLAMA_GRETYPE_CHAR_RNG_UPPER to add an alternate char to match ([ab], [a-zA])
    LLAMA_GRETYPE_CHAR_ALT       = 6,
};

typedef struct llama_grammar_element {
    enum llama_gretype type;
    uint32_t           value; // Unicode code point or rule ID
} llama_grammar_element;

// performance timing information
struct llama_timings {
    double t_start_ms;
    double t_end_ms;
    double t_load_ms;
    double t_sample_ms;
    double t_p_eval_ms;
    double t_eval_ms;

    int32_t n_sample;
    int32_t n_p_eval;
    int32_t n_eval;
};

LLAMA_API struct llama_context_params llama_context_default_params(void);
LLAMA_API struct llama_model_quantize_params llama_model_quantize_default_params(void);

// TODO: not great API - very likely to change
// Initialize the llama + ggml backend
// If numa is true, use NUMA optimizations
// Call once at the start of the program
LLAMA_API void llama_backend_init(bool numa);

// Call once at the end of the program - currently only used for MPI
LLAMA_API void llama_backend_free(void);

LLAMA_API struct llama_model * llama_load_model_from_file(
        const char * path_model,
        struct llama_context_params params);

LLAMA_API void llama_free_model(struct llama_model * model);

LLAMA_API struct llama_context * llama_new_context_with_model(
        struct llama_model * model,
        struct llama_context_params params);

// Frees all allocated memory
LLAMA_API void llama_free(struct llama_context * ctx);

LLAMA_API int64_t llama_time_us(void);

LLAMA_API int  llama_max_devices(void);
LLAMA_API bool llama_mmap_supported(void);
LLAMA_API bool llama_mlock_supported(void);

LLAMA_API int llama_n_vocab(const struct llama_context * ctx);
LLAMA_API int llama_n_ctx  (const struct llama_context * ctx);
LLAMA_API int llama_n_embd (const struct llama_context * ctx);

LLAMA_API int llama_n_vocab_from_model(const struct llama_model * model);
LLAMA_API int llama_n_ctx_from_model  (const struct llama_model * model);
LLAMA_API int llama_n_embd_from_model (const struct llama_model * model);

// Get a string describing the model type
LLAMA_API int llama_model_type(const struct llama_model * model, char * buf, size_t buf_size);

// Returns 0 on success
LLAMA_API int llama_model_quantize(
        const char * fname_inp,
        const char * fname_out,
        const llama_model_quantize_params * params);
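
// Minimal usage sketch (illustrative only; the file names are placeholders):
//
//     llama_model_quantize_params qparams = llama_model_quantize_default_params();
//     qparams.ftype   = LLAMA_FTYPE_MOSTLY_Q4_0;
//     qparams.nthread = 4;
//     if (llama_model_quantize("model-f16.bin", "model-q4_0.bin", &qparams) != 0) {
//         // handle error
//     }
//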
// Apply a LoRA adapter to a loaded model
// path_base_model is the path to a higher quality model to use as a base for
// the layers modified by the adapter. Can be NULL to use the currently loaded model.
// The model needs to be reloaded before applying a new adapter, otherwise the adapter
// will be applied on top of the previous one
// Returns 0 on success
LLAMA_API DEPRECATED(int llama_apply_lora_from_file(
        struct llama_context * ctx,
        const char * path_lora,
        const char * path_base_model,
        int n_threads),
        "please use llama_model_apply_lora_from_file instead");

LLAMA_API int llama_model_apply_lora_from_file(
        const struct llama_model * model,
        const char * path_lora,
        const char * path_base_model,
        int n_threads);

// Returns the number of tokens in the KV cache
LLAMA_API int llama_get_kv_cache_token_count(const struct llama_context * ctx);

// Sets the current rng seed.
LLAMA_API void llama_set_rng_seed(struct llama_context * ctx, uint32_t seed);

// Returns the maximum size in bytes of the state (rng, logits, embedding
// and kv_cache) - will often be smaller after compacting tokens
LLAMA_API size_t llama_get_state_size(const struct llama_context * ctx);

// Copies the state to the specified destination address.
// Destination needs to have allocated enough memory.
// Returns the number of bytes copied
LLAMA_API size_t llama_copy_state_data(struct llama_context * ctx, uint8_t * dst);

// Set the state reading from the specified address
// Returns the number of bytes read
LLAMA_API size_t llama_set_state_data(struct llama_context * ctx, uint8_t * src);
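
// Illustrative sketch of saving and restoring the full context state in memory
// (error handling omitted):
//
//     const size_t n_state = llama_get_state_size(ctx);
//     uint8_t * state = (uint8_t *) malloc(n_state);
//     llama_copy_state_data(ctx, state);   // snapshot
//     ...
//     llama_set_state_data(ctx, state);    // restore
//     free(state);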

// Save/load session file
LLAMA_API bool llama_load_session_file(struct llama_context * ctx, const char * path_session, llama_token * tokens_out, size_t n_token_capacity, size_t * n_token_count_out);
LLAMA_API bool llama_save_session_file(struct llama_context * ctx, const char * path_session, const llama_token * tokens, size_t n_token_count);

// Run the llama inference to obtain the logits and probabilities for the next token.
// tokens + n_tokens is the provided batch of new tokens to process
// n_past is the number of tokens to use from previous eval calls
// Returns 0 on success
LLAMA_API int llama_eval(
        struct llama_context * ctx,
        const llama_token * tokens,
        int   n_tokens,
        int   n_past,
        int   n_threads);

// Same as llama_eval, but use a float matrix input directly.
LLAMA_API int llama_eval_embd(
        struct llama_context * ctx,
        const float * embd,
        int   n_tokens,
        int   n_past,
        int   n_threads);

// Export a static computation graph for context of 511 and batch size of 1
// NOTE: since this functionality is mostly for debugging and demonstration purposes, we hardcode these
// parameters here to keep things simple
// IMPORTANT: do not use for anything other than debugging and testing!
LLAMA_API int llama_eval_export(struct llama_context * ctx, const char * fname);

// Token logits obtained from the last call to llama_eval()
// The logits for the last token are stored in the last row
// Can be mutated in order to change the probabilities of the next token
// Rows: n_tokens
// Cols: n_vocab
LLAMA_API float * llama_get_logits(struct llama_context * ctx);
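
// For example (illustrative only, assuming logits for all n_tokens rows were computed,
// i.e. logits_all was enabled), the last row can be accessed as:
//
//     const int n_vocab = llama_n_vocab(ctx);
//     float * logits    = llama_get_logits(ctx);
//     float * last_row  = logits + (n_tokens - 1) * n_vocab; // n_tokens from the last llama_eval() call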

// Get the embeddings for the input
// shape: [n_embd] (1-dimensional)
LLAMA_API float * llama_get_embeddings(struct llama_context * ctx);

// Get the vocabulary as output parameters.
// Returns number of results.
LLAMA_API int llama_get_vocab(
        const struct llama_context * ctx,
        const char ** strings,
        float * scores,
        int   capacity);

LLAMA_API int llama_get_vocab_from_model(
        const struct llama_model * model,
        const char ** strings,
        float * scores,
        int   capacity);

// Convert the provided text into tokens.
// The tokens pointer must be large enough to hold the resulting tokens.
// Returns the number of tokens on success, no more than n_max_tokens
// Returns a negative number on failure - the number of tokens that would have been returned
// TODO: not sure if correct
LLAMA_API int llama_tokenize(
        struct llama_context * ctx,
        const char * text,
        llama_token * tokens,
        int   n_max_tokens,
        bool  add_bos);
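
// Illustrative sketch: when the buffer is too small, the negative return value gives
// the required token count, so the call can be retried with a larger buffer:
//
//     int n = llama_tokenize(ctx, text, tokens, n_max_tokens, true /*add_bos*/);
//     if (n < 0) {
//         // -n tokens are required; grow the buffer to at least -n and tokenize again
//     }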

LLAMA_API int llama_tokenize_bpe(
        struct llama_context * ctx,
        const char * text,
        llama_token * tokens,
        int   n_max_tokens,
        bool  add_bos);

LLAMA_API int llama_tokenize_with_model(
        const struct llama_model * model,
        const char * text,
        llama_token * tokens,
        int   n_max_tokens,
        bool  add_bos);

// Token Id -> String. Uses the vocabulary in the provided context
// Does not write null terminator to the buffer
LLAMA_API int llama_token_to_str(
        const struct llama_context * ctx,
        llama_token token,
        char * buf,
        int   length);

LLAMA_API int llama_token_to_str_bpe(
        const struct llama_context * ctx,
        llama_token token,
        char * buf,
        int   length);

LLAMA_API int llama_token_to_str_with_model(
        const struct llama_model * model,
        llama_token token,
        char * buf,
        int   length);

// Special tokens
LLAMA_API llama_token llama_token_bos(/*struct llama_model * model*/ void); // beginning-of-sentence
LLAMA_API llama_token llama_token_eos(/*struct llama_model * model*/ void); // end-of-sentence
LLAMA_API llama_token llama_token_nl (/*struct llama_model * model*/ void); // next-line

//
// Grammar
//
LLAMA_API struct llama_grammar * llama_grammar_init(
        const llama_grammar_element ** rules,
        size_t n_rules,
        size_t start_rule_index);

LLAMA_API void llama_grammar_free(struct llama_grammar * grammar);
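
// Illustrative sketch (an assumption about typical usage, not a normative example):
// each rule is an array of llama_grammar_element terminated by LLAMA_GRETYPE_END, and
// alternates within a rule are separated by LLAMA_GRETYPE_ALT. A single rule matching
// "a" or "b" could be built like this:
//
//     const llama_grammar_element root_rule[] = {
//         { LLAMA_GRETYPE_CHAR, 'a' },
//         { LLAMA_GRETYPE_ALT,  0   },
//         { LLAMA_GRETYPE_CHAR, 'b' },
//         { LLAMA_GRETYPE_END,  0   },
//     };
//     const llama_grammar_element * rules[] = { root_rule };
//     struct llama_grammar * grammar = llama_grammar_init(rules, 1, 0 /*start_rule_index*/);
//     ...
//     llama_grammar_free(grammar);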

// Sampling functions

/// @details Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
LLAMA_API void llama_sample_repetition_penalty(struct llama_context * ctx, llama_token_data_array * candidates, const llama_token * last_tokens, size_t last_tokens_size, float penalty);

/// @details Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
LLAMA_API void llama_sample_frequency_and_presence_penalties(struct llama_context * ctx, llama_token_data_array * candidates, const llama_token * last_tokens, size_t last_tokens_size, float alpha_frequency, float alpha_presence);

/// @details Apply classifier-free guidance to the logits as described in academic paper "Stay on topic with Classifier-Free Guidance" https://arxiv.org/abs/2306.17806
/// @param candidates A vector of `llama_token_data` containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
/// @param guidance_ctx A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
/// @param scale Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
LLAMA_API void llama_sample_classifier_free_guidance(
        struct llama_context * ctx,
        llama_token_data_array * candidates,
        struct llama_context * guidance_ctx,
        float scale);

/// @details Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
LLAMA_API void llama_sample_softmax(struct llama_context * ctx, llama_token_data_array * candidates);

/// @details Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
LLAMA_API void llama_sample_top_k(struct llama_context * ctx, llama_token_data_array * candidates, int k, size_t min_keep);

/// @details Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
LLAMA_API void llama_sample_top_p(struct llama_context * ctx, llama_token_data_array * candidates, float p, size_t min_keep);

/// @details Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
LLAMA_API void llama_sample_tail_free(struct llama_context * ctx, llama_token_data_array * candidates, float z, size_t min_keep);

/// @details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
LLAMA_API void llama_sample_typical(struct llama_context * ctx, llama_token_data_array * candidates, float p, size_t min_keep);

LLAMA_API void llama_sample_temperature(struct llama_context * ctx, llama_token_data_array * candidates, float temp);

/// @details Apply constraints from grammar
LLAMA_API void llama_sample_grammar(struct llama_context * ctx, llama_token_data_array * candidates, const struct llama_grammar * grammar);

/// @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
/// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
/// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
/// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
/// @param m The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm.
/// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
LLAMA_API llama_token llama_sample_token_mirostat(struct llama_context * ctx, llama_token_data_array * candidates, float tau, float eta, int m, float * mu);

/// @details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
/// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
/// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
/// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
/// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
LLAMA_API llama_token llama_sample_token_mirostat_v2(struct llama_context * ctx, llama_token_data_array * candidates, float tau, float eta, float * mu);
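
// Illustrative sketch (tau and eta are caller-chosen placeholders; mu must be kept
// alive across sampling calls and is initialized to 2 * tau as described above):
//
//     float mirostat_mu = 2.0f * tau;
//     llama_token tok = llama_sample_token_mirostat_v2(ctx, &candidates, tau, eta, &mirostat_mu);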

/// @details Selects the token with the highest probability.
LLAMA_API llama_token llama_sample_token_greedy(struct llama_context * ctx, llama_token_data_array * candidates);

/// @details Randomly selects a token from the candidates based on their probabilities.
LLAMA_API llama_token llama_sample_token(struct llama_context * ctx, llama_token_data_array * candidates);
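
// Illustrative sketch of a common sampling pipeline (parameter values are placeholders):
//
//     // build the candidates array from the current logits
//     const int n_vocab = llama_n_vocab(ctx);
//     float * logits = llama_get_logits(ctx);
//     llama_token_data * data = (llama_token_data *) malloc(n_vocab * sizeof(llama_token_data));
//     for (int i = 0; i < n_vocab; i++) {
//         data[i].id = i; data[i].logit = logits[i]; data[i].p = 0.0f;
//     }
//     llama_token_data_array candidates = { data, (size_t) n_vocab, false };
//
//     llama_sample_top_k      (ctx, &candidates, 40, 1);
//     llama_sample_top_p      (ctx, &candidates, 0.95f, 1);
//     llama_sample_temperature(ctx, &candidates, 0.8f);
//     llama_token tok = llama_sample_token(ctx, &candidates);
//     free(data);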

/// @details Accepts the sampled token into the grammar
LLAMA_API void llama_grammar_accept_token(struct llama_context * ctx, struct llama_grammar * grammar, llama_token token);

// Performance information
LLAMA_API struct llama_timings llama_get_timings(struct llama_context * ctx);

LLAMA_API void llama_print_timings(struct llama_context * ctx);
LLAMA_API void llama_reset_timings(struct llama_context * ctx);

// Print system information
LLAMA_API const char * llama_print_system_info(void);

// Set callback for all future logging events.
// If this is not called, or NULL is supplied, everything is output on stderr.
LLAMA_API void llama_log_set(llama_log_callback log_callback, void * user_data);

#ifdef __cplusplus
}
#endif

// Internal API to be implemented by llama.cpp and used by tests/benchmarks only
#ifdef LLAMA_API_INTERNAL

#include <vector>
#include <string>

struct ggml_tensor;

const std::vector<std::pair<std::string, struct ggml_tensor *>>& llama_internal_get_tensor_map(struct llama_context * ctx);

#endif // LLAMA_API_INTERNAL

#endif // LLAMA_H