llama.cpp/run.h
Latest commit b7f1fa6d8c by Thiago Padilha (2023-03-22 14:31:41 -03:00): Move llama_context setup + perplexity back to main.cpp
Signed-off-by: Thiago Padilha <thiago@padilha.cc>


#pragma once
#include "llama.h"
#include "utils.h"
int run(llama_context * ctx, gpt_params params);
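Below is a minimal sketch of how a caller such as main.cpp might use this header after the refactor described in the commit message. It assumes the llama.h and utils.h API of that vintage (llama_context_default_params, llama_init_from_file, llama_free, gpt_params_parse) and gpt_params fields such as model, n_ctx, and seed; the exact fields, defaults, and error handling shown are assumptions, not the repository's actual main.cpp.

#include "llama.h"
#include "utils.h"
#include "run.h"

int main(int argc, char ** argv) {
    // parse command-line arguments into gpt_params (helper from utils.h)
    gpt_params params;
    if (!gpt_params_parse(argc, argv, params)) {
        return 1;
    }

    // main.cpp owns the llama_context setup (moved back here by this commit);
    // field names below are assumed from the era's llama_context_params
    llama_context_params lparams = llama_context_default_params();
    lparams.n_ctx = params.n_ctx;
    lparams.seed  = params.seed;

    llama_context * ctx = llama_init_from_file(params.model.c_str(), lparams);
    if (ctx == NULL) {
        return 1;
    }

    // run.h only exposes the generation loop; the context is passed in by value-owning caller
    int ret = run(ctx, params);

    llama_free(ctx);
    return ret;
}

The design choice implied by the header is that run() stays agnostic of model loading: the caller decides how the context is configured (and where perplexity evaluation lives), while run() consumes an already-initialized llama_context together with the parsed gpt_params.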