mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-11-11 21:39:52 +00:00)
server : add LOG_INFO when model is successfully loaded (#4881)
* added /health endpoint to the server
* added comments on the additional /health endpoint
* Better handling of server state

  When the model is being loaded, the server state is `LOADING_MODEL`. If model-loading fails, the server state becomes `ERROR`, otherwise it becomes `READY`. The `/health` endpoint provides more granular messages now according to the server_state value.
* initialized server_state
* fixed a typo
* starting http server before initializing the model
* Update server.cpp
* Update server.cpp
* fixes
* fixes
* fixes
* made ServerState atomic and turned two-line spaces into one-line
* updated `server` readme to document the `/health` endpoint too
* used LOG_INFO after successful model loading
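For context, here is a minimal sketch of the state machine the commit message describes. Only `SERVER_STATE_READY` is visible in the diff below; the other enum value names and the exact declaration are illustrative assumptions, not copied from the upstream `server.cpp`.

```cpp
#include <atomic>

// Illustrative reconstruction of the server state described above.
// SERVER_STATE_READY appears in the diff; the other names are assumed.
enum server_state {
    SERVER_STATE_LOADING_MODEL, // model is still being loaded
    SERVER_STATE_READY,         // model loaded, requests can be served
    SERVER_STATE_ERROR          // model loading failed
};

// Made atomic in this change so that HTTP worker threads can poll the
// state while the main thread is still loading the model.
std::atomic<server_state> state{SERVER_STATE_LOADING_MODEL};
```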
parent d8d90aa343
commit eab6795006
@@ -2906,6 +2906,7 @@ int main(int argc, char **argv)
     } else {
         llama.initialize();
         state.store(SERVER_STATE_READY);
+        LOG_INFO("model loaded", {});
     }
 
     // Middleware for API key validation
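To make the flow concrete, below is a hedged sketch of how a `/health` handler could map the atomic state to HTTP responses. It assumes the cpp-httplib `httplib::Server` API used by the server example; `register_health_endpoint` is a hypothetical helper name, and the status codes and JSON bodies are illustrative rather than the exact upstream payloads.

```cpp
#include <atomic>
#include "httplib.h" // cpp-httplib, bundled with the server example

// Builds on the server_state sketch above; the enum values are the
// illustrative names introduced there.
void register_health_endpoint(httplib::Server & svr,
                              std::atomic<server_state> & state) {
    svr.Get("/health", [&state](const httplib::Request &, httplib::Response & res) {
        switch (state.load()) {
            case SERVER_STATE_READY:
                res.status = 200;
                res.set_content(R"({"status": "ok"})", "application/json");
                break;
            case SERVER_STATE_LOADING_MODEL:
                res.status = 503;
                res.set_content(R"({"status": "loading model"})", "application/json");
                break;
            case SERVER_STATE_ERROR:
                res.status = 500;
                res.set_content(R"({"status": "error"})", "application/json");
                break;
        }
    });
}
```

Answering 503 while the model is still loading lets load balancers and readiness probes wait for startup to finish instead of treating the server as permanently unhealthy.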