server : add LOG_INFO when model is successfully loaded (#4881)

* added /health endpoint to the server

* added comments on the additional /health endpoint

* Better handling of server state

While the model is being loaded, the server state is `LOADING_MODEL`. If model loading fails, the state becomes `ERROR`; otherwise it becomes `READY`. The `/health` endpoint now returns more granular messages according to the server_state value, as sketched below.
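
A minimal sketch of what such a state enum and `/health` handler could look like, assuming the server uses cpp-httplib (`httplib::Server`) and a `std::atomic` for the shared state; apart from the three state names described above, the identifiers and response bodies here are illustrative, not the exact code from this PR.

    #include <atomic>
    #include "httplib.h"

    // Sketch only: enum values mirror the states described above.
    enum server_state {
        SERVER_STATE_LOADING_MODEL,  // model weights are still being read
        SERVER_STATE_READY,          // model loaded, server can answer requests
        SERVER_STATE_ERROR,          // model loading failed
    };

    std::atomic<server_state> state{SERVER_STATE_LOADING_MODEL};

    // Report the current state over HTTP; svr is an httplib::Server.
    svr.Get("/health", [&](const httplib::Request &, httplib::Response &res) {
        switch (state.load()) {
            case SERVER_STATE_READY:
                res.set_content(R"({"status": "ok"})", "application/json");
                res.status = 200;
                break;
            case SERVER_STATE_LOADING_MODEL:
                res.set_content(R"({"status": "loading model"})", "application/json");
                res.status = 503;
                break;
            case SERVER_STATE_ERROR:
                res.set_content(R"({"status": "error"})", "application/json");
                res.status = 500;
                break;
        }
    });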

* initialized server_state

* fixed a typo

* starting http server before initializing the model
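
A rough sketch of that ordering, under the same assumptions as above (routes registered on `svr` before listening, `load_model` and `params` standing in for the server's real loader and configuration, address and port chosen arbitrarily):

    #include <thread>

    // Start serving before the model is loaded so /health can answer
    // with "loading model" instead of the connection being refused.
    std::thread http_thread([&]() {
        svr.listen("0.0.0.0", 8080);
    });

    const bool ok = load_model(params);   // placeholder for the real loader
    if (ok) {
        state.store(SERVER_STATE_READY);
        LOG_INFO("model loaded", {});     // log line added by this PR
    } else {
        state.store(SERVER_STATE_ERROR);
    }

    http_thread.join();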

* Update server.cpp

* Update server.cpp

* fixes

* fixes

* fixes

* made ServerState atomic and collapsed double blank lines into single ones

* updated `server` readme to document the `/health` endpoint too

* used LOG_INFO after successful model loading
Behnam M 2024-01-11 12:41:39 -05:00 committed by GitHub
parent d8d90aa343
commit eab6795006

@@ -2906,6 +2906,7 @@ int main(int argc, char **argv)
     } else {
         llama.initialize();
         state.store(SERVER_STATE_READY);
+        LOG_INFO("model loaded", {});
     }
     // Middleware for API key validation