@llama.cpp
@security
Feature: Security

  Background: Server startup with an api key defined
    Given a server listening on localhost:8080
    And a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
    And a server api key llama.cpp
    Then the server is starting
    Then the server is healthy

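  # Note: the scenarios below exercise the server's api key check. Presumably the
  # user api key is sent as a standard Bearer token; a hypothetical manual check
  # against the Background server above might look like:
  #   curl http://localhost:8080/completion \
  #        -H "Authorization: Bearer llama.cpp" \
  #        -d '{"prompt": "test", "n_predict": 4}'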
  Scenario Outline: Completion with some user api key
    Given a prompt test
    And a user api key <api_key>
    And 4 max tokens to predict
    And a completion request with <api_error> api error

    Examples: Prompts
      | api_key   | api_error |
      | llama.cpp | no        |
      | llama.cpp | no        |
      | hackeme   | raised    |
      |           | raised    |

  Scenario Outline: OAI Compatibility
    Given a system prompt test
    And a user prompt test
    And a model test
    And 2 max tokens to predict
    And streaming is disabled
    And a user api key <api_key>
    Given an OAI compatible chat completions request with <api_error> api error

    Examples: Prompts
      | api_key   | api_error |
      | llama.cpp | no        |
      | llama.cpp | no        |
      | hackme    | raised    |

  Scenario Outline: CORS Options
    When an OPTIONS request is sent from <origin>
    Then CORS header <cors_header> is set to <cors_header_value>

    Examples: Headers
      | origin          | cors_header                      | cors_header_value |
      | localhost       | Access-Control-Allow-Origin      | localhost         |
      | web.mydomain.fr | Access-Control-Allow-Origin      | web.mydomain.fr   |
      | origin          | Access-Control-Allow-Credentials | true              |
      | web.mydomain.fr | Access-Control-Allow-Methods     | POST              |
      | web.mydomain.fr | Access-Control-Allow-Headers     | *                 |
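  # A CORS preflight like the rows above can be reproduced by hand; a hypothetical
  # invocation (endpoint path assumed) to inspect the response headers:
  #   curl -i -X OPTIONS http://localhost:8080/v1/chat/completions \
  #        -H "Origin: web.mydomain.fr" \
  #        -H "Access-Control-Request-Method: POST"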