mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-12-27 03:44:35 +00:00
GitHub: ask for more info in issue templates (#10426)
* GitHub: ask for more info in issues [no ci]
* refactor issue templates to be component-specific
* more understandable issue description
* add dropdown for llama.cpp module
This commit is contained in:
  parent c18610b4ee
  commit 599b3e0cd4
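All of the templates touched by this commit use GitHub's issue-forms YAML schema. As a rough sketch of that shape (field names follow the schema; the values here are purely illustrative and not taken from this commit), a minimal form looks like:

```yaml
# Minimal GitHub issue-form sketch (illustrative values, not from this commit).
name: Example form
description: Short summary shown in the template chooser.
title: "Example: "
labels: ["example-label"]
body:
  - type: dropdown        # body items are markdown, textarea, input, dropdown, or checkboxes
    id: example-dropdown
    attributes:
      label: Pick an option
      options: [A, B]
    validations:
      required: true
```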
.github/ISSUE_TEMPLATE/01-bug-low.yml (vendored, deleted, 50 lines)
@@ -1,50 +0,0 @@
name: Low Severity Bugs
description: Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches)
title: "Bug: "
labels: ["bug-unconfirmed", "low severity"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
        Please include information about your system, the steps to reproduce the bug,
        and the version of llama.cpp that you are using.
        If possible, please provide a minimal code example that reproduces the bug.
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Also tell us, what did you expect to happen?
      placeholder: Tell us what you see!
    validations:
      required: true
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: What operating system are you seeing the problem on?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell
.github/ISSUE_TEMPLATE/010-bug-compilation.yml (vendored, new file, 73 lines)
@@ -0,0 +1,73 @@
name: Bug (compilation)
description: Something goes wrong when trying to compile llama.cpp.
title: "Compile bug: "
labels: ["bug-unconfirmed", "compilation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the compilation of llama.cpp fails.
        Before opening an issue, please confirm that the compilation still fails with `-DGGML_CCACHE=OFF`.
        If the compilation succeeds with ccache disabled you should be able to permanently fix the issue
        by clearing `~/.cache/ccache` (on Linux).
  - type: textarea
    id: commit
    attributes:
      label: Git commit
      description: Which commit are you trying to compile?
      placeholder: |
        $git rev-parse HEAD
        84a07a17b1b08cf2b9747c633a2372782848a27f
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific compile flags, that information would be very much appreciated by us.
      placeholder: >
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
.github/ISSUE_TEMPLATE/011-bug-results.yml (vendored, new file, 98 lines)
@@ -0,0 +1,98 @@
name: Bug (model use)
description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
title: "Eval bug: "
labels: ["bug-unconfirmed", "model evaluation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the model evaluation results
        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
        The `llama-cli` binary can be used for simple and reproducible model inference.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
  - type: textarea
    id: hardware
    attributes:
      label: Hardware
      description: Which CPUs/GPUs are you using?
      placeholder: >
        e.g. Ryzen 5950X + 2x RTX 4090
    validations:
      required: true
  - type: textarea
    id: model
    attributes:
      label: Model
      description: >
        Which model at which quantization were you using when encountering the bug?
        If you downloaded a GGUF file off of Huggingface, please provide a link.
      placeholder: >
        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
    validations:
      required: false
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
        that information would be very much appreciated by us.
      placeholder: >
        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
        When I use -ngl 0 it works correctly.
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
.github/ISSUE_TEMPLATE/019-bug-misc.yml (vendored, new file, 78 lines)
@@ -0,0 +1,78 @@
name: Bug (misc.)
description: Something is not working the way it should (and it's not covered by any of the above cases).
title: "Misc. bug: "
labels: ["bug-unconfirmed"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for miscellaneous bugs that don't fit into any other category.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: module
    attributes:
      label: Which llama.cpp modules do you know to be affected?
      multiple: true
      options:
        - libllama (core library)
        - llama-cli
        - llama-server
        - llama-bench
        - llama-quantize
        - Python/Bash scripts
        - Other (Please specify in the next section)
    validations:
      required: true
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
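GitHub lists issue templates in alphabetical filename order, which is presumably why this commit numbers the new component-specific files 010, 011, and 019 (leaving room between them for future templates). A quick sketch of the resulting chooser order, using the filenames added above:

```python
# GitHub sorts issue templates by filename; the numeric prefixes chosen in
# this commit therefore fix the order users see in the template chooser.
added = [
    "019-bug-misc.yml",
    "011-bug-results.yml",
    "010-bug-compilation.yml",
]
chooser_order = sorted(added)  # lexicographic, as GitHub displays them
print(chooser_order)
# → ['010-bug-compilation.yml', '011-bug-results.yml', '019-bug-misc.yml']
```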
.github/ISSUE_TEMPLATE/02-bug-medium.yml (vendored, deleted, 50 lines)
@@ -1,50 +0,0 @@
name: Medium Severity Bug
description: Used to report medium severity bugs in llama.cpp (e.g. Malfunctioning Features but generally still useable)
title: "Bug: "
labels: ["bug-unconfirmed", "medium severity"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
        Please include information about your system, the steps to reproduce the bug,
        and the version of llama.cpp that you are using.
        If possible, please provide a minimal code example that reproduces the bug.
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Also tell us, what did you expect to happen?
      placeholder: Tell us what you see!
    validations:
      required: true
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: What operating system are you seeing the problem on?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell
@@ -1,5 +1,5 @@
 name: Enhancement
-description: Used to request enhancements for llama.cpp
+description: Used to request enhancements for llama.cpp.
 title: "Feature Request: "
 labels: ["enhancement"]
 body:
.github/ISSUE_TEMPLATE/03-bug-high.yml (vendored, deleted, 50 lines)
@@ -1,50 +0,0 @@
name: High Severity Bug
description: Used to report high severity bugs in llama.cpp (e.g. Malfunctioning features hindering important common workflow)
title: "Bug: "
labels: ["bug-unconfirmed", "high severity"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
        Please include information about your system, the steps to reproduce the bug,
        and the version of llama.cpp that you are using.
        If possible, please provide a minimal code example that reproduces the bug.
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Also tell us, what did you expect to happen?
      placeholder: Tell us what you see!
    validations:
      required: true
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: What operating system are you seeing the problem on?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell
@@ -1,5 +1,5 @@
 name: Research
-description: Track new technical research area
+description: Track new technical research area.
 title: "Research: "
 labels: ["research 🔬"]
 body:
.github/ISSUE_TEMPLATE/04-bug-critical.yml (vendored, deleted, 50 lines)
@@ -1,50 +0,0 @@
name: Critical Severity Bug
description: Used to report critical severity bugs in llama.cpp (e.g. Crashing, Corrupted, Dataloss)
title: "Bug: "
labels: ["bug-unconfirmed", "critical severity"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
        Please include information about your system, the steps to reproduce the bug,
        and the version of llama.cpp that you are using.
        If possible, please provide a minimal code example that reproduces the bug.
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Also tell us, what did you expect to happen?
      placeholder: Tell us what you see!
    validations:
      required: true
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: What operating system are you seeing the problem on?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell
@@ -1,5 +1,5 @@
 name: Refactor (Maintainers)
-description: Used to track refactoring opportunities
+description: Used to track refactoring opportunities.
 title: "Refactor: "
 labels: ["refactor"]
 body: