diff --git a/.github/ISSUE_TEMPLATE/010-bug-compilation.yml b/.github/ISSUE_TEMPLATE/010-bug-compilation.yml
index 550ee1b49..f10b3a2b2 100644
--- a/.github/ISSUE_TEMPLATE/010-bug-compilation.yml
+++ b/.github/ISSUE_TEMPLATE/010-bug-compilation.yml
@@ -24,7 +24,8 @@ body:
   - type: dropdown
     id: operating-system
     attributes:
-      label: Which operating systems do you know to be affected?
+      label: Operating systems
+      description: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -41,14 +42,17 @@ body:
       description: Which GGML backends do you know to be affected?
       options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
       multiple: true
+    validations:
+      required: true
   - type: textarea
-    id: steps_to_reproduce
+    id: info
     attributes:
-      label: Steps to Reproduce
+      label: Problem description & steps to reproduce
       description: >
-        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        Please give us a summary of the problem and tell us how to reproduce it.
         If you can narrow down the bug to specific compile flags, that information would be very much appreciated by us.
       placeholder: >
+        I'm trying to compile llama.cpp with CUDA support on a fresh install of Ubuntu and get error XY.
         Here are the exact commands that I used: ...
     validations:
       required: true
diff --git a/.github/ISSUE_TEMPLATE/011-bug-results.yml b/.github/ISSUE_TEMPLATE/011-bug-results.yml
index 1adb162b7..1ccef0793 100644
--- a/.github/ISSUE_TEMPLATE/011-bug-results.yml
+++ b/.github/ISSUE_TEMPLATE/011-bug-results.yml
@@ -26,7 +26,8 @@ body:
   - type: dropdown
     id: operating-system
     attributes:
-      label: Which operating systems do you know to be affected?
+      label: Operating systems
+      description: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -43,6 +44,8 @@ body:
       description: Which GGML backends do you know to be affected?
       options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
       multiple: true
+    validations:
+      required: true
   - type: textarea
     id: hardware
     attributes:
@@ -55,20 +58,20 @@ body:
   - type: textarea
     id: model
     attributes:
-      label: Model
+      label: Models
       description: >
-        Which model at which quantization were you using when encountering the bug?
+        Which model(s) at which quantization were you using when encountering the bug?
         If you downloaded a GGUF file off of Huggingface, please provide a link.
       placeholder: >
         e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
     validations:
       required: false
   - type: textarea
-    id: steps_to_reproduce
+    id: info
     attributes:
-      label: Steps to Reproduce
+      label: Problem description & steps to reproduce
       description: >
-        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        Please give us a summary of the problem and tell us how to reproduce it.
         If you can narrow down the bug to specific hardware, compile flags, or command line arguments, that information would be very much appreciated by us.
       placeholder: >
diff --git a/.github/ISSUE_TEMPLATE/019-bug-misc.yml b/.github/ISSUE_TEMPLATE/019-bug-misc.yml
index 124cdee91..d157ea307 100644
--- a/.github/ISSUE_TEMPLATE/019-bug-misc.yml
+++ b/.github/ISSUE_TEMPLATE/019-bug-misc.yml
@@ -14,7 +14,7 @@ body:
     id: version
     attributes:
       label: Name and Version
-      description: Which version of our software are you running? (use `--version` to get a version string)
+      description: Which version of our software is affected? (You can use `--version` to get a version string.)
       placeholder: |
         $./llama-cli --version
         version: 2999 (42b4109e)
@@ -24,7 +24,8 @@ body:
   - type: dropdown
     id: operating-system
     attributes:
-      label: Which operating systems do you know to be affected?
+      label: Operating systems
+      description: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -33,28 +34,30 @@ body:
         - BSD
         - Other? (Please let us know in description)
     validations:
-      required: true
+      required: false
   - type: dropdown
     id: module
     attributes:
       label: Which llama.cpp modules do you know to be affected?
       multiple: true
       options:
+        - Documentation/Github
         - libllama (core library)
         - llama-cli
         - llama-server
         - llama-bench
         - llama-quantize
         - Python/Bash scripts
+        - Test code
         - Other (Please specify in the next section)
     validations:
-      required: true
+      required: false
   - type: textarea
-    id: steps_to_reproduce
+    id: info
     attributes:
-      label: Steps to Reproduce
+      label: Problem description & steps to reproduce
       description: >
-        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        Please give us a summary of the problem and tell us how to reproduce it (if applicable).
     validations:
       required: true
   - type: textarea
@@ -62,7 +65,7 @@ body:
     attributes:
       label: First Bad Commit
      description: >
-        If the bug was not present on an earlier version: when did it start appearing?
+        If the bug was not present on an earlier version and it's not trivial to track down: when did it start appearing?
         If possible, please do a git bisect and identify the exact commit that introduced the bug.
     validations:
       required: false
@@ -71,8 +74,8 @@ body:
     attributes:
       label: Relevant log output
       description: >
-        Please copy and paste any relevant log output, including the command that you entered and any generated text.
+        If applicable, please copy and paste any relevant log output, including the command that you entered and any generated text.
         This will be automatically formatted into code, so no need for backticks.
       render: shell
     validations:
-      required: true
+      required: false