# Pull requests (for contributors)
- Test your changes:
    - Execute [the full CI locally on your machine](ci/README.md) before publishing
    - Verify that the perplexity and the performance are not affected negatively by your changes (use `llama-perplexity` and `llama-bench`)
    - If you modified the `ggml` source, run the `test-backend-ops` tool to check whether different backend implementations of the `ggml` operators produce consistent results (this requires access to at least two different `ggml` backends)
    - If you modified a `ggml` operator or added a new one, add the corresponding test cases to `test-backend-ops` (see the sketch after this list)
- Consider allowing write access to your branch for faster reviews, as reviewers can push commits directly
- If your PR becomes stale, don't hesitate to ping the maintainers in the comments
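Regarding the `test-backend-ops` items above: a new operator test is normally a small struct added to `tests/test-backend-ops.cpp` that builds a graph around the operator; the tool then evaluates that graph on every available backend and compares the outputs against a reference backend. The snippet below is only a rough sketch of that pattern and assumes the `test_case` base class and helper macros found in that file; names and signatures may differ from the current source.

```cpp
// Illustrative sketch only: a test case for ggml_sqr(), following the
// test_case pattern assumed from tests/test-backend-ops.cpp.
struct test_sqr : public test_case {
    const ggml_type type;            // data type under test
    const std::array<int64_t, 4> ne; // shape: ne[0] = columns, ne[1] = rows, ...

    test_sqr(ggml_type type = GGML_TYPE_F32,
             std::array<int64_t, 4> ne = {10, 5, 4, 3})
        : type(type), ne(ne) {}

    // string describing the test parameters (shows up in the tool's output)
    std::string vars() override {
        return VARS_TO_STR2(type, ne);
    }

    // build a minimal graph that applies the operator under test
    ggml_tensor * build_graph(ggml_context * ctx) override {
        ggml_tensor * a   = ggml_new_tensor(ctx, type, 4, ne.data());
        ggml_tensor * out = ggml_sqr(ctx, a);
        return out;
    }
};

// The case is then registered next to the existing ones, e.g.:
//     test_cases.emplace_back(new test_sqr());
// Running the test-backend-ops binary checks that each backend's result
// matches the reference implementation within a tolerance.
```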
# Pull requests (for collaborators)
- Squash-merge PRs
- Use the following format for the squashed commit title: `<module> : <commit title> (#<issue_number>)`. For example: `utils : fix typo in utils.py (#1234)`
- Optionally pick a `<module>` from here: https://github.com/ggerganov/llama.cpp/wiki/Modules
- Consider adding yourself to [CODEOWNERS](CODEOWNERS)
# Coding guidelines
- Avoid adding third-party dependencies, extra files, extra headers, etc.
- Always consider cross-compatibility with other operating systems and architectures
- Avoid fancy-looking modern STL constructs, use basic `for` loops, avoid templates, keep it simple
- There are no strict rules for the code style, but try to follow the patterns in the code (indentation, spaces, etc.). Vertical alignment makes things more readable and easier to batch edit
- Clean up any trailing whitespace, use 4 spaces for indentation, brackets on the same line, `void * ptr`, `int & a` (see the short style sketch at the end of this section)
- Naming usually optimizes for common prefix (see https://github.com/ggerganov/ggml/pull/302#discussion_r1243240963)
- Tensors store data in row-major order. We refer to dimension 0 as columns, 1 as rows, 2 as matrices
- Matrix multiplication is unconventional: [`C = ggml_mul_mat(ctx, A, B)`](https://github.com/ggerganov/llama.cpp/blob/880e352277fc017df4d5794f0c21c44e1eae2b84/ggml.h#L1058-L1064) means $C^T = A B^T \Leftrightarrow C = B A^T.$
![matmul](media/matmul.png)
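A minimal sketch (illustrative only, not taken from the repository) of how these two conventions fit together: `ne[0]` counts columns, `ggml_mul_mat` requires the inputs to agree in `ne[0]`, and the result has `ne[0] = A->ne[1]` and `ne[1] = B->ne[1]`. The context size and the assertion below are arbitrary choices for the example.

```cpp
#include "ggml.h"

int main(void) {
    // arbitrary context size for this illustration
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // ne[0] = columns, ne[1] = rows (row-major storage)
    struct ggml_tensor * A = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 3); // 3 rows x 4 cols
    struct ggml_tensor * B = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 2); // 2 rows x 4 cols

    // C = ggml_mul_mat(ctx, A, B) computes C = B * A^T mathematically:
    // the shared dimension is ne[0] (= 4), and the result has
    // ne[0] = A->ne[1] = 3 and ne[1] = B->ne[1] = 2.
    struct ggml_tensor * C = ggml_mul_mat(ctx, A, B);
    GGML_ASSERT(C->ne[0] == 3 && C->ne[1] == 2);

    ggml_free(ctx);
    return 0;
}
```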
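As referenced in the style bullet above, here is a short hypothetical fragment (not taken from the codebase) illustrating the preferred formatting: 4-space indentation, brackets on the same line, `void * ptr` / `int & a` spacing, plain `for` loops instead of fancy STL constructs, vertical alignment where it helps, and common-prefix naming (the `my_buffer_` prefix is an invented example).

```cpp
#include <cstddef>

// hypothetical example illustrating the code style described above
struct my_buffer {
    void * data; // raw storage
    size_t n;    // number of int elements in data
};

// common-prefix naming: related functions group under my_buffer_
static void my_buffer_fill(my_buffer & buf, int value) {
    int * vals = (int *) buf.data;
    for (size_t i = 0; i < buf.n; i++) {
        vals[i] = value;
    }
}

static int my_buffer_sum(const my_buffer & buf) {
    const int * vals = (const int *) buf.data;

    int sum = 0;
    for (size_t i = 0; i < buf.n; i++) {
        sum += vals[i];
    }

    return sum;
}
```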
# Resources
The GitHub issues, PRs and discussions contain a lot of information that can be useful for getting familiar with the codebase. For convenience, some of the more important information is referenced from GitHub projects:
https://github.com/ggerganov/llama.cpp/projects