llama.cpp/ggml (last commit: 2024-08-03 20:09:34 +08:00)
Name            Last commit message                                         Last commit date
cmake           llama : reorganize source code + improve CMake (#8006)      2024-06-26 18:33:02 +03:00
include         Fix conversion of unnormalized BF16->BF16 weights (#7843)   2024-08-02 15:11:39 -04:00
src             Fix conversion of unnormalized BF16->BF16 weights (#7843)   2024-08-02 15:11:39 -04:00
.gitignore      vulkan : cmake integration (#8119)                          2024-07-13 18:12:39 +02:00
CMakeLists.txt  set GGML_WIN_VER to 0x601 (WIN7)                            2024-08-03 20:09:34 +08:00