llama.cpp/tests
mqy 06b00827a0 bulk refactoring of task profile and related code to run CL GPU offloading.
* removed ggml_task_backend in favour of ggml_task_profile.runner, plus newly added id and name fields.
* extracted the mul_mat BLAS code into ggml_compute_forward_mul_mat_blas,
  aligning it more closely with the CUDA/CL paths and making it easier to fix profiles and run the tune.
* rewrote the task profile and updated/added some CUDA/CL code, finally making CL GPU offloading work.
* misc minor fixes/updates to tune; the data format was changed.
2023-06-18 14:27:56 +08:00
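Below is a minimal C sketch of what the refactor described in this commit might look like. Only the names ggml_task_profile, runner, id, name, and ggml_compute_forward_mul_mat_blas come from the commit message; every type, signature, and field layout here is an assumption for illustration, not the actual ggml API.

    /* Hypothetical sketch of the refactored task profile described above.
       The names ggml_task_profile, .runner, .id and .name come from the
       commit message; all other types and fields are assumptions. */

    #include <stdio.h>

    struct ggml_compute_params;   /* opaque here; defined in ggml internals */
    struct ggml_tensor;           /* opaque here; defined in ggml.h */

    /* A runner computes one op for a given profile (e.g. CPU, BLAS, CL). */
    typedef void (*ggml_task_runner_t)(const struct ggml_compute_params *params,
                                       struct ggml_tensor *node);

    /* ggml_task_backend is gone; the profile itself now carries an id,
       a human-readable name, and the runner to invoke. */
    struct ggml_task_profile {
        int                 id;     /* stable identifier, e.g. referenced by tune data */
        const char         *name;   /* e.g. "cpu", "blas", "cl" */
        ggml_task_runner_t  runner; /* dispatch target for this profile */
    };

    /* Assumed stand-in for the extracted BLAS mat-mul path. */
    static void ggml_compute_forward_mul_mat_blas(
            const struct ggml_compute_params *params,
            struct ggml_tensor *node) {
        (void)params; (void)node;
        printf("mul_mat via BLAS\n");
    }

    int main(void) {
        struct ggml_task_profile blas_profile = {
            .id     = 1,
            .name   = "blas",
            .runner = ggml_compute_forward_mul_mat_blas,
        };
        /* The graph executor would pick a profile (possibly from tune
           data) and call its runner: */
        blas_profile.runner(NULL, NULL);
        return 0;
    }

Carrying the runner on the profile would let the executor dispatch the CPU/BLAS/CL paths uniformly, and stable id/name values give the changed tune data format something durable to reference.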
.gitignore initial 2023-06-18 14:27:53 +08:00
CMakeLists.txt initial 2023-06-18 14:27:53 +08:00
test-double-float.c all : be more strict about converting float to double (#458) 2023-03-28 19:48:20 +03:00
test-ggml-threading.c bulk refactoring of task profile and related code to run CL GPU offloading. 2023-06-18 14:27:56 +08:00
test-ggml-tune.c bulk refactoring of task profile and related code to run CL GPU offloading. 2023-06-18 14:27:56 +08:00
test-grad0.c train : improved training-from-scratch example (#1652) 2023-06-13 22:04:40 +03:00
test-opt.c ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) 2023-05-13 15:56:40 +03:00
test-quantize-fns.cpp build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
test-quantize-perf.cpp build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
test-sampling.cpp build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
test-tokenizer-0.cpp build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00