llama.cpp/examples/llama.android/llama
Latest commit: Diego Devesa · 9177484f58
ggml : fix arm build (#10890)
* ggml: GGML_NATIVE uses -mcpu=native on ARM

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml: Show detected features with GGML_NATIVE

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* remove msvc support, add GGML_CPU_ARM_ARCH option

* disable llamafile in android example

* march -> mcpu, skip adding feature macros

ggml-ci

---------

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
Co-authored-by: Adrien Gallouët <angt@huggingface.co>
Committed: 2024-12-18 23:21:42 +01:00
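
The build.gradle.kts listed below is where the Android example forwards ggml build options to CMake. The sketch that follows shows how the changes described in the commit notes above might be wired up there; it is illustrative only. The plugin ids, namespace, SDK levels, CMakeLists.txt path, CMake version, and the example architecture string are assumptions, not taken from this commit. GGML_CPU_ARM_ARCH is the option named above; GGML_LLAMAFILE is my reading of "disable llamafile in android example".

    plugins {
        id("com.android.library")
        id("org.jetbrains.kotlin.android")
    }

    android {
        namespace = "android.llama.cpp"   // hypothetical namespace for the example module
        compileSdk = 34

        defaultConfig {
            minSdk = 33

            externalNativeBuild {
                cmake {
                    // "disable llamafile in android example" (commit note above);
                    // GGML_LLAMAFILE is assumed to be the corresponding CMake switch.
                    arguments += "-DGGML_LLAMAFILE=OFF"
                    // With GGML_NATIVE, ARM builds use -mcpu=native, which only makes
                    // sense when compiling on the target device. For a cross-compiled
                    // Android build, the new GGML_CPU_ARM_ARCH option can pin the
                    // CPU/arch instead; the value below is only an illustration.
                    arguments += "-DGGML_CPU_ARM_ARCH=armv8.2-a+fp16+dotprod"
                }
            }
        }

        externalNativeBuild {
            cmake {
                // Assumed location of the CMakeLists.txt that pulls in llama.cpp/ggml.
                path("src/main/cpp/CMakeLists.txt")
                version = "3.22.1"
            }
        }
    }
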
Name                Last commit                                                                Committed
src                 llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745)   2024-10-18 23:18:01 +02:00
.gitignore          android : module (#7502)                                                   2024-05-25 11:11:33 +03:00
build.gradle.kts    ggml : fix arm build (#10890)                                              2024-12-18 23:21:42 +01:00
consumer-rules.pro  android : module (#7502)                                                   2024-05-25 11:11:33 +03:00
proguard-rules.pro  android : module (#7502)                                                   2024-05-25 11:11:33 +03:00