mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-11-11 21:39:52 +00:00
438c2ca830
* implementing parallel decoding in server example
* crash fixed
* save dev progress
* refactored sampling function
* completion endpoint working
* multiple client support
* grammar + no stream completion
* cached prompt support
* chat.mjs support cached prompt + some fixes
* server ui now support multiple clients
* unused change reverted
* fixed timings per slot
* add context swap
* add changes to README.md
* llava multimodal integration
* fixed tokens probs
* add multimodal input - alfa
* refactor code + remove unused comments + improved README.md
* fix compilation errors with llvm
* notify the user from server ui that multimodality is unavialable
* some ci fixes
* fix ci make build undefined ref errors
* fix long prompt than ctx proposed in #3639
* fixed premature end due stop word
* context shift fixed
* fix llava implementation
* sync README.md changes
* readme change
* update api like OpenAI
* multimodal support enabled by default
* fix make bui;d errors
* fix multiple clients
* fix zig build
* new sampling API
* latest changes of sampling API
* server : coding-style normalization
* server : coding-style normalization (part 2)
* server : remove beam-search functionality
* server : bug fix in ingest_images — n_tokens is incremented internally by llama_batch_add
* server : use refs + use llama_batch_clear()
* server : snake case
* server : minor sync
* added thread safe pipeline
* server : bach has to be allocated for n_parallel sequences
* server : no need for atomic int - already using mutex
* server : logs + minor code style
* server : fix multibyte handle in partial response (#3706)
* fix image load + view image in chat
* make : silence stb warnings
* clip : link to ggml, not to llama
* server : fix switch fallthrough
* server : fix crash in Debug on macOS (I have no idea why this fixes it!?)
* server : refactor ctx_sampling init + n_ctx + names
* server : bug fix for prompt caching
* Do not save/load image_data to localStorage
* editorconfig : new line in index.html
* server : completion requests remember slot_id
* Update readme to document multimodal in server
* server : minor style
* Update readme to document multimodal in server
* server : hide ctx_sampling->prev behind API (#3696)
* server : apply fix from #3722
* server : fix slot reuse
* server : add comment about changing slot_state to bool

Co-authored-by: FSSRepo <go778sgt@gmail.com>
Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: Steward Garcia <57494570+FSSRepo@users.noreply.github.com>
Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
100 lines
1.1 KiB
Plaintext
*.o
*.a
*.so
*.gguf
*.bin
*.exe
*.dll
*.log
*.gcov
*.gcno
*.gcda
*.dot
*.bat
*.metallib
.DS_Store
.build/
.cache/
.direnv/
.envrc
.swiftpm
.venv
.clang-tidy
.vs/
.vscode/

lcov-report/
gcovr-report/

build*/
out/
tmp/

models/*
models-mnt

/Pipfile
/baby-llama
/beam-search
/benchmark-matmult
/convert-llama2c-to-ggml
/embd-input-test
/embedding
/gguf
/gguf-llama-simple
/infill
/libllama.so
/llama-bench
/llava
/main
/metal
/perplexity
/q8dot
/quantize
/quantize-stats
/result
/save-load-state
/server
/simple
/batched
/batched-bench
/export-lora
/finetune
/speculative
/parallel
/train-text-from-scratch
/vdot
build-info.h
arm_neon.h
compile_commands.json
CMakeSettings.json

__pycache__
dist

zig-out/
zig-cache/

ppl-*.txt
qnt-*.txt
perf-*.txt

examples/jeopardy/results.txt

poetry.lock
poetry.toml

# Test binaries
tests/test-grammar-parser
tests/test-llama-grammar
tests/test-double-float
tests/test-grad0
tests/test-opt
tests/test-quantize-fns
tests/test-quantize-perf
tests/test-sampling
tests/test-tokenizer-0-llama
tests/test-tokenizer-0-falcon
tests/test-tokenizer-1-llama
tests/test-tokenizer-1-bpe
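As a quick sanity check, patterns like the ones above can be exercised with `git check-ignore` in a throwaway repository. This is a sketch, not part of the repository; it assumes `git` is on `PATH`, and the file names tested below are hypothetical examples, not files from this project:

```shell
# Create a scratch repository and a minimal .gitignore with three
# representative patterns from the list above.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '%s\n' '*.o' 'build*/' 'models/*' > .gitignore

# `git check-ignore -q` exits 0 when the given path would be ignored.
# The paths need not exist; matching is done on the pathname alone.
git check-ignore -q main.o && echo "main.o is ignored"            # extension glob
git check-ignore -q build-cuda/ggml.o && echo "build-cuda/ hit"   # directory glob
git check-ignore -q models/ggml-model.gguf && echo "models/* hit" # contents-only glob

# Clean up the scratch repository.
cd - >/dev/null
rm -rf "$tmp"
```

Note the difference between `build*/` and `models/*`: the former ignores any matching directory itself, while the latter ignores the directory's contents but not the `models` directory, which is why `models-mnt` needs its own entry.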