Commit Graph

  • d6aa749ccf
    Swap from exclusions to allowlist Jed Fox 2023-03-23 13:58:47 -0400
  • ea10d3ded2
    Command line args bounds checking (#424) master-ea10d3d anzz1 2023-03-23 19:54:28 +0200
  • ab02a2441c
    Move llama_progress_handler into llama_context_params Jed Fox 2023-03-23 13:36:43 -0400
  • e47924fd4b
    File load progress reporting Jed Fox 2023-03-22 13:13:29 -0400
  • 927bc26e03
    Add a Package.swift for SwiftPM support Jed Fox 2023-03-22 10:05:33 -0400
  • a18c19259a
    Fix Nix build Ben Siraphob 2023-03-22 00:37:02 -0500
  • af5ec1ba63
    Fix Nix build Ben Siraphob 2023-03-22 00:37:02 -0500
  • 1166fda943
    Merge branch 'master' into concedo Concedo 2023-03-23 23:51:07 +0800
  • bfcb4e7c92
    Turn ON PIC when BUILD_SHARED_LIBS is ON nusu-github 2023-03-24 00:23:54 +0900
  • a50e39c6fe
    Revert "Delete SHA256SUMS for now" (#429) Stephan Walter 2023-03-23 14:15:48 +0000
  • e60c31af70
    Generate library with CMake nusu-github 2023-03-23 23:12:49 +0900
  • 632a3257e1
    Add also model/tokenizer.model to SHA256SUMS + update README Pavol Rusnak 2023-03-23 15:10:32 +0100
  • d442e0210c
    Remove alpaca json Stephan Walter 2023-03-23 14:58:23 +0100
  • 2580d75522
    Remove ggml files until they can be verified Stephan Walter 2023-03-23 14:55:55 +0100
  • e0607ae91a
    Revert "Delete SHA256SUMS for now (#416)" Stephan Walter 2023-03-23 14:54:20 +0100
  • 128c503392
    Fix quantize script not finding models in parent directory Jed Fox 2023-03-23 09:03:26 -0400
  • f7dda362f2
    Merge branch 'ggerganov:master' into patch-1 RSereno 2023-03-23 12:51:42 +0000
  • 2eb9d043d3
    fix comment anzz1 2023-03-23 14:20:44 +0200
  • 8f0c8bcc8e
    unknown and invalid param exit codes 0 -> 1 anzz1 2023-03-23 14:09:49 +0200
  • c96a80a3c6
    feat: '--in-prefix STRING' option anzz1 2023-03-23 13:59:09 +0200
  • 2d01e60bc8
    command line args bounds checking anzz1 2023-03-23 13:49:27 +0200
  • a140219e81
    Fix Makefile echo escape codes (by removing them). (#418) master-a140219 Kerfuffle 2023-03-23 05:41:32 -0600
  • 8a3e5ef801
    Move model section from issue template to README.md (#421) Gary Mulder 2023-03-23 11:30:40 +0000
  • 76e82a815b
    Fix GPTQ converter Timmy Knight 2023-03-23 01:19:36 -1000
  • f58154abe0
    Fix Makefile echo escape codes (by removing them). KerfuffleV2 2023-03-23 01:58:43 -0600
  • 8eea5ae0e5
    Delete SHA256SUMS for now (#416) anzz1 2023-03-23 12:26:19 +0200
  • dbb0683293
    Updates to README.md model section Gary Mulder 2023-03-23 09:34:50 +0000
  • f2df89685f
    (Windows) Set console to UTF-8 on init anzz1 2023-03-23 11:09:09 +0200
  • 5d307f1815
    Update custom.md Gary Mulder 2023-03-23 09:02:41 +0000
  • 93208cfb92
    Adjust repetition penalty .. Georgi Gerganov 2023-03-23 10:46:58 +0200
  • 47ea33ab59
    Update README.md LostRuins 2023-03-23 16:02:19 +0800
  • 03ace14cfd
    Add link to recent podcast about whisper.cpp and llama.cpp Georgi Gerganov 2023-03-23 09:48:51 +0200
  • 10526e8c00
    Delete SHA256SUMS for now anzz1 2023-03-23 09:39:23 +0200
  • 66ea164e1d
    Kahan summation on Q4_1 q4_1_more_accel_kahan Matvey Soloviev 2023-03-23 04:28:51 +0100 (see the sketch after this graph)
  • e4412b45e3
    CI: CMake: Separate build and test steps (#376) master-e4412b4 anzz1 2023-03-23 04:20:34 +0200
  • 711224708d
    Break up loop for numeric stability q4_1_more_accel_loopsplit Matvey Soloviev 2023-03-23 03:14:44 +0100
  • ad2210bfda
    CI: CMake: Separate Build and Test steps anzz1 2023-03-23 03:33:05 +0200
  • 859e70899a
    start doing the instructions but not finished. This probably doesn't compile strikingLoo 2023-03-22 17:52:46 -0700
  • 80744d6fed
    Merge branch 'ggerganov:master' into master taher 2023-03-22 17:50:00 -0700
  • 88df270f6b
    add space to comment rabidcopy 2023-03-22 19:44:00 -0500
  • 666c5a0395
    Merge branch 'master' into interactive-eos-fix rabidcopy 2023-03-22 19:31:47 -0500
  • f7dc43bc0d
    Fix instruct mode broken by PR #354 (#409) master-f7dc43b tjohnman 2023-03-23 01:30:23 +0100
  • 84ab887349
    merge strikingLoo 2023-03-22 17:22:45 -0700
  • 7864eef92c
    tokenize newline token rabidcopy 2023-03-22 19:19:49 -0500
  • 8f83ce8380
    remove newline token rabidcopy 2023-03-22 18:53:10 -0500
  • 10206d0360
    remove newline token rabidcopy 2023-03-22 18:52:51 -0500
  • 6a4cfc4dfa
    not needed rabidcopy 2023-03-22 18:02:35 -0500
  • 4e4cfdfb67
    tokenize and inject reverse prompt as needed rabidcopy 2023-03-22 17:46:23 -0500
  • 69071d3b6b
    Squeeze out about 5% more performance in Q4_1 inference Matvey Soloviev 2023-03-21 22:55:35 +0100
  • ce339001c4
    Fix instruct mode broken by PR #354 Johnman 2023-03-22 22:23:14 +0100
  • ae1519f681
    Update tools.sh RSereno 2023-03-22 20:29:20 +0000
  • 9ea43d4d91
    Add support to batch size for perplexity Gary Linscott 2023-03-22 12:09:42 -0700
  • ee8a788786
    Update issue template so people will use it (#404) Gary Mulder 2023-03-22 19:06:18 +0000
  • a6bd606cd0
    typo Stephan Walter 2023-03-22 19:02:39 +0000
  • 49197bbd6b
    Update custom.md Gary Mulder 2023-03-22 18:06:15 +0000
  • 84ba1fd25b
    add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning Tai Duc Nguyen 2023-03-22 13:38:39 -0400
  • 3a0dcb3920
    Implement server mode. tcp_server Thiago Padilha 2023-03-22 10:41:26 -0300
  • bf44faa0ee
    Remove direct access to std streams from "run" Thiago Padilha 2023-03-22 09:55:45 -0300
  • b7f1fa6d8c
    Move llama_context setup + perplexity back to main.cpp Thiago Padilha 2023-03-22 09:39:25 -0300
  • d7d53b84db
    Add main.cpp back and invoke "run" from it Thiago Padilha 2023-03-22 09:16:33 -0300
  • 90175ee13f
    Move main.cpp to run.cpp Thiago Padilha 2023-03-22 09:05:50 -0300
  • 69c92298a9
    Deduplicate q4 quantization functions (#383) master-69c9229 Stephan Walter 2023-03-22 17:29:06 +0000
  • 97940520e8
    fix: add POSIX functionality for Linux compilation (#51) master-9794052 master-305ba6f Valentyn Bezshapkin 2023-03-22 18:20:25 +0100
  • 305ba6f0e6
    Don't force immediate interactive without -i (#354) tjohnman 2023-03-22 18:16:35 +0100
  • b29e6f318a
    Disable AVX2 flags in CI Georgi Gerganov 2023-03-22 19:08:14 +0200
  • 992ebff68b
    Re-enable quantization test Georgi Gerganov 2023-03-22 18:58:16 +0200
  • e590787ab3
    Update main.cpp rabidcopy 2023-03-22 11:43:57 -0500
  • 879da33ab4
    Update main.cpp rabidcopy 2023-03-22 11:41:19 -0500
  • 4122dffff9
    cmake: make llama an actual library (#392) master-4122dff Erik Scholz 2023-03-22 17:37:10 +0100
  • c4efdb22af
    tokenize nothing for antiprompt if no reverse rabidcopy 2023-03-22 11:22:56 -0500
  • 56e659a0b2
    fix perplexity after c-api refactor (#390) master-56e659a Erik Scholz 2023-03-22 17:09:38 +0100
  • da0837f55f
    tokenize/inject reverse prompt for refactor rabidcopy 2023-03-22 11:01:47 -0500
  • 40ea807a97
    Add details on perplexity to README.md (#395) Gary Linscott 2023-03-22 08:53:54 -0700
  • c65eff0d14
    Add details on dataset/context length Gary Linscott 2023-03-22 08:48:36 -0700
  • 9d9e152b6d
    Add details on perplexity to README.md Gary Linscott 2023-03-22 08:19:17 -0700
  • c5c1c8d5ce
    Update README.md LostRuins 2023-03-22 22:54:27 +0800
  • 4ff58f73e5
    Merge branch 'master' into concedo Concedo 2023-03-22 22:32:11 +0800
  • 86c7457e24
    Merge branch 'master' into concedo Concedo 2023-03-22 22:31:45 +0800
  • 7b77319054
    cmake: make llama an actual library Green Sky 2023-03-22 14:46:29 +0100
  • 3501b9df50
    Use const; add basic test Stephan Walter 2023-03-22 13:42:03 +0100
  • 57fee166d2
    don't create a new std::string (especially here, where it's usually large) Green Sky 2023-03-22 12:58:20 +0100
  • 7b1b575fe8
    preallocate a buffer of fitting size for tokenization (utils.cpp) Green Sky 2023-03-22 12:56:42 +0100
  • 827bcb1375
    fix perplexity after c-api refactor by providing a large enough token buffer Green Sky 2023-03-22 12:44:26 +0100
  • d5850c53ca
    Add missing header for memcpy (#386) master-d5850c5 Yusuf Kağan Hanoğlu 2023-03-22 11:55:45 +0300
  • 99acb7a352
    Added missing include for memcpy to llama.cpp niansa/tuxifan 2023-03-22 09:50:58 +0100
  • 23e75fbae9
    Build Error Fixed Yusuf Kağan Hanoğlu 2023-03-22 11:43:55 +0300
  • 5c475503ce
    resize image Concedo 2023-03-22 16:21:40 +0800
  • 4e95e7f87f
    Updated readme Concedo 2023-03-22 16:20:37 +0800
  • 5f142df76e
    dynamic max context size defaulting to 1024, also implemented the basic API as a fallback Concedo 2023-03-22 15:56:47 +0800
  • b4dfdf7a77
    Deduplicate q4 quantization functions Stephan Walter 2023-03-22 08:30:11 +0100
  • 23bb78fbdc
    add newline token rabidcopy 2023-03-22 01:55:57 -0500
  • 1752bc92eb
    add newline token rabidcopy 2023-03-22 01:55:17 -0500
  • 6fb0db31d7
    Merge branch 'master' into interactive-eos-fix rabidcopy 2023-03-22 01:54:07 -0500
  • 84130caf5e
    merge with main, move logic for embeddings into llama.cpp strikingLoo 2023-03-21 23:44:04 -0700
  • 78cff58427
    make params argument instead of hardcoded boolean. remove useless time check strikingLoo 2023-03-21 23:21:07 -0700
  • ae44e23ee3
    When seed <= 0 - use the clock to generate one master-ae44e23 master-928480e Georgi Gerganov 2023-03-22 07:47:15 +0200
  • 928480ef5b
    Init llama_context_params properly from CLI (#370) Georgi Gerganov 2023-03-22 07:45:00 +0200
  • 56817b1f88
    Remove temporary notice and update hot topics master-f5a77a6 Georgi Gerganov 2023-03-22 07:34:02 +0200
  • f5a77a629b
    Introduce C-style API (#370) Georgi Gerganov 2023-03-22 07:32:36 +0200
  • c3d13eaa4d
    Change llama_tokenize return meaning Georgi Gerganov 2023-03-22 07:27:26 +0200
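
Two commits in this graph, 66ea164e1d (Kahan summation on Q4_1) and 711224708d (Break up loop for numeric stability), target the same problem: a long floating-point accumulation in Q4_1 inference silently drops the low-order bits of each small addend. Below is a minimal illustrative sketch of Kahan (compensated) summation, not the repository's actual Q4_1 kernel (which operates on quantized blocks); it carries a correction term that recaptures the bits a plain float addition discards:

```c
#include <stdio.h>

int main(void) {
    /* Add 1.0 plus ten million copies of 1e-8. In 32-bit float, 1e-8 is
     * below the rounding step at 1.0, so the naive sum never moves; the
     * compensated sum recovers the expected ~1.1. */
    float naive = 1.0f;           /* plain running sum */
    float sum   = 1.0f;           /* compensated running sum */
    float c     = 0.0f;           /* correction: low-order bits lost so far */

    for (int i = 0; i < 10000000; i++) {
        const float v = 1e-8f;

        naive += v;               /* rounds straight back to 1.0 */

        float y = v - c;          /* re-inject the previously lost bits */
        float t = sum + y;        /* the low bits of y may be lost here */
        c = (t - sum) - y;        /* measure exactly what was lost */
        sum = t;                  /* sum advances; c carries the remainder */
    }

    printf("naive: %.7f  kahan: %.7f\n", naive, sum);
    return 0;
}
```

The loop split in 711224708d is a different route to the same end, presumably accumulating shorter partial sums so that no addend is dwarfed by the running total.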