Commit Graph

  • 31edd6fa25 add command line switch to use f16 instead of f32 for memory k+v Green Sky 2023-03-19 14:49:28 +0100 (see the KV-cache sizing sketch after this list)
  • 640b5602e6 Use F16 for memory_k and memory_v Ty Everett 2023-03-14 23:10:12 -0700
  • 9a1d2c76d0 resolve conflicts Rickey Bowers Jr 2023-03-19 11:21:31 -0600
  • 474f760411 updated binaries Concedo 2023-03-20 01:19:15 +0800
  • a097703ec4 Merge branch 'master' into concedo Concedo 2023-03-20 01:18:42 +0800
  • 29054a2bee explicit buffer allocation from python Concedo 2023-03-20 01:18:34 +0800
  • 467b149761 Refactoring convert-pth-to-ggml.py: more concise and readable (#109) qunash 2023-03-19 20:17:39 +0300
  • 6535332d69 Merge branch 'master' into master Georgi Gerganov 2023-03-19 19:17:22 +0200
  • 5ef2da2bf4 Merge branch 'master' of github.com:tjohnman/llama.cpp into eternal-interactive-mode Johnman 2023-03-19 18:06:04 +0100
  • 70f01cb863 Drop trailing new line from file prompts (#80) master-70f01cb Georgi Gerganov 2023-03-19 19:04:44 +0200
  • bb5e8ec79a Never exit the main loop in interactive mode. Johnman 2023-03-19 16:26:21 +0100
  • 356c1b87ba bugfixes and support for persistent states Concedo 2023-03-20 00:59:45 +0800
  • a4e63b73df Add instruction for using Alpaca (#240) Georgi Gerganov 2023-03-19 18:49:50 +0200
  • 9e1707218a Add "--instruct" argument for usage with Alpaca (#240) master-9e17072 Georgi Gerganov 2023-03-19 18:37:02 +0200
  • 9ef4920795 Support for multiple reverse prompts. Johnman 2023-03-19 17:29:27 +0100
  • 5e7f909ff5 Make prompt randomization optional. Johnman 2023-03-19 16:59:45 +0100
  • 80825b0173 Support for multiple reverse prompts. Johnman 2023-03-19 17:29:27 +0100
  • 1b8f8ad0ba Include n_predict to 2048 in examples/chatLLaMa Jean-Christophe Hoelt 2023-03-19 18:27:54 +0200
  • b8c383a9b9 Reduce chatLLaMa context size to 2048 Jean-Christophe Hoelt 2023-03-19 14:11:05 +0200
  • b6bcd016b1 Move chatLLaMa script to examples directory Jean-Christophe Hoelt 2023-03-19 13:56:07 +0200
  • 2aaf379982 Fix shellcheck errors and do some cleanup Jean-Christophe Hoelt 2023-03-17 08:47:12 +0200
  • fdb864a61d Add chatLLaMa script Jean-Christophe Hoelt 2023-03-16 09:54:24 +0200
  • e2bfaeb9c1 Added support for Windows and updated README to use this script Gerardo Romero 2023-03-19 10:26:38 -0600
  • c62cffc2d9 Make prompt randomization optional. Johnman 2023-03-19 16:59:45 +0100
  • b78caa6bff Pause sampling if waiting for user input. Johnman 2023-03-19 16:57:02 +0100
  • 10f1c9ed30 Never exit the main loop in interactive mode. Johnman 2023-03-19 16:26:21 +0100
  • 22213a17b5 Change RMSNorm eps to 1e-6 (#173) master-22213a1 Georgi Gerganov 2023-03-19 17:30:00 +0200 (see the RMSNorm sketch after this list)
  • acf9e522cd [WIP] x86 performance improvements Steven han 2023-03-19 09:59:43 -0400
  • aa79d7d40e Remove torchvision torchaudio, add requests Stephan Walter 2023-03-19 13:58:04 +0000
  • 14e98b8e13 Add tqdm to Python requirements Stephan Walter 2023-03-19 12:18:05 +0000
  • a8f0e40e30 Fix scripts to support cross-platform execution Aizaixyq 2023-03-19 17:07:19 +0800
  • 1d7e32bba7 bugfix: centos 7, gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) std::string messes up vocab Lou Xiao 2023-03-19 17:03:31 +0800
  • 048c8abacb interactive mode: print '\n' in sigint_handler; this flushes stdout and ensures the color reset mqy 2023-03-19 14:50:20 +0800
  • d2b1d3a439 typo strikingLoo 2023-03-18 23:36:36 -0700
  • f22ae5729f Merge branch 'master' of https://github.com/StrikingLoo/llama.cpp strikingLoo 2023-03-18 23:34:38 -0700
  • 801071ec4f add arg flag, not working on embedding mode strikingLoo 2023-03-18 23:34:20 -0700
  • c028226704 Corrected to use the original glob pattern Gerardo Romero 2023-03-19 00:21:37 -0600
  • 1602ca681c Fix tokenization for variable-length characters yuguorui 2023-03-19 13:37:24 +0800
  • 01237dd6f1 Small fixes to the previous commit SuajCarrot 2023-03-18 21:58:55 -0600
  • 2ab33114de Fixes and improvements based on Matt's observations SuajCarrot 2023-03-18 21:36:40 -0600
  • f952b7c613 Removed junk, fixed some bugs and support dynamic number of sharded files Concedo 2023-03-19 11:13:00 +0800
  • f3d0530ed3 Update README.md gyunggyung 2023-03-19 09:55:23 +0900
  • d7def1a752 Warn user if a context size greater than 2048 tokens is specified (#274) master-d7def1a Ronsor 2023-03-18 17:10:47 -0700
  • 052027d41d Warn user if a context size greater than 2048 is specified Ronsor 2023-03-18 16:14:49 -0700
  • 6f61c18ec9 Fix typo in readme Pavol Rusnak 2023-03-18 22:39:46 +0100
  • 8055a430a5 Fix typo in readme Pavol Rusnak 2023-03-18 22:39:46 +0100
  • ff4032538b Added script to invoke alpaca model Taher 2023-03-18 14:38:02 -0700
  • 1e5a6d088d Add note about Python 3.11 to readme Pavol Rusnak 2023-03-18 22:20:04 +0100
  • 554b541521 Add memory/disk requirements to readme Pavol Rusnak 2023-03-18 21:58:46 +0100
  • 8cb60021fa Add note about Python 3.11 to readme Pavol Rusnak 2023-03-18 22:20:04 +0100
  • b97df76c54 working but ugly strikingLoo 2023-03-18 14:10:16 -0700
  • 5d83a294d1 Add memory/disk requirements to readme Pavol Rusnak 2023-03-18 21:58:46 +0100
  • e94bd9c7b9 Compute perplexity over prompt Gary Linscott 2023-03-18 14:03:20 -0700 (see the perplexity sketch after this list)
  • 3a208b917b Merge pull request #42 from MariusCiocanel/master Kevin Kwok 2023-03-18 13:57:20 -0700
  • ad0f01b366 Merge pull request #56 from anzz1/patch-2 Kevin Kwok 2023-03-18 13:56:08 -0700
  • 60c84e6735 Merge pull request #54 from NatoBoram/feature/gitignore-chat Kevin Kwok 2023-03-18 13:55:19 -0700
  • 1b19586681 Init the var too anzz1 2023-03-18 22:21:58 +0200
  • f69062f68e Do the windows ANSI color fix properly anzz1 2023-03-18 21:51:12 +0200 (see the console-color sketch after this list)
  • c0e1cb53c7 🙈 Add output chat to .gitignore Nato Boram 2023-03-18 14:45:10 -0400
  • c21c89edca Update README.md LostRuins 2023-03-19 00:50:03 +0800
  • 42f307ef6a Update README.md LostRuins 2023-03-19 00:21:59 +0800
  • 2b188521a1 Merge branch 'ggerganov:master' into concedo LostRuins 2023-03-19 00:20:09 +0800
  • 5a6f3b01bd update readme Concedo 2023-03-19 00:19:34 +0800
  • 0dc3ab930c Updated binaries Concedo 2023-03-19 00:09:00 +0800
  • e3d85aa08b Merge branch 'master' into concedo Concedo 2023-03-19 00:07:32 +0800
  • 2c8f870f53 Created Python bindings for llama.cpp and emulated a simple Kobold HTTP API endpoint Concedo 2023-03-19 00:07:11 +0800
  • edc17cfa9f Remove direct access to std streams from llama_main Thiago Padilha 2023-03-18 12:20:20 -0300
  • 1088d2dd04 Move model loading back to main.cpp Thiago Padilha 2023-03-18 12:12:00 -0300
  • e3648474d6 Add main.cpp back, and invoke llama_main from it Thiago Padilha 2023-03-18 11:58:11 -0300
  • 82e70dbfe0 Move struct definitions in llama.cpp to llama.h Thiago Padilha 2023-03-18 11:52:55 -0300
  • 51d003e885 Move main.cpp to llama.cpp Thiago Padilha 2023-03-18 11:49:09 -0300
  • b64ca1c07c Merge pull request #40 from rupeshs/windows-console-ansi-color-fix Kevin Kwok 2023-03-18 07:37:29 -0700
  • d3f202d57b Remove unused code since n_vocab is model.hparams.n_vocab (#262) master-d3f202d Alex Nguyen 2023-03-18 20:51:49 +0700
  • fd73543510 make publishable Emanuel Seemann 2023-03-18 14:34:32 +0100
  • 8bb0dd55f4 Merge pull request #1 from MariusCiocanel/MariusCiocanel-curl-instead-of-wget-1 Marius Ciocanel 2023-03-18 13:12:18 +0000
  • bb60fdaf32 Update command for downloading the weights to use curl Marius Ciocanel 2023-03-18 13:10:25 +0000
  • 51fa40be1b Remove unused code since n_vocab is model.hparams.n_vocab Tien Dung 2023-03-18 19:31:24 +0700
  • 60f519c74a add self to license Emanuel Seemann 2023-03-18 13:29:43 +0100
  • 092393781f add all llamacpypy Emanuel Seemann 2023-03-18 13:26:50 +0100
  • eb3d30e53d add modules Emanuel Seemann 2023-03-18 13:24:14 +0100
  • e03e359730 fixed warning with std::ignore about unused function result (#151) Justin Suess 2023-03-18 07:44:09 -0400
  • a81d0c2a17 Fix n^2 loop in tokenization (#254) Gary Linscott 2023-03-18 04:17:19 -0700 (see the tokenizer sketch after this list)
  • a83e2e7a24 Windows console ANSI color issue fixed Rupesh Sreeraman 2023-03-18 16:41:20 +0530
  • a44ccef6ac Merge branch 'master' into optimize-convert tpoisonooo 2023-03-18 18:41:28 +0800
  • 4a524c51ba commenting out aarch antimatter15 2023-03-18 01:15:51 -0700
  • ddc4e24cb8 maybe macos-arm64 is case sensitive antimatter15 2023-03-18 01:11:02 -0700
  • 564b861bac archiving artifacts antimatter15 2023-03-18 01:06:44 -0700
  • 3f7d187b6b more copying stuff antimatter15 2023-03-18 00:55:00 -0700
  • 1c62e35984 create release antimatter15 2023-03-18 00:37:34 -0700
  • 7e126618c4 ci releases for mac and linux antimatter15 2023-03-18 00:34:53 -0700
  • 501a8e19d9 adding to credit section antimatter15 2023-03-18 00:28:05 -0700
  • 1cb9215e5d removing random prompt generation antimatter15 2023-03-18 00:19:52 -0700
  • e95e64bd49 Implement non-greedy tokenizer that tries to maximize token lengths (#242) thement 2023-03-17 21:05:58 +0100
  • b2de7f18df CI Improvements (#230) anzz1 2023-03-18 09:27:12 +0200
  • 7b24407613 Merge pull request #31 from anzz1/ci_test Kevin Kwok 2023-03-18 00:06:16 -0700
  • 96e0519ae1 extending context window antimatter15 2023-03-17 23:46:31 -0700
  • 97d327e1bf Update chat.cpp Kevin Kwok 2023-03-17 23:43:09 -0700
  • 7cd84a7027 Update README.md Kevin Kwok 2023-03-17 22:57:27 -0700
  • 4cf24a4df4 Fix n^2 loop in tokenization Gary Linscott 2023-03-17 22:34:11 -0700
  • 0b5448a3a4 Implement system polyfill for win32 / posix.1 Justine Tunney 2023-03-17 21:22:40 -0700
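
Notes on selected commits

KV-cache sizing (31edd6fa25, 640b5602e6): the memory_k/memory_v tensors hold one key and one value per layer, context position, and embedding dimension, so storing them as f16 instead of f32 halves the cache footprint. A minimal sketch of the arithmetic; kv_cache_bytes and its parameters are illustrative stand-ins, not the actual llama.cpp API:

    // The KV cache stores n_layer * n_ctx * n_embd values for K and as many
    // again for V, so halving the element size halves the cache footprint.
    #include <cstdio>
    #include <cstddef>

    static size_t kv_cache_bytes(int n_layer, int n_ctx, int n_embd, bool use_f16) {
        const size_t elem_size = use_f16 ? 2 : 4;          // bytes per f16 vs f32
        const size_t elems     = (size_t) n_layer * n_ctx * n_embd;
        return 2 * elems * elem_size;                      // K tensor + V tensor
    }

    int main() {
        // 7B-class model: 32 layers, 4096-dim embeddings, 512-token context
        printf("f32 KV cache: %zu MiB\n", kv_cache_bytes(32, 512, 4096, false) >> 20);
        printf("f16 KV cache: %zu MiB\n", kv_cache_bytes(32, 512, 4096, true)  >> 20);
        return 0;
    }

For these 7B-class numbers the cache drops from 512 MiB to 256 MiB, which is why the choice is worth a command-line switch.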
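RMSNorm eps (22213a17b5): RMSNorm divides each activation by sqrt(mean(x^2) + eps), so eps only matters when activations are tiny, but an oversized value skews the normalization. A minimal sketch of where the 1e-6 enters; the learned per-channel weight multiply is omitted:

    #include <cmath>
    #include <vector>

    // y[i] = x[i] / sqrt(mean(x^2) + eps); commit 22213a17b5 sets eps = 1e-6
    void rms_norm(std::vector<float> & x, float eps = 1e-6f) {
        double mean_sq = 0.0;
        for (float v : x) mean_sq += (double) v * v;       // sum of squares
        mean_sq /= x.size();                               // mean(x^2)
        const float scale = 1.0f / std::sqrt((float) mean_sq + eps);
        for (float & v : x) v *= scale;
    }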
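Perplexity (e94bd9c7b9): perplexity over a prompt is exp of the average negative log-likelihood the model assigns to each token given its predecessors. A minimal sketch, assuming the per-token probabilities have already been read off the model's softmax output:

    #include <cmath>
    #include <vector>

    double perplexity(const std::vector<double> & token_probs) {
        double nll = 0.0;
        for (double p : token_probs) nll -= std::log(p);   // negative log-likelihood
        return std::exp(nll / token_probs.size());
    }
    // Example: a model assigning every token probability 0.25 scores perplexity 4.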
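Console colors (a83e2e7a24, f69062f68e): classic Windows consoles ignore ANSI escape codes until the process opts into virtual-terminal processing, so colored interactive-mode output prints as literal escape sequences. The usual Win32 fix, sketched here rather than copied from the commits:

    #ifdef _WIN32
    #include <windows.h>
    #endif

    // Opt the console into VT escape-code handling so ANSI colors render.
    bool enable_ansi_colors() {
    #ifdef _WIN32
        HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
        if (h == INVALID_HANDLE_VALUE) return false;
        DWORD mode = 0;
        if (!GetConsoleMode(h, &mode)) return false;
        return SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0;
    #else
        return true;   // POSIX terminals handle ANSI escapes natively
    #endif
    }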
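Tokenizer (e95e64bd49, a81d0c2a17): a non-greedy tokenizer picks, among all ways to split the input into vocabulary tokens, a split that favors longer tokens; bounding the inner lookup by the longest token in the vocabulary keeps the scan at O(n * max_token_len) rather than O(n^2). A sketch of such a dynamic program under assumed stand-in structures (a plain string-to-id map rather than llama.cpp's actual vocab):

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Scores a split by the sum of squared token lengths, favoring long tokens.
    std::vector<int> tokenize(const std::string & text,
                              const std::unordered_map<std::string, int> & vocab,
                              size_t max_token_len) {
        const size_t n = text.size();
        std::vector<long long> best(n + 1, -1);   // best[i] = best score for text[0..i)
        std::vector<size_t>    back(n + 1, 0);    // length of last token in that split
        best[0] = 0;
        for (size_t i = 0; i < n; ++i) {
            if (best[i] < 0) continue;            // prefix not reachable
            const size_t cap = std::min(max_token_len, n - i);
            for (size_t len = 1; len <= cap; ++len) {
                if (!vocab.count(text.substr(i, len))) continue;
                const long long score = best[i] + (long long) (len * len);
                if (score > best[i + len]) { best[i + len] = score; back[i + len] = len; }
            }
        }
        std::vector<int> ids;                     // walk back to recover the tokens
        for (size_t i = n; i > 0; i -= back[i]) {
            if (back[i] == 0) return {};          // input not coverable by the vocab
            ids.push_back(vocab.at(text.substr(i - back[i], back[i])));
        }
        std::reverse(ids.begin(), ids.end());
        return ids;
    }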