#include <algorithm>
#include <array>
#include <cassert>
#include <chrono>
#include <cinttypes>
#include <clocale>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <iterator>
#include <map>
#include <numeric>
#include <regex>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

#include "common.h"
#include "ggml.h"
#include "llama.h"

#ifdef _WIN32
#    define WIN32_LEAN_AND_MEAN
#    ifndef NOMINMAX
#        define NOMINMAX
#    endif
#    include <windows.h>
#endif

// utils

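// wall-clock timestamp in nanoseconds, used to time the benchmark runs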
static uint64_t get_time_ns() {
    using clock = std::chrono::high_resolution_clock;
    return std::chrono::nanoseconds(clock::now().time_since_epoch()).count();
}

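// join the elements of a vector into a single delimited string,
// e.g. join(std::vector<int>{1, 2, 3}, ",") -> "1,2,3"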
template <class T> static std::string join(const std::vector<T> & values, const std::string & delim) {
    std::ostringstream str;
    for (size_t i = 0; i < values.size(); i++) {
        str << values[i];
        if (i < values.size() - 1) {
            str << delim;
        }
    }
    return str.str();
}

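// map each element through f and collect the results as strings
// (used below to render enum-valued defaults in the usage text)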
template <typename T, typename F> static std::vector<std::string> transform_to_str(const std::vector<T> & values, F f) {
    std::vector<std::string> str_values;
    std::transform(values.begin(), values.end(), std::back_inserter(str_values), f);
    return str_values;
}

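// arithmetic mean; an empty vector yields 0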
template <typename T> static T avg(const std::vector<T> & v) {
    if (v.empty()) {
        return 0;
    }
    T sum = std::accumulate(v.begin(), v.end(), T(0));
    return sum / (T) v.size();
}

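// sample standard deviation computed from the sum of squares:
//   stdev = sqrt((sum(x_i^2) - n*mean^2) / (n - 1))
// which is algebraically identical to sqrt(sum((x_i - mean)^2) / (n - 1))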
template <typename T> static T stdev(const std::vector<T> & v) {
    if (v.size() <= 1) {
        return 0;
    }
    T mean   = avg(v);
    T sq_sum = std::inner_product(v.begin(), v.end(), v.begin(), T(0));
    T stdev  = std::sqrt(sq_sum / (T) (v.size() - 1) - mean * mean * (T) v.size() / (T) (v.size() - 1));
    return stdev;
}

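// descriptions of all CPU/accelerator backend devices, joined with ", "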
static std::string get_cpu_info() {
    std::vector<std::string> cpu_list;
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        auto * dev      = ggml_backend_dev_get(i);
        auto   dev_type = ggml_backend_dev_type(dev);
        if (dev_type == GGML_BACKEND_DEVICE_TYPE_CPU || dev_type == GGML_BACKEND_DEVICE_TYPE_ACCEL) {
            cpu_list.push_back(ggml_backend_dev_description(dev));
        }
    }
    return join(cpu_list, ", ");
}

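// descriptions of all GPU backend devices, joined with ", "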
static std::string get_gpu_info() {
    std::vector<std::string> gpu_list;
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        auto * dev      = ggml_backend_dev_get(i);
        auto   dev_type = ggml_backend_dev_type(dev);
        if (dev_type == GGML_BACKEND_DEVICE_TYPE_GPU) {
            gpu_list.push_back(ggml_backend_dev_description(dev));
        }
    }
    return join(gpu_list, ", ");
}

// command line params

enum output_formats { NONE, CSV, JSON, JSONL, MARKDOWN, SQL };

static const char * output_format_str(output_formats format) {
    switch (format) {
        case NONE:
            return "none";
        case CSV:
            return "csv";
        case JSON:
            return "json";
        case JSONL:
            return "jsonl";
        case MARKDOWN:
            return "md";
        case SQL:
            return "sql";
        default:
            GGML_ABORT("invalid output format");
    }
}

static bool output_format_from_str(const std::string & s, output_formats & format) {
    if (s == "none") {
        format = NONE;
    } else if (s == "csv") {
        format = CSV;
    } else if (s == "json") {
        format = JSON;
    } else if (s == "jsonl") {
        format = JSONL;
    } else if (s == "md") {
        format = MARKDOWN;
    } else if (s == "sql") {
        format = SQL;
    } else {
        return false;
    }
    return true;
}

static const char * split_mode_str(llama_split_mode mode) {
    switch (mode) {
        case LLAMA_SPLIT_MODE_NONE:
            return "none";
        case LLAMA_SPLIT_MODE_LAYER:
            return "layer";
        case LLAMA_SPLIT_MODE_ROW:
            return "row";
        default:
            GGML_ABORT("invalid split mode");
    }
}

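// format a (prompt, generation) pair as "pp,tg"
// note: uses a static scratch buffer, so concurrent calls would race;
// that is fine for this single-threaded CLI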
static std::string pair_str(const std::pair<int, int> & p) {
    static char buf[32];
    snprintf(buf, sizeof(buf), "%d,%d", p.first, p.second);
    return buf;
}

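// benchmark parameters; each vector-valued field holds every value requested on
// the command line, and the bench runs one test per combination of these values
// (multiple values are comma-separated or given by repeating the flag).
// An all-zero cpu_mask ("0x0") means "use the default (usually inherited)
// CPU affinity mask" rather than pinning to core 0.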
struct cmd_params {
    std::vector<std::string>         model;
    std::vector<int>                 n_prompt;
    std::vector<int>                 n_gen;
    std::vector<std::pair<int, int>> n_pg;
    std::vector<int>                 n_batch;
    std::vector<int>                 n_ubatch;
    std::vector<ggml_type>           type_k;
    std::vector<ggml_type>           type_v;
    std::vector<int>                 n_threads;
    std::vector<std::string>         cpu_mask;
    std::vector<bool>                cpu_strict;
    std::vector<int>                 poll;
    std::vector<int>                 n_gpu_layers;
    std::vector<std::string>         rpc_servers;
    std::vector<llama_split_mode>    split_mode;
    std::vector<int>                 main_gpu;
    std::vector<bool>                no_kv_offload;
    std::vector<bool>                flash_attn;
    std::vector<std::vector<float>>  tensor_split;
    std::vector<bool>                use_mmap;
    std::vector<bool>                embeddings;
    ggml_numa_strategy               numa;
    int                              reps;
    ggml_sched_priority              prio;
    int                              delay;
    bool                             verbose;
    bool                             progress;
    output_formats                   output_format;
    output_formats                   output_format_stderr;
};

static const cmd_params cmd_params_defaults = {
    /* model                */ { "models/7B/ggml-model-q4_0.gguf" },
    /* n_prompt             */ { 512 },
    /* n_gen                */ { 128 },
    /* n_pg                 */ {},
    /* n_batch              */ { 2048 },
    /* n_ubatch             */ { 512 },
    /* type_k               */ { GGML_TYPE_F16 },
    /* type_v               */ { GGML_TYPE_F16 },
    /* n_threads            */ { cpu_get_num_math() },
    /* cpu_mask             */ { "0x0" },
    /* cpu_strict           */ { false },
    /* poll                 */ { 50 },
    /* n_gpu_layers         */ { 99 },
    /* rpc_servers          */ { "" },
    /* split_mode           */ { LLAMA_SPLIT_MODE_LAYER },
    /* main_gpu             */ { 0 },
    /* no_kv_offload        */ { false },
    /* flash_attn           */ { false },
    /* tensor_split         */ { std::vector<float>(llama_max_devices(), 0.0f) },
    /* use_mmap             */ { true },
    /* embeddings           */ { false },
    /* numa                 */ GGML_NUMA_STRATEGY_DISABLED,
    /* reps                 */ 5,
    /* prio                 */ GGML_SCHED_PRIO_NORMAL,
    /* delay                */ 0,
    /* verbose              */ false,
    /* progress             */ false,
    /* output_format        */ MARKDOWN,
    /* output_format_stderr */ NONE,
};

static void print_usage(int /* argc */, char ** argv) {
    printf("usage: %s [options]\n", argv[0]);
    printf("\n");
    printf("options:\n");
    printf("  -h, --help\n");
    printf("  -m, --model <filename>                    (default: %s)\n", join(cmd_params_defaults.model, ",").c_str());
    printf("  -p, --n-prompt <n>                        (default: %s)\n",
           join(cmd_params_defaults.n_prompt, ",").c_str());
    printf("  -n, --n-gen <n>                           (default: %s)\n", join(cmd_params_defaults.n_gen, ",").c_str());
    printf("  -pg <pp,tg>                               (default: %s)\n",
           join(transform_to_str(cmd_params_defaults.n_pg, pair_str), ",").c_str());
    printf("  -b, --batch-size <n>                      (default: %s)\n",
           join(cmd_params_defaults.n_batch, ",").c_str());
    printf("  -ub, --ubatch-size <n>                    (default: %s)\n",
           join(cmd_params_defaults.n_ubatch, ",").c_str());
    printf("  -ctk, --cache-type-k <t>                  (default: %s)\n",
           join(transform_to_str(cmd_params_defaults.type_k, ggml_type_name), ",").c_str());
    printf("  -ctv, --cache-type-v <t>                  (default: %s)\n",
           join(transform_to_str(cmd_params_defaults.type_v, ggml_type_name), ",").c_str());
    printf("  -t, --threads <n>                         (default: %s)\n",
           join(cmd_params_defaults.n_threads, ",").c_str());
    printf("  -C, --cpu-mask <hex,hex>                  (default: %s)\n",
           join(cmd_params_defaults.cpu_mask, ",").c_str());
    printf("  --cpu-strict <0|1>                        (default: %s)\n",
           join(cmd_params_defaults.cpu_strict, ",").c_str());
    printf("  --poll <0...100>                          (default: %s)\n", join(cmd_params_defaults.poll, ",").c_str());
    printf("  -ngl, --n-gpu-layers <n>                  (default: %s)\n",
           join(cmd_params_defaults.n_gpu_layers, ",").c_str());
    if (llama_supports_rpc()) {
        printf("  -rpc, --rpc <rpc_servers>                 (default: %s)\n",
               join(cmd_params_defaults.rpc_servers, ",").c_str());
    }
    printf("  -sm, --split-mode <none|layer|row>        (default: %s)\n",
           join(transform_to_str(cmd_params_defaults.split_mode, split_mode_str), ",").c_str());
    printf("  -mg, --main-gpu <i>                       (default: %s)\n",
           join(cmd_params_defaults.main_gpu, ",").c_str());
    printf("  -nkvo, --no-kv-offload <0|1>              (default: %s)\n",
           join(cmd_params_defaults.no_kv_offload, ",").c_str());
    printf("  -fa, --flash-attn <0|1>                   (default: %s)\n",
           join(cmd_params_defaults.flash_attn, ",").c_str());
    printf("  -mmp, --mmap <0|1>                        (default: %s)\n",
           join(cmd_params_defaults.use_mmap, ",").c_str());
    printf("  --numa <distribute|isolate|numactl>       (default: disabled)\n");
    printf("  -embd, --embeddings <0|1>                 (default: %s)\n",
           join(cmd_params_defaults.embeddings, ",").c_str());
    printf("  -ts, --tensor-split <ts0/ts1/..>          (default: 0)\n");
    printf("  -r, --repetitions <n>                     (default: %d)\n", cmd_params_defaults.reps);
    printf("  --prio <0|1|2|3>                          (default: %d)\n", cmd_params_defaults.prio);
    printf("  --delay <0...N> (seconds)                 (default: %d)\n", cmd_params_defaults.delay);
    printf("  -o, --output <csv|json|jsonl|md|sql>      (default: %s)\n",
           output_format_str(cmd_params_defaults.output_format));
    printf("  -oe, --output-err <csv|json|jsonl|md|sql> (default: %s)\n",
           output_format_str(cmd_params_defaults.output_format_stderr));
    printf("  -v, --verbose                             (default: %s)\n", cmd_params_defaults.verbose ? "1" : "0");
    printf("  --progress                                (default: %s)\n", cmd_params_defaults.progress ? "1" : "0");
    printf("\n");
    printf(
        "Multiple values can be given for each parameter by separating them with ',' or by specifying the parameter "
        "multiple times.\n");
}

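// map a KV cache type name (as accepted by -ctk/-ctv) to the corresponding
// ggml type; GGML_TYPE_COUNT doubles as the "unknown name" sentinel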
static ggml_type ggml_type_from_name(const std::string & s) {
    if (s == "f16") {
        return GGML_TYPE_F16;
    }
    if (s == "bf16") {
        return GGML_TYPE_BF16;
    }
    if (s == "q8_0") {
        return GGML_TYPE_Q8_0;
    }
    if (s == "q4_0") {
        return GGML_TYPE_Q4_0;
    }
    if (s == "q4_1") {
        return GGML_TYPE_Q4_1;
    }
    if (s == "q5_0") {
        return GGML_TYPE_Q5_0;
    }
    if (s == "q5_1") {
        return GGML_TYPE_Q5_1;
    }
    if (s == "iq4_nl") {
        return GGML_TYPE_IQ4_NL;
    }

    return GGML_TYPE_COUNT;
}

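// parse argv into cmd_params:
// - '_' in long options is normalized to '-', so e.g. --n_prompt == --n-prompt
// - comma-separated values are split and appended, so repeating a flag
//   accumulates values instead of overwriting them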
static cmd_params parse_cmd_params(int argc, char ** argv) {
    cmd_params        params;
    std::string       arg;
    bool              invalid_param = false;
    const std::string arg_prefix    = "--";
    const char        split_delim   = ',';

    params.verbose              = cmd_params_defaults.verbose;
    params.output_format        = cmd_params_defaults.output_format;
    params.output_format_stderr = cmd_params_defaults.output_format_stderr;
    params.reps                 = cmd_params_defaults.reps;
    params.numa                 = cmd_params_defaults.numa;
    params.prio                 = cmd_params_defaults.prio;
    params.delay                = cmd_params_defaults.delay;
    params.progress             = cmd_params_defaults.progress;

    for (int i = 1; i < argc; i++) {
        arg = argv[i];
        if (arg.compare(0, arg_prefix.size(), arg_prefix) == 0) {
            std::replace(arg.begin(), arg.end(), '_', '-');
        }

        if (arg == "-h" || arg == "--help") {
            print_usage(argc, argv);
            exit(0);
        } else if (arg == "-m" || arg == "--model") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<std::string>(argv[i], split_delim);
            params.model.insert(params.model.end(), p.begin(), p.end());
        } else if (arg == "-p" || arg == "--n-prompt") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.n_prompt.insert(params.n_prompt.end(), p.begin(), p.end());
        } else if (arg == "-n" || arg == "--n-gen") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.n_gen.insert(params.n_gen.end(), p.begin(), p.end());
        } else if (arg == "-pg") {
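            // a "pp,tg" pair: prompt length and generation length for a
            // combined prompt-processing + text-generation test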
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<std::string>(argv[i], ',');
            if (p.size() != 2) {
                invalid_param = true;
                break;
            }
            params.n_pg.push_back({ std::stoi(p[0]), std::stoi(p[1]) });
        } else if (arg == "-b" || arg == "--batch-size") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.n_batch.insert(params.n_batch.end(), p.begin(), p.end());
        } else if (arg == "-ub" || arg == "--ubatch-size") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.n_ubatch.insert(params.n_ubatch.end(), p.begin(), p.end());
        } else if (arg == "-ctk" || arg == "--cache-type-k") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<std::string>(argv[i], split_delim);
            std::vector<ggml_type> types;
            for (const auto & t : p) {
                ggml_type gt = ggml_type_from_name(t);
                if (gt == GGML_TYPE_COUNT) {
                    invalid_param = true;
                    break;
                }
                types.push_back(gt);
            }
            if (invalid_param) {
                break;
            }
            params.type_k.insert(params.type_k.end(), types.begin(), types.end());
        } else if (arg == "-ctv" || arg == "--cache-type-v") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<std::string>(argv[i], split_delim);
            std::vector<ggml_type> types;
            for (const auto & t : p) {
                ggml_type gt = ggml_type_from_name(t);
                if (gt == GGML_TYPE_COUNT) {
                    invalid_param = true;
                    break;
                }
                types.push_back(gt);
            }
            if (invalid_param) {
                break;
            }
            params.type_v.insert(params.type_v.end(), types.begin(), types.end());
        } else if (arg == "-t" || arg == "--threads") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.n_threads.insert(params.n_threads.end(), p.begin(), p.end());
        } else if (arg == "-C" || arg == "--cpu-mask") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<std::string>(argv[i], split_delim);
            params.cpu_mask.insert(params.cpu_mask.end(), p.begin(), p.end());
        } else if (arg == "--cpu-strict") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<bool>(argv[i], split_delim);
            params.cpu_strict.insert(params.cpu_strict.end(), p.begin(), p.end());
        } else if (arg == "--poll") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.poll.insert(params.poll.end(), p.begin(), p.end());
        } else if (arg == "-ngl" || arg == "--n-gpu-layers") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<int>(argv[i], split_delim);
            params.n_gpu_layers.insert(params.n_gpu_layers.end(), p.begin(), p.end());
        } else if (llama_supports_rpc() && (arg == "-rpc" || arg == "--rpc")) {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            params.rpc_servers.push_back(argv[i]);
        } else if (arg == "-sm" || arg == "--split-mode") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<std::string>(argv[i], split_delim);
            std::vector<llama_split_mode> modes;
            for (const auto & m : p) {
                llama_split_mode mode;
                if (m == "none") {
                    mode = LLAMA_SPLIT_MODE_NONE;
                } else if (m == "layer") {
                    mode = LLAMA_SPLIT_MODE_LAYER;
                } else if (m == "row") {
                    mode = LLAMA_SPLIT_MODE_ROW;
                } else {
                    invalid_param = true;
                    break;
                }
                modes.push_back(mode);
            }
            if (invalid_param) {
                break;
            }
            params.split_mode.insert(params.split_mode.end(), modes.begin(), modes.end());
        } else if (arg == "-mg" || arg == "--main-gpu") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            params.main_gpu = string_split<int>(argv[i], split_delim);
        } else if (arg == "-nkvo" || arg == "--no-kv-offload") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<bool>(argv[i], split_delim);
            params.no_kv_offload.insert(params.no_kv_offload.end(), p.begin(), p.end());
        } else if (arg == "--numa") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            } else {
                std::string value(argv[i]);
                /**/ if (value == "distribute" || value == "") {
                    params.numa = GGML_NUMA_STRATEGY_DISTRIBUTE;
                } else if (value == "isolate") {
                    params.numa = GGML_NUMA_STRATEGY_ISOLATE;
                } else if (value == "numactl") {
                    params.numa = GGML_NUMA_STRATEGY_NUMACTL;
                } else {
                    invalid_param = true;
                    break;
                }
            }
        } else if (arg == "-fa" || arg == "--flash-attn") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<bool>(argv[i], split_delim);
            params.flash_attn.insert(params.flash_attn.end(), p.begin(), p.end());
        } else if (arg == "-mmp" || arg == "--mmap") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<bool>(argv[i], split_delim);
            params.use_mmap.insert(params.use_mmap.end(), p.begin(), p.end());
        } else if (arg == "-embd" || arg == "--embeddings") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            auto p = string_split<bool>(argv[i], split_delim);
            params.embeddings.insert(params.embeddings.end(), p.begin(), p.end());
        } else if (arg == "-ts" || arg == "--tensor-split") {
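            // per-device split proportions, e.g. "3/1" or "3;1"; entries beyond
            // those given default to 0 (note the inner loop below shadows the
            // outer argv index i)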
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            for (auto ts : string_split<std::string>(argv[i], split_delim)) {
                // split string by ; and /
                const std::regex           regex{ R"([;/]+)" };
                std::sregex_token_iterator it{ ts.begin(), ts.end(), regex, -1 };
                std::vector<std::string>   split_arg{ it, {} };
                GGML_ASSERT(split_arg.size() <= llama_max_devices());

                std::vector<float> tensor_split(llama_max_devices());
                for (size_t i = 0; i < llama_max_devices(); ++i) {
                    if (i < split_arg.size()) {
                        tensor_split[i] = std::stof(split_arg[i]);
                    } else {
                        tensor_split[i] = 0.0f;
                    }
                }
                params.tensor_split.push_back(tensor_split);
            }
        } else if (arg == "-r" || arg == "--repetitions") {
            if (++i >= argc) {
                invalid_param = true;
                break;
            }
            params.reps = std::stoi(argv[i]);
} else if (arg == "--prio") {
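// scheduling priority to apply while running the tests (ggml_sched_priority)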
|
|
|
|
if (++i >= argc) {
|
|
|
|
invalid_param = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
params.prio = (enum ggml_sched_priority) std::stoi(argv[i]);
|
|
|
|
} else if (arg == "--delay") {
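// cool-off in seconds before each test, for thermally limited devices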
|
|
|
|
if (++i >= argc) {
|
|
|
|
invalid_param = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
params.delay = std::stoi(argv[i]);
|
2023-08-18 10:44:58 +00:00
|
|
|
} else if (arg == "-o" || arg == "--output") {
|
|
|
|
if (++i >= argc) {
|
|
|
|
invalid_param = true;
|
|
|
|
break;
|
|
|
|
}
|
2024-06-04 12:32:42 +00:00
|
|
|
invalid_param = !output_format_from_str(argv[i], params.output_format);
|
|
|
|
} else if (arg == "-oe" || arg == "--output-err") {
|
|
|
|
if (++i >= argc) {
|
2023-08-18 10:44:58 +00:00
|
|
|
invalid_param = true;
|
|
|
|
break;
|
|
|
|
}
|
2024-06-04 12:32:42 +00:00
|
|
|
invalid_param = !output_format_from_str(argv[i], params.output_format_stderr);
|
2023-08-18 10:44:58 +00:00
|
|
|
} else if (arg == "-v" || arg == "--verbose") {
|
|
|
|
params.verbose = true;
|
2024-09-06 21:03:01 +00:00
|
|
|
} else if (arg == "--progress") {
|
|
|
|
params.progress = true;
|
2023-08-18 10:44:58 +00:00
|
|
|
} else {
|
|
|
|
invalid_param = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (invalid_param) {
|
|
|
|
fprintf(stderr, "error: invalid parameter for argument: %s\n", arg.c_str());
|
|
|
|
print_usage(argc, argv);
|
|
|
|
exit(1);
|
|
|
|
}
|
|
|
|
|
|
|
|
// set defaults
|
2024-11-20 11:57:53 +00:00
|
|
|
if (params.model.empty()) {
|
|
|
|
params.model = cmd_params_defaults.model;
|
|
|
|
}
|
|
|
|
if (params.n_prompt.empty()) {
|
|
|
|
params.n_prompt = cmd_params_defaults.n_prompt;
|
|
|
|
}
|
|
|
|
if (params.n_gen.empty()) {
|
|
|
|
params.n_gen = cmd_params_defaults.n_gen;
|
|
|
|
}
|
|
|
|
if (params.n_pg.empty()) {
|
|
|
|
params.n_pg = cmd_params_defaults.n_pg;
|
|
|
|
}
|
|
|
|
if (params.n_batch.empty()) {
|
|
|
|
params.n_batch = cmd_params_defaults.n_batch;
|
|
|
|
}
|
|
|
|
if (params.n_ubatch.empty()) {
|
|
|
|
params.n_ubatch = cmd_params_defaults.n_ubatch;
|
|
|
|
}
|
|
|
|
if (params.type_k.empty()) {
|
|
|
|
params.type_k = cmd_params_defaults.type_k;
|
|
|
|
}
|
|
|
|
if (params.type_v.empty()) {
|
|
|
|
params.type_v = cmd_params_defaults.type_v;
|
|
|
|
}
|
|
|
|
if (params.n_gpu_layers.empty()) {
|
|
|
|
params.n_gpu_layers = cmd_params_defaults.n_gpu_layers;
|
|
|
|
}
|
|
|
|
if (params.rpc_servers.empty()) {
|
|
|
|
params.rpc_servers = cmd_params_defaults.rpc_servers;
|
|
|
|
}
|
|
|
|
if (params.split_mode.empty()) {
|
|
|
|
params.split_mode = cmd_params_defaults.split_mode;
|
|
|
|
}
|
|
|
|
if (params.main_gpu.empty()) {
|
|
|
|
params.main_gpu = cmd_params_defaults.main_gpu;
|
|
|
|
}
|
|
|
|
if (params.no_kv_offload.empty()) {
|
|
|
|
params.no_kv_offload = cmd_params_defaults.no_kv_offload;
|
|
|
|
}
|
|
|
|
if (params.flash_attn.empty()) {
|
|
|
|
params.flash_attn = cmd_params_defaults.flash_attn;
|
|
|
|
}
|
|
|
|
if (params.tensor_split.empty()) {
|
|
|
|
params.tensor_split = cmd_params_defaults.tensor_split;
|
|
|
|
}
|
|
|
|
if (params.use_mmap.empty()) {
|
|
|
|
params.use_mmap = cmd_params_defaults.use_mmap;
|
|
|
|
}
|
|
|
|
if (params.embeddings.empty()) {
|
|
|
|
params.embeddings = cmd_params_defaults.embeddings;
|
|
|
|
}
|
|
|
|
if (params.n_threads.empty()) {
|
|
|
|
params.n_threads = cmd_params_defaults.n_threads;
|
|
|
|
}
|
|
|
|
if (params.cpu_mask.empty()) {
|
|
|
|
params.cpu_mask = cmd_params_defaults.cpu_mask;
|
|
|
|
}
|
|
|
|
if (params.cpu_strict.empty()) {
|
|
|
|
params.cpu_strict = cmd_params_defaults.cpu_strict;
|
|
|
|
}
|
|
|
|
if (params.poll.empty()) {
|
|
|
|
params.poll = cmd_params_defaults.poll;
|
|
|
|
}
|
2023-08-18 10:44:58 +00:00
|
|
|
|
|
|
|
return params;
|
|
|
|
}
|
|
|
|
|
|
|
|
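// a single test configuration: one concrete combination of the cmd_params vectors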
struct cmd_params_instance {
|
2024-11-20 11:57:53 +00:00
|
|
|
std::string model;
|
|
|
|
int n_prompt;
|
|
|
|
int n_gen;
|
|
|
|
int n_batch;
|
|
|
|
int n_ubatch;
|
|
|
|
ggml_type type_k;
|
|
|
|
ggml_type type_v;
|
|
|
|
int n_threads;
|
|
|
|
std::string cpu_mask;
|
|
|
|
bool cpu_strict;
|
|
|
|
int poll;
|
|
|
|
int n_gpu_layers;
|
|
|
|
std::string rpc_servers;
|
|
|
|
llama_split_mode split_mode;
|
|
|
|
int main_gpu;
|
|
|
|
bool no_kv_offload;
|
|
|
|
bool flash_attn;
|
2024-01-31 15:30:17 +00:00
|
|
|
std::vector<float> tensor_split;
|
2024-11-20 11:57:53 +00:00
|
|
|
bool use_mmap;
|
|
|
|
bool embeddings;
|
2023-08-18 10:44:58 +00:00
|
|
|
|
2023-09-28 19:42:38 +00:00
|
|
|
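// model loading parameters for this test instance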
llama_model_params to_llama_mparams() const {
|
|
|
|
llama_model_params mparams = llama_model_default_params();
|
|
|
|
|
|
|
|
mparams.n_gpu_layers = n_gpu_layers;
|
2024-05-29 11:45:44 +00:00
|
|
|
if (!rpc_servers.empty()) {
|
|
|
|
mparams.rpc_servers = rpc_servers.c_str();
|
|
|
|
}
|
2024-11-20 11:57:53 +00:00
|
|
|
mparams.split_mode = split_mode;
|
|
|
|
mparams.main_gpu = main_gpu;
|
2023-09-28 19:42:38 +00:00
|
|
|
mparams.tensor_split = tensor_split.data();
|
2024-11-20 11:57:53 +00:00
|
|
|
mparams.use_mmap = use_mmap;
|
2023-09-28 19:42:38 +00:00
|
|
|
|
|
|
|
return mparams;
|
|
|
|
}
|
|
|
|
|
|
|
|
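// instances with equal model parameters can reuse the same loaded model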
bool equal_mparams(const cmd_params_instance & other) const {
|
2024-11-20 11:57:53 +00:00
|
|
|
return model == other.model && n_gpu_layers == other.n_gpu_layers && rpc_servers == other.rpc_servers &&
|
|
|
|
split_mode == other.split_mode && main_gpu == other.main_gpu && use_mmap == other.use_mmap &&
|
2023-09-28 19:42:38 +00:00
|
|
|
tensor_split == other.tensor_split;
|
|
|
|
}
|
|
|
|
|
|
|
|
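// context parameters for this test instance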
llama_context_params to_llama_cparams() const {
|
|
|
|
llama_context_params cparams = llama_context_default_params();
|
2023-08-18 10:44:58 +00:00
|
|
|
|
2024-11-20 11:57:53 +00:00
|
|
|
cparams.n_ctx = n_prompt + n_gen;
|
|
|
|
cparams.n_batch = n_batch;
|
|
|
|
cparams.n_ubatch = n_ubatch;
|
|
|
|
cparams.type_k = type_k;
|
|
|
|
cparams.type_v = type_v;
|
2024-01-07 16:59:01 +00:00
|
|
|
cparams.offload_kqv = !no_kv_offload;
|
2024-11-20 11:57:53 +00:00
|
|
|
cparams.flash_attn = flash_attn;
|
|
|
|
cparams.embeddings = embeddings;
|
2023-09-28 19:42:38 +00:00
|
|
|
|
|
|
|
return cparams;
|
2023-08-18 10:44:58 +00:00
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
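// expand the cross product of all parameter vectors into individual test instances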
static std::vector<cmd_params_instance> get_cmd_params_instances(const cmd_params & params) {
|
|
|
|
std::vector<cmd_params_instance> instances;
|
|
|
|
|
2023-09-28 19:42:38 +00:00
|
|
|
// this ordering minimizes the number of times that each model needs to be reloaded
|
2024-11-20 11:57:53 +00:00
|
|
|
// clang-format off
|
2023-09-28 19:42:38 +00:00
|
|
|
for (const auto & m : params.model)
|
|
|
|
for (const auto & nl : params.n_gpu_layers)
|
2024-05-29 11:45:44 +00:00
|
|
|
for (const auto & rpc : params.rpc_servers)
|
2024-01-12 19:07:38 +00:00
|
|
|
for (const auto & sm : params.split_mode)
|
2023-09-28 19:42:38 +00:00
|
|
|
for (const auto & mg : params.main_gpu)
|
|
|
|
for (const auto & ts : params.tensor_split)
|
2024-02-01 19:48:53 +00:00
|
|
|
for (const auto & mmp : params.use_mmap)
|
2024-03-07 14:32:38 +00:00
|
|
|
for (const auto & embd : params.embeddings)
|
2023-09-28 19:42:38 +00:00
|
|
|
for (const auto & nb : params.n_batch)
|
2024-03-13 17:54:21 +00:00
|
|
|
for (const auto & nub : params.n_ubatch)
|
2023-12-07 11:03:17 +00:00
|
|
|
for (const auto & tk : params.type_k)
|
|
|
|
for (const auto & tv : params.type_v)
|
2024-01-07 16:59:01 +00:00
|
|
|
for (const auto & nkvo : params.no_kv_offload)
|
ggml : add Flash Attention (#5021)
* ggml : add ggml_flash_attn_ext API
* ggml : fix GQA support in ggml_flash_attn_ext
* ggml : online attention (CPU)
* metal : initial implementation
* metal : f16 precision
* metal : reduce branches
* metal : specialize for head size
* wip : 8 rows per simd group
* wip : 4 rows per simd group
* wip : template for rows per warp
* metal : parallelize across KV size
* metal : parallel reduce across heads
* metal : efficient flash_attn_f16 implementation
* metal : avoid redundant loads of the attention
* metal : scale and mask in matrix form
* metal : fix comment
* llama : avoid ggml_cast, use F32 query
* metal : add parallel reduce version (disabled)
* metal : move output into local memory + optimize
- the result from each simdgroup now stays in the registers
- significantly reduced SRAM usage
- more efficient skipping of -INF blocks
- avoid simdgroup barrier in hot loop
- add comments
* metal : add tests, fix scaling, support C > 32
* metal : improve precision
* ggml : fix f16 mad
* metal : minor
* metal : support Q > 8
* tests : add ATTN tests
* metal : disable buffer allocation logs
* tests : more
* metal : faster inner loop for C == 32
* metal : fix array initialization
* tests : ifdef
* ggml : switch to padded F16 mask for ggml_soft_max, ggml_flash_attn_ext
* ggml : fix ggml_soft_max mask requirement
* cuda : fix soft_max to use correct mask size
* cuda : add flash_attn kernel (wip)
* metal : optimize softmax for C > 32
* metal : optimize softmax
* tests : minor fix
* cuda : avoid zeroing fragments
* tests : update dims
* cuda : fix __hisinf() result check
* cuda : avoid warp_reduce for smax
* cuda : use int instead of int64_t
Noticeably improves performance (thanks to Johannes)
* cuda : make loops use the same loop values
Thanks Johannes again for the tip
* cuda : unroll some of the loops
* cuda : avoid __hisinf branches
* cuda : use half2 in softmax
* cuda : switch to 1 warp for bs > 16
* cuda : speed-up reduce part of the kernel
* cuda : unroll Q*K^T loop
* cuda : fix -INF block check
* cuda : simplify softmax
* cuda : fix matrix names
* cuda : minor
* llama : adapt to F16 KQ_pos
* llama : adapt new models to F16 KQ_mask
* ggml : fix F16 store (ARM NEON)
* llama : fix type of KQ_mask and KQ_pos
* ggml : fix CPU soft_max
* tests : add hs=256
* cuda : fix build
* metal : improve perf via smaller int registers
* cuda : adapt soft_max to F16 mask and pos
* CUDA: faster FlashAttention, kernel for bs == 1
* 16 cols for Phi-2
* no vec for hs, no hs==256 ncols==32 for Volta
* adjust kernel selection logic
* 4 warps, 256 stride for all D
* no ncols == 64
* Multiple parallel blocks for batch size 1
* fix compile warnings
* fix excessive KQ_b loads
* fix cmake build
* fix KV cache padding, NaN from INFINITY (#6438)
* llama : flash_attn cparam + fix defrag
* server: support flash_attn param
* server: bench: enable flash_attn param
* CUDA: refactor host code, dyn. par. blocks
* fix flash_attn_vec_f16 race condition
* flush softmax exp below threshold to 0
* store temp KQ in registers
* Calculate KQ as FP32 if KQV has GGML_PREC_F32
* Add __hgt2_mask implementation for CUDA 11
* fix KQ FP32 precision for parallel_blocks > 1
* llama-bench : add -fa,--flash-attn arg
* metal : add BS=1 kernel for flash attention (#6508)
* metal : add BS=1 kernel for flash attention (wip)
* metal : support more than 1 warps
* metal : opts
* metal : opt
* metal : switch to parallel reduce
* metal : reduce registers
* metal : simplify
* metal : initial FA vec kernel
* metal : use F32 attention accumulators
* batched-bench : add fattn arg
* llama : simplify llama_build_kv_store
ggml-ci
* llama : adapt build_olmo to changes
* ggml : fix arm fp16 store on windows
* metal : clean-up
* metal : clean-up kernel code
* metal : minor
* tests : remove benchmarks
ggml-ci
* ggml : fix avx512 const correctness
ggml-ci
* ggml : fix soft_max with bias on CPU
ggml-ci
* common : print --flash-attn in help
* ggml : fix num dimensions in ggml_flash_attn_ext
* llama : force disable flash attention for incompatible models
* ggml : ggml_soft_max support F16/F32 mask/pos
ggml-ci
* cuda : uint -> uint32_t
* cuda : "constexpr dim3" -> "const dim3"
ggml-ci
* cuda : try to fix __hgt2_mask
ggml-ci
* ggml : add TODO's for F16/F32 mask/pos support in other backends
* llama : replace bool need_kq_pos with use_alibi
* llama : prep ALiBi support for BERT models
ggml-ci
* llama : fix n_batch requirements
ggml-ci
* cont
* server : add help for --flash-attn arg
* llama : disable FA for AMD
* tests : remove TMP_ATTN_BENCH
ggml-ci
* llama : support save/load state with FA enabled
ggml-ci
* ci : add CUDA save-load-state tests
ggml-ci
* llama : llama_kv_cache_clear zeroes data + fix save-load seq
ggml-ci
* llama : fix copy-paste errors, add TODO
* llama : disallow incompatible states
* llama : update llama_state_get_size after v_trans field
* metal : remove tmp log
* llama : add static reminder for llama_state_get_size
* metal : fix max nsg
ggml-ci
* ci : fix arg order
ggml-ci
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Pierrick HYMBERT <pierrick.hymbert@gmail.com>
2024-04-30 09:16:08 +00:00
|
|
|
for (const auto & fa : params.flash_attn)
|
2024-08-29 23:20:53 +00:00
|
|
|
for (const auto & nt : params.n_threads)
|
|
|
|
for (const auto & cm : params.cpu_mask)
|
|
|
|
for (const auto & cs : params.cpu_strict)
|
|
|
|
for (const auto & pl : params.poll) {
|
2023-09-28 19:42:38 +00:00
|
|
|
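// prompt processing tests (no generation)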
for (const auto & n_prompt : params.n_prompt) {
|
|
|
|
if (n_prompt == 0) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
cmd_params_instance instance = {
|
|
|
|
/* .model = */ m,
|
|
|
|
/* .n_prompt = */ n_prompt,
|
|
|
|
/* .n_gen = */ 0,
|
|
|
|
/* .n_batch = */ nb,
|
2024-03-13 17:54:21 +00:00
|
|
|
/* .n_ubatch = */ nub,
|
2023-12-07 11:03:17 +00:00
|
|
|
/* .type_k = */ tk,
|
|
|
|
/* .type_v = */ tv,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .n_threads = */ nt,
|
2024-08-29 23:20:53 +00:00
|
|
|
/* .cpu_mask = */ cm,
|
|
|
|
/* .cpu_strict = */ cs,
|
|
|
|
/* .poll = */ pl,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .n_gpu_layers = */ nl,
|
2024-05-29 11:45:44 +00:00
|
|
|
/* .rpc_servers = */ rpc,
|
2024-01-12 19:07:38 +00:00
|
|
|
/* .split_mode = */ sm,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .main_gpu = */ mg,
|
2024-01-07 16:59:01 +00:00
|
|
|
/* .no_kv_offload= */ nkvo,
|
2024-04-30 09:16:08 +00:00
|
|
|
/* .flash_attn = */ fa,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .tensor_split = */ ts,
|
2024-02-01 19:48:53 +00:00
|
|
|
/* .use_mmap = */ mmp,
|
2024-03-07 14:32:38 +00:00
|
|
|
/* .embeddings = */ embd,
|
2023-09-28 19:42:38 +00:00
|
|
|
};
|
|
|
|
instances.push_back(instance);
|
|
|
|
}
|
|
|
|
|
|
|
|
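// token generation tests (no prompt)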
for (const auto & n_gen : params.n_gen) {
|
|
|
|
if (n_gen == 0) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
cmd_params_instance instance = {
|
|
|
|
/* .model = */ m,
|
|
|
|
/* .n_prompt = */ 0,
|
|
|
|
/* .n_gen = */ n_gen,
|
|
|
|
/* .n_batch = */ nb,
|
2024-03-13 17:54:21 +00:00
|
|
|
/* .n_ubatch = */ nub,
|
2023-12-07 11:03:17 +00:00
|
|
|
/* .type_k = */ tk,
|
|
|
|
/* .type_v = */ tv,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .n_threads = */ nt,
|
2024-08-29 23:20:53 +00:00
|
|
|
/* .cpu_mask = */ cm,
|
|
|
|
/* .cpu_strict = */ cs,
|
|
|
|
/* .poll = */ pl,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .n_gpu_layers = */ nl,
|
2024-05-29 11:45:44 +00:00
|
|
|
/* .rpc_servers = */ rpc,
|
2024-01-12 19:07:38 +00:00
|
|
|
/* .split_mode = */ sm,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .main_gpu = */ mg,
|
2024-01-07 16:59:01 +00:00
|
|
|
/* .no_kv_offload= */ nkvo,
|
2024-04-30 09:16:08 +00:00
|
|
|
/* .flash_attn = */ fa,
|
2023-09-28 19:42:38 +00:00
|
|
|
/* .tensor_split = */ ts,
|
2024-02-01 19:48:53 +00:00
|
|
|
/* .use_mmap = */ mmp,
|
2024-03-07 14:32:38 +00:00
|
|
|
/* .embeddings = */ embd,
|
2023-09-28 19:42:38 +00:00
|
|
|
};
|
|
|
|
instances.push_back(instance);
|
|
|
|
}
|
2024-05-10 16:03:54 +00:00
|
|
|
|
|
|
|
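// combined prompt processing + token generation tests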
for (const auto & n_pg : params.n_pg) {
|
|
|
|
if (n_pg.first == 0 && n_pg.second == 0) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
cmd_params_instance instance = {
|
|
|
|
/* .model = */ m,
|
|
|
|
/* .n_prompt = */ n_pg.first,
|
|
|
|
/* .n_gen = */ n_pg.second,
|
|
|
|
/* .n_batch = */ nb,
|
|
|
|
/* .n_ubatch = */ nub,
|
|
|
|
/* .type_k = */ tk,
|
|
|
|
/* .type_v = */ tv,
|
|
|
|
/* .n_threads = */ nt,
|
2024-08-29 23:20:53 +00:00
|
|
|
/* .cpu_mask = */ cm,
|
|
|
|
/* .cpu_strict = */ cs,
|
|
|
|
/* .poll = */ pl,
|
2024-05-10 16:03:54 +00:00
|
|
|
/* .n_gpu_layers = */ nl,
|
2024-05-29 11:45:44 +00:00
|
|
|
/* .rpc_servers = */ rpc,
|
2024-05-10 16:03:54 +00:00
|
|
|
/* .split_mode = */ sm,
|
|
|
|
/* .main_gpu = */ mg,
|
|
|
|
/* .no_kv_offload= */ nkvo,
|
|
|
|
/* .flash_attn = */ fa,
|
|
|
|
/* .tensor_split = */ ts,
|
|
|
|
/* .use_mmap = */ mmp,
|
|
|
|
/* .embeddings = */ embd,
|
|
|
|
};
|
|
|
|
instances.push_back(instance);
|
|
|
|
}
|
2023-09-28 19:42:38 +00:00
|
|
|
}
|
2024-11-20 11:57:53 +00:00
|
|
|
// clang-format on
|
2023-08-18 10:44:58 +00:00
|
|
|
|
|
|
|
return instances;
|
|
|
|
}
|
|
|
|
|
|
|
|
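// a single benchmark result together with the configuration that produced it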
struct test {
|
|
|
|
static const std::string build_commit;
|
2024-11-20 11:57:53 +00:00
|
|
|
static const int build_number;
|
2023-08-18 10:44:58 +00:00
|
|
|
static const std::string cpu_info;
|
|
|
|
static const std::string gpu_info;
|
2024-11-20 11:57:53 +00:00
|
|
|
std::string model_filename;
|
|
|
|
std::string model_type;
|
|
|
|
uint64_t model_size;
|
|
|
|
uint64_t model_n_params;
|
|
|
|
int n_batch;
|
|
|
|
int n_ubatch;
|
|
|
|
int n_threads;
|
|
|
|
std::string cpu_mask;
|
|
|
|
bool cpu_strict;
|
|
|
|
int poll;
|
|
|
|
ggml_type type_k;
|
|
|
|
ggml_type type_v;
|
|
|
|
int n_gpu_layers;
|
|
|
|
llama_split_mode split_mode;
|
|
|
|
int main_gpu;
|
|
|
|
bool no_kv_offload;
|
|
|
|
bool flash_attn;
|
|
|
|
std::vector<float> tensor_split;
|
|
|
|
bool use_mmap;
|
|
|
|
bool embeddings;
|
|
|
|
int n_prompt;
|
|
|
|
int n_gen;
|
|
|
|
std::string test_time;
|
|
|
|
std::vector<uint64_t> samples_ns;
|
2023-08-18 10:44:58 +00:00
|
|
|
|
|
|
|
test(const cmd_params_instance & inst, const llama_model * lmodel, const llama_context * ctx) {
|
|
|
|
model_filename = inst.model;
|
|
|
|
char buf[128];
|
2023-08-25 13:16:19 +00:00
|
|
|
llama_model_desc(lmodel, buf, sizeof(buf));
|
2024-11-20 11:57:53 +00:00
|
|
|
model_type = buf;
|
|
|
|
model_size = llama_model_size(lmodel);
|
2023-08-25 13:16:19 +00:00
|
|
|
model_n_params = llama_model_n_params(lmodel);
|
2024-11-20 11:57:53 +00:00
|
|
|
n_batch = inst.n_batch;
|
|
|
|
n_ubatch = inst.n_ubatch;
|
|
|
|
n_threads = inst.n_threads;
|
|
|
|
cpu_mask = inst.cpu_mask;
|
|
|
|
cpu_strict = inst.cpu_strict;
|
|
|
|
poll = inst.poll;
|
|
|
|
type_k = inst.type_k;
|
|
|
|
type_v = inst.type_v;
|
|
|
|
n_gpu_layers = inst.n_gpu_layers;
|
|
|
|
split_mode = inst.split_mode;
|
|
|
|
main_gpu = inst.main_gpu;
|
|
|
|
no_kv_offload = inst.no_kv_offload;
|
|
|
|
flash_attn = inst.flash_attn;
|
|
|
|
tensor_split = inst.tensor_split;
|
|
|
|
use_mmap = inst.use_mmap;
|
|
|
|
embeddings = inst.embeddings;
|
|
|
|
n_prompt = inst.n_prompt;
|
|
|
|
n_gen = inst.n_gen;
|
2023-08-18 10:44:58 +00:00
|
|
|
// RFC 3339 date-time format
|
2024-11-20 11:57:53 +00:00
|
|
|
time_t t = time(NULL);
|
2023-08-18 10:44:58 +00:00
|
|
|
std::strftime(buf, sizeof(buf), "%FT%TZ", gmtime(&t));
|
|
|
|
test_time = buf;
|
|
|
|
|
|
|
|
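// ctx is currently unused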
(void) ctx;
|
|
|
|
}
|
|
|
|
|
2024-11-20 11:57:53 +00:00
|
|
|
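// timing statistics over the recorded samples (ns per run)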
uint64_t avg_ns() const { return ::avg(samples_ns); }
|
2023-08-18 10:44:58 +00:00
|
|
|
|
2024-11-20 11:57:53 +00:00
|
|
|
uint64_t stdev_ns() const { return ::stdev(samples_ns); }
|
2023-08-18 10:44:58 +00:00
|
|
|
|
|
|
|
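// per-sample throughput in tokens per second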
std::vector<double> get_ts() const {
|
2024-11-20 11:57:53 +00:00
|
|
|
int n_tokens = n_prompt + n_gen;
|
2023-08-18 10:44:58 +00:00
|
|
|
std::vector<double> ts;
|
2024-11-20 11:57:53 +00:00
|
|
|
std::transform(samples_ns.begin(), samples_ns.end(), std::back_inserter(ts),
|
|
|
|
[n_tokens](uint64_t t) { return 1e9 * n_tokens / t; });
|
2023-08-18 10:44:58 +00:00
|
|
|
return ts;
|
|
|
|
}
|
|
|
|
|
2024-11-20 11:57:53 +00:00
|
|
|
double avg_ts() const { return ::avg(get_ts()); }
|
2023-08-18 10:44:58 +00:00
|
|
|
|
2024-11-20 11:57:53 +00:00
|
|
|
double stdev_ts() const { return ::stdev(get_ts()); }
|
2023-08-18 10:44:58 +00:00
|
|
|
|
|
|
|
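// comma-separated list of the registered non-CPU backends, or "CPU" if there are none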
static std::string get_backend() {
|
2024-10-30 01:01:23 +00:00
|
|
|
std::vector<std::string> backends;
|
|
|
|
for (size_t i = 0; i < ggml_backend_reg_count(); i++) {
|
2024-11-20 11:57:53 +00:00
|
|
|
auto * reg = ggml_backend_reg_get(i);
|
2024-10-30 01:01:23 +00:00
|
|
|
std::string name = ggml_backend_reg_name(reg);
|
|
|
|
if (name != "CPU") {
|
|
|
|
backends.push_back(ggml_backend_reg_name(reg));
|
|
|
|
}
|
2023-08-18 10:44:58 +00:00
|
|
|
}
|
2024-10-30 01:01:23 +00:00
|
|
|
return backends.empty() ? "CPU" : join(backends, ",");
|
2023-08-18 10:44:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
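// output column names; the order must match get_values()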
    static const std::vector<std::string> & get_fields() {
        static const std::vector<std::string> fields = {
            "build_commit", "build_number", "cpu_info",       "gpu_info",   "backends",     "model_filename",
            "model_type",   "model_size",   "model_n_params", "n_batch",    "n_ubatch",     "n_threads",
            "cpu_mask",     "cpu_strict",   "poll",           "type_k",     "type_v",       "n_gpu_layers",
            "split_mode",   "main_gpu",     "no_kv_offload",  "flash_attn", "tensor_split", "use_mmap",
            "embeddings",   "n_prompt",     "n_gen",          "test_time",  "avg_ns",       "stddev_ns",
            "avg_ts",       "stddev_ts",
        };
        return fields;
    }
    enum field_type { STRING, BOOL, INT, FLOAT };
    static field_type get_field_type(const std::string & field) {
        if (field == "build_number" || field == "n_batch" || field == "n_ubatch" || field == "n_threads" ||
            field == "poll" || field == "model_size" || field == "model_n_params" || field == "n_gpu_layers" ||
            field == "main_gpu" || field == "n_prompt" || field == "n_gen" || field == "avg_ns" ||
            field == "stddev_ns") {
            return INT;
        }
        if (field == "f16_kv" || field == "no_kv_offload" || field == "cpu_strict" || field == "flash_attn" ||
            field == "use_mmap" || field == "embeddings") {
            return BOOL;
        }
        if (field == "avg_ts" || field == "stddev_ts") {
            return FLOAT;
        }
        return STRING;
    }
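    // serialize the field values in the same order as get_fields();
    // tensor_split is rendered as "/"-separated weights up to the last non-zero entry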
    std::vector<std::string> get_values() const {
        std::string tensor_split_str;
        int max_nonzero = 0;
        for (size_t i = 0; i < llama_max_devices(); i++) {
            if (tensor_split[i] > 0) {
                max_nonzero = i;
            }
        }
        for (int i = 0; i <= max_nonzero; i++) {
            char buf[32];
            snprintf(buf, sizeof(buf), "%.2f", tensor_split[i]);
            tensor_split_str += buf;
            if (i < max_nonzero) {
                tensor_split_str += "/";
            }
        }
        std::vector<std::string> values = { build_commit,
                                            std::to_string(build_number),
                                            cpu_info,
                                            gpu_info,
                                            get_backend(),
                                            model_filename,
                                            model_type,
                                            std::to_string(model_size),
                                            std::to_string(model_n_params),
                                            std::to_string(n_batch),
                                            std::to_string(n_ubatch),
                                            std::to_string(n_threads),
                                            cpu_mask,
                                            std::to_string(cpu_strict),
                                            std::to_string(poll),
                                            ggml_type_name(type_k),
                                            ggml_type_name(type_v),
                                            std::to_string(n_gpu_layers),
                                            split_mode_str(split_mode),
                                            std::to_string(main_gpu),
                                            std::to_string(no_kv_offload),
                                            std::to_string(flash_attn),
                                            tensor_split_str,
                                            std::to_string(use_mmap),
                                            std::to_string(embeddings),
                                            std::to_string(n_prompt),
                                            std::to_string(n_gen),
                                            test_time,
                                            std::to_string(avg_ns()),
                                            std::to_string(stdev_ns()),
                                            std::to_string(avg_ts()),
                                            std::to_string(stdev_ts()) };
        return values;
    }
    std::map<std::string, std::string> get_map() const {
        std::map<std::string, std::string> map;
        auto fields = get_fields();
        auto values = get_values();
        std::transform(fields.begin(), fields.end(), values.begin(), std::inserter(map, map.end()),
                       std::make_pair<const std::string &, const std::string &>);
        return map;
    }
};
const std::string test::build_commit = LLAMA_COMMIT;
const int         test::build_number = LLAMA_BUILD_NUMBER;
const std::string test::cpu_info     = get_cpu_info();
const std::string test::gpu_info     = get_gpu_info();
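// base class of the output formatters; each printer writes to fout (stdout or stderr)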
struct printer {
    virtual ~printer() {}

    FILE * fout;

    virtual void print_header(const cmd_params & params) { (void) params; }

    virtual void print_test(const test & t) = 0;

    virtual void print_footer() {}
};
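// CSV output: every field is double-quoted and embedded quotes are doubled (RFC 4180 style)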
struct csv_printer : public printer {
    static std::string escape_csv(const std::string & field) {
        std::string escaped = "\"";
        for (auto c : field) {
            if (c == '"') {
                escaped += "\"";
            }
            escaped += c;
        }
        escaped += "\"";
        return escaped;
    }

    void print_header(const cmd_params & params) override {
        std::vector<std::string> fields = test::get_fields();
        fprintf(fout, "%s\n", join(fields, ",").c_str());
        (void) params;
    }

    void print_test(const test & t) override {
        std::vector<std::string> values = t.get_values();
        std::transform(values.begin(), values.end(), values.begin(), escape_csv);
        fprintf(fout, "%s\n", join(values, ",").c_str());
    }
};
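// escape quotes, backslashes and control characters (as \u00XX) for JSON output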
static std::string escape_json(const std::string & value) {
    std::string escaped;
    for (auto c : value) {
        if (c == '"') {
            escaped += "\\\"";
        } else if (c == '\\') {
            escaped += "\\\\";
        } else if (c <= 0x1f) {
            char buf[8];
            snprintf(buf, sizeof(buf), "\\u%04x", c);
            escaped += buf;
        } else {
            escaped += c;
        }
    }
    return escaped;
}
static std::string format_json_value(const std::string & field, const std::string & value) {
    switch (test::get_field_type(field)) {
        case test::STRING:
            return "\"" + escape_json(value) + "\"";
        case test::BOOL:
            return value == "0" ? "false" : "true";
        default:
            return value;
    }
}
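// JSON output: a single array of objects; 'first' tracks whether a separating comma is needed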
struct json_printer : public printer {
    bool first = true;

    void print_header(const cmd_params & params) override {
        fprintf(fout, "[\n");
        (void) params;
    }

    void print_fields(const std::vector<std::string> & fields, const std::vector<std::string> & values) {
        assert(fields.size() == values.size());
        for (size_t i = 0; i < fields.size(); i++) {
            fprintf(fout, "    \"%s\": %s,\n", fields.at(i).c_str(),
                    format_json_value(fields.at(i), values.at(i)).c_str());
        }
    }

    void print_test(const test & t) override {
        if (first) {
            first = false;
        } else {
            fprintf(fout, ",\n");
        }
        fprintf(fout, "  {\n");
        print_fields(test::get_fields(), t.get_values());
        fprintf(fout, "    \"samples_ns\": [ %s ],\n", join(t.samples_ns, ", ").c_str());
        fprintf(fout, "    \"samples_ts\": [ %s ]\n", join(t.get_ts(), ", ").c_str());
        fprintf(fout, "  }");
        fflush(fout);
    }

    void print_footer() override { fprintf(fout, "\n]\n"); }
};
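// JSONL output: one self-contained JSON object per line, flushed after each test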
struct jsonl_printer : public printer {
    void print_fields(const std::vector<std::string> & fields, const std::vector<std::string> & values) {
        assert(fields.size() == values.size());
        for (size_t i = 0; i < fields.size(); i++) {
            fprintf(fout, "\"%s\": %s, ", fields.at(i).c_str(), format_json_value(fields.at(i), values.at(i)).c_str());
        }
    }

    void print_test(const test & t) override {
        fprintf(fout, "{");
        print_fields(test::get_fields(), t.get_values());
        fprintf(fout, "\"samples_ns\": [ %s ],", join(t.samples_ns, ", ").c_str());
        fprintf(fout, "\"samples_ts\": [ %s ]", join(t.get_ts(), ", ").c_str());
        fprintf(fout, "}\n");
        fflush(fout);
    }
};
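// markdown table output; get_field_width() returns the printf field width for each column,
// negative meaning left-aligned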
struct markdown_printer : public printer {
    std::vector<std::string> fields;

    static int get_field_width(const std::string & field) {
        if (field == "model") {
            return -30;
        }
        if (field == "t/s") {
            return 20;
        }
if (field == "size" || field == "params") {
|
|
|
|
return 10;
|
|
|
|
}
|
|
|
|
if (field == "n_gpu_layers") {
|
|
|
|
return 3;
|
|
|
|
}
|
2024-06-11 12:45:40 +00:00
|
|
|
if (field == "n_threads") {
|
|
|
|
return 7;
|
|
|
|
}
|
|
|
|
if (field == "n_batch") {
|
|
|
|
return 7;
|
|
|
|
}
|
|
|
|
if (field == "n_ubatch") {
|
|
|
|
return 8;
|
|
|
|
}
|
|
|
|
if (field == "type_k" || field == "type_v") {
|
|
|
|
return 6;
|
|
|
|
}
|
|
|
|
if (field == "split_mode") {
|
|
|
|
return 5;
|
|
|
|
}
|
|
|
|
if (field == "flash_attn") {
|
|
|
|
return 2;
|
|
|
|
}
|
|
|
|
if (field == "use_mmap") {
|
|
|
|
return 4;
|
|
|
|
}
|
2024-05-10 16:03:54 +00:00
|
|
|
if (field == "test") {
|
|
|
|
return 13;
|
|
|
|
}
|
2023-08-25 13:16:19 +00:00
|
|
|
|
2024-11-20 11:57:53 +00:00
|
|
|
int width = std::max((int) field.length(), 10);
|
2023-08-18 10:44:58 +00:00
|
|
|
|
|
|
|
if (test::get_field_type(field) == test::STRING) {
|
|
|
|
return -width;
|
|
|
|
}
|
|
|
|
return width;
|
|
|
|
}
|
|
|
|
|
2023-08-25 13:16:19 +00:00
|
|
|
    static std::string get_field_display_name(const std::string & field) {
        if (field == "n_gpu_layers") {
            return "ngl";
        }
        if (field == "split_mode") {
            return "sm";
        }
        if (field == "n_threads") {
            return "threads";
        }
        if (field == "no_kv_offload") {
            return "nkvo";
        }
if (field == "flash_attn") {
|
|
|
|
return "fa";
|
|
|
|
}
|
2024-02-01 19:48:53 +00:00
|
|
|
if (field == "use_mmap") {
|
|
|
|
return "mmap";
|
|
|
|
}
|
2024-03-07 14:32:38 +00:00
|
|
|
if (field == "embeddings") {
|
|
|
|
return "embd";
|
|
|
|
}
|
2023-08-25 13:16:19 +00:00
|
|
|
if (field == "tensor_split") {
|
|
|
|
return "ts";
|
|
|
|
}
|
|
|
|
return field;
|
|
|
|
}
|
|
|
|
|
2023-08-18 10:44:58 +00:00
|
|
|
    void print_header(const cmd_params & params) override {
        // select fields to print
        fields.emplace_back("model");
        fields.emplace_back("size");
        fields.emplace_back("params");
        fields.emplace_back("backend");
        bool is_cpu_backend = test::get_backend().find("CPU") != std::string::npos ||
                              test::get_backend().find("BLAS") != std::string::npos;
        if (!is_cpu_backend) {
            fields.emplace_back("n_gpu_layers");
        }
        if (params.n_threads.size() > 1 || params.n_threads != cmd_params_defaults.n_threads || is_cpu_backend) {
            fields.emplace_back("n_threads");
        }
        if (params.cpu_mask.size() > 1 || params.cpu_mask != cmd_params_defaults.cpu_mask) {
            fields.emplace_back("cpu_mask");
        }
        if (params.cpu_strict.size() > 1 || params.cpu_strict != cmd_params_defaults.cpu_strict) {
            fields.emplace_back("cpu_strict");
        }
        if (params.poll.size() > 1 || params.poll != cmd_params_defaults.poll) {
            fields.emplace_back("poll");
        }
        if (params.n_batch.size() > 1 || params.n_batch != cmd_params_defaults.n_batch) {
            fields.emplace_back("n_batch");
        }
        if (params.n_ubatch.size() > 1 || params.n_ubatch != cmd_params_defaults.n_ubatch) {
            fields.emplace_back("n_ubatch");
        }
        if (params.type_k.size() > 1 || params.type_k != cmd_params_defaults.type_k) {
            fields.emplace_back("type_k");
        }
        if (params.type_v.size() > 1 || params.type_v != cmd_params_defaults.type_v) {
            fields.emplace_back("type_v");
        }
        if (params.main_gpu.size() > 1 || params.main_gpu != cmd_params_defaults.main_gpu) {
            fields.emplace_back("main_gpu");
        }
        if (params.split_mode.size() > 1 || params.split_mode != cmd_params_defaults.split_mode) {
            fields.emplace_back("split_mode");
        }
        if (params.no_kv_offload.size() > 1 || params.no_kv_offload != cmd_params_defaults.no_kv_offload) {
            fields.emplace_back("no_kv_offload");
        }
        if (params.flash_attn.size() > 1 || params.flash_attn != cmd_params_defaults.flash_attn) {
            fields.emplace_back("flash_attn");
        }
        if (params.tensor_split.size() > 1 || params.tensor_split != cmd_params_defaults.tensor_split) {
            fields.emplace_back("tensor_split");
        }
        if (params.use_mmap.size() > 1 || params.use_mmap != cmd_params_defaults.use_mmap) {
            fields.emplace_back("use_mmap");
        }
        if (params.embeddings.size() > 1 || params.embeddings != cmd_params_defaults.embeddings) {
            fields.emplace_back("embeddings");
        }
        fields.emplace_back("test");
        fields.emplace_back("t/s");

        fprintf(fout, "|");
        for (const auto & field : fields) {
            fprintf(fout, " %*s |", get_field_width(field), get_field_display_name(field).c_str());
        }
        fprintf(fout, "\n");
        fprintf(fout, "|");
        for (const auto & field : fields) {
            int width = get_field_width(field);
            fprintf(fout, " %s%s |", std::string(std::abs(width) - 1, '-').c_str(), width > 0 ? ":" : "-");
        }
        fprintf(fout, "\n");
    }
    void print_test(const test & t) override {
        std::map<std::string, std::string> vmap = t.get_map();

        fprintf(fout, "|");
        for (const auto & field : fields) {
            std::string value;
            char buf[128];
            if (field == "model") {
                value = t.model_type;
            } else if (field == "size") {
                if (t.model_size < 1024 * 1024 * 1024) {
                    snprintf(buf, sizeof(buf), "%.2f MiB", t.model_size / 1024.0 / 1024.0);
                } else {
                    snprintf(buf, sizeof(buf), "%.2f GiB", t.model_size / 1024.0 / 1024.0 / 1024.0);
                }
                value = buf;
            } else if (field == "params") {
                if (t.model_n_params < 1000 * 1000 * 1000) {
                    snprintf(buf, sizeof(buf), "%.2f M", t.model_n_params / 1e6);
                } else {
                    snprintf(buf, sizeof(buf), "%.2f B", t.model_n_params / 1e9);
                }
                value = buf;
            } else if (field == "backend") {
                value = test::get_backend();
            } else if (field == "test") {
                if (t.n_prompt > 0 && t.n_gen == 0) {
                    snprintf(buf, sizeof(buf), "pp%d", t.n_prompt);
                } else if (t.n_gen > 0 && t.n_prompt == 0) {
                    snprintf(buf, sizeof(buf), "tg%d", t.n_gen);
                } else {
                    snprintf(buf, sizeof(buf), "pp%d+tg%d", t.n_prompt, t.n_gen);
                }
                value = buf;
            } else if (field == "t/s") {
                snprintf(buf, sizeof(buf), "%.2f ± %.2f", t.avg_ts(), t.stdev_ts());
                value = buf;
            } else if (vmap.find(field) != vmap.end()) {
                value = vmap.at(field);
            } else {
                assert(false);
                exit(1);
            }

            int width = get_field_width(field);
            if (field == "t/s") {
                // HACK: the utf-8 character is 2 bytes
                width += 1;
            }
            fprintf(fout, " %*s |", width, value.c_str());
        }
        fprintf(fout, "\n");
    }

    void print_footer() override {
        fprintf(fout, "\nbuild: %s (%d)\n", test::build_commit.c_str(), test::build_number);
    }
};
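// SQL output: a CREATE TABLE statement followed by one INSERT per test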
struct sql_printer : public printer {
    static std::string get_sql_field_type(const std::string & field) {
        switch (test::get_field_type(field)) {
            case test::STRING:
                return "TEXT";
            case test::BOOL:
            case test::INT:
                return "INTEGER";
            case test::FLOAT:
                return "REAL";
            default:
                assert(false);
                exit(1);
        }
    }

    void print_header(const cmd_params & params) override {
        std::vector<std::string> fields = test::get_fields();
        fprintf(fout, "CREATE TABLE IF NOT EXISTS test (\n");
        for (size_t i = 0; i < fields.size(); i++) {
            fprintf(fout, "  %s %s%s\n", fields.at(i).c_str(), get_sql_field_type(fields.at(i)).c_str(),
                    i < fields.size() - 1 ? "," : "");
        }
        fprintf(fout, ");\n");
        fprintf(fout, "\n");
        (void) params;
    }

    void print_test(const test & t) override {
        fprintf(fout, "INSERT INTO test (%s) ", join(test::get_fields(), ", ").c_str());
        fprintf(fout, "VALUES (");
        std::vector<std::string> values = t.get_values();
        for (size_t i = 0; i < values.size(); i++) {
            fprintf(fout, "'%s'%s", values.at(i).c_str(), i < values.size() - 1 ? ", " : "");
        }
        fprintf(fout, ");\n");
    }
};
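// benchmark prompt processing: decode n_prompt random tokens in chunks of up to n_batch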
static void test_prompt(llama_context * ctx, int n_prompt, int n_batch, int n_threads) {
    llama_set_n_threads(ctx, n_threads, n_threads);

    const llama_model * model   = llama_get_model(ctx);
    const int32_t       n_vocab = llama_n_vocab(model);

    std::vector<llama_token> tokens(n_batch);

    int n_processed = 0;

    while (n_processed < n_prompt) {
        int n_tokens = std::min(n_prompt - n_processed, n_batch);
        tokens[0] = n_processed == 0 && llama_add_bos_token(model) ? llama_token_bos(model) : std::rand() % n_vocab;
        for (int i = 1; i < n_tokens; i++) {
            tokens[i] = std::rand() % n_vocab;
        }
        llama_decode(ctx, llama_batch_get_one(tokens.data(), n_tokens));
        n_processed += n_tokens;
    }

    llama_synchronize(ctx);
}
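// benchmark token generation: decode a single token at a time, synchronizing after each call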
static void test_gen(llama_context * ctx, int n_gen, int n_threads) {
    llama_set_n_threads(ctx, n_threads, n_threads);

    const llama_model * model   = llama_get_model(ctx);
    const int32_t       n_vocab = llama_n_vocab(model);

    llama_token token = llama_add_bos_token(model) ? llama_token_bos(model) : std::rand() % n_vocab;

    for (int i = 0; i < n_gen; i++) {
        llama_decode(ctx, llama_batch_get_one(&token, 1));
        llama_synchronize(ctx);
        token = std::rand() % n_vocab;
    }
}
static void llama_null_log_callback(enum ggml_log_level level, const char * text, void * user_data) {
    (void) level;
    (void) text;
    (void) user_data;
}
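// construct the printer for the requested output format (NONE yields no printer)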
static std::unique_ptr<printer> create_printer(output_formats format) {
    switch (format) {
        case NONE:
            return nullptr;
        case CSV:
            return std::unique_ptr<printer>(new csv_printer());
        case JSON:
            return std::unique_ptr<printer>(new json_printer());
        case JSONL:
            return std::unique_ptr<printer>(new jsonl_printer());
        case MARKDOWN:
            return std::unique_ptr<printer>(new markdown_printer());
        case SQL:
            return std::unique_ptr<printer>(new sql_printer());
    }
    GGML_ABORT("fatal error");
}
int main(int argc, char ** argv) {
    // try to set locale for unicode characters in markdown
    setlocale(LC_CTYPE, ".UTF-8");

#if !defined(NDEBUG)
    fprintf(stderr, "warning: asserts enabled, performance may be affected\n");
#endif

#if (defined(_MSC_VER) && defined(_DEBUG)) || (!defined(_MSC_VER) && !defined(__OPTIMIZE__))
    fprintf(stderr, "warning: debug build, performance may be affected\n");
#endif

#if defined(__SANITIZE_ADDRESS__) || defined(__SANITIZE_THREAD__)
    fprintf(stderr, "warning: sanitizer enabled, performance may be affected\n");
#endif

    cmd_params params = parse_cmd_params(argc, argv);

    // initialize backends
    ggml_backend_load_all();
    auto * cpu_dev = ggml_backend_dev_by_type(GGML_BACKEND_DEVICE_TYPE_CPU);
    if (!cpu_dev) {
        fprintf(stderr, "%s: error: CPU backend is not loaded\n", __func__);
        return 1;
    }
    auto * cpu_reg = ggml_backend_dev_backend_reg(cpu_dev);
    auto * ggml_threadpool_new_fn = (decltype(ggml_threadpool_new) *) ggml_backend_reg_get_proc_address(cpu_reg, "ggml_threadpool_new");
    auto * ggml_threadpool_free_fn = (decltype(ggml_threadpool_free) *) ggml_backend_reg_get_proc_address(cpu_reg, "ggml_threadpool_free");

    // initialize llama.cpp
    if (!params.verbose) {
        llama_log_set(llama_null_log_callback, NULL);
    }
    llama_backend_init();
    llama_numa_init(params.numa);
    set_process_priority(params.prio);

    // initialize printer
    std::unique_ptr<printer> p     = create_printer(params.output_format);
    std::unique_ptr<printer> p_err = create_printer(params.output_format_stderr);

    if (p) {
        p->fout = stdout;
        p->print_header(params);
    }

    if (p_err) {
        p_err->fout = stderr;
        p_err->print_header(params);
    }

    std::vector<cmd_params_instance> params_instances = get_cmd_params_instances(params);

    llama_model *               lmodel    = nullptr;
    const cmd_params_instance * prev_inst = nullptr;

    int  params_idx   = 0;
    auto params_count = params_instances.size();
    for (const auto & inst : params_instances) {
        params_idx++;
        if (params.progress) {
            fprintf(stderr, "llama-bench: benchmark %d/%ld: starting\n", params_idx, params_count);
        }
        // keep the same model between tests when possible
        if (!lmodel || !prev_inst || !inst.equal_mparams(*prev_inst)) {
            if (lmodel) {
                llama_free_model(lmodel);
            }

            lmodel = llama_load_model_from_file(inst.model.c_str(), inst.to_llama_mparams());
            if (lmodel == NULL) {
                fprintf(stderr, "%s: error: failed to load model '%s'\n", __func__, inst.model.c_str());
                return 1;
            }
            prev_inst = &inst;
        }

        llama_context * ctx = llama_new_context_with_model(lmodel, inst.to_llama_cparams());
        if (ctx == NULL) {
            fprintf(stderr, "%s: error: failed to create context with model '%s'\n", __func__, inst.model.c_str());
            llama_free_model(lmodel);
            return 1;
        }

        test t(inst, lmodel, ctx);

        llama_kv_cache_clear(ctx);
        // cool off before the test
        if (params.delay) {
            std::this_thread::sleep_for(std::chrono::seconds(params.delay));
        }
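
        // --delay (disabled by default) sleeps for N seconds before each test; this
        // helps long runs on thermally limited platforms such as phones and laptops,
        // e.g. the hypothetical invocation: llama-bench -m model.gguf --delay 5

        // a fresh threadpool is created for each test with exactly the number of
        // threads that test needs; keeping a larger pool alive across tests would
        // cause avoidable load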
        struct ggml_threadpool_params tpp = ggml_threadpool_params_default(t.n_threads);
        if (!parse_cpu_mask(t.cpu_mask, tpp.cpumask)) {
            fprintf(stderr, "%s: failed to parse cpu-mask: %s\n", __func__, t.cpu_mask.c_str());
            exit(1);
        }
        tpp.strict_cpu = t.cpu_strict;
        tpp.poll       = t.poll;
        tpp.prio       = params.prio;
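
        // strict_cpu enforces the parsed cpumask (an all-zero mask means the default,
        // usually inherited, affinity); poll is the polling level (0 = no polling,
        // higher values spin longer before waiting on a cond. var); prio is the
        // scheduling priority of the pool's threads.
        // ggml_threadpool_new_fn/ggml_threadpool_free_fn are, presumably, function
        // pointers to ggml_threadpool_new/ggml_threadpool_free resolved earlier in
        // the file (not shown here).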
        struct ggml_threadpool * threadpool = ggml_threadpool_new_fn(&tpp);
        if (!threadpool) {
            fprintf(stderr, "%s: threadpool create failed : n_threads %d\n", __func__, tpp.n_threads);
            exit(1);
        }

        llama_attach_threadpool(ctx, threadpool, NULL);
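
        // the context now runs its graph computation on this pool; the trailing NULL
        // leaves the optional batch-processing threadpool unset, presumably falling
        // back to the same pool for both prompt processing and generation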

        // warmup run
        if (t.n_prompt > 0) {
            if (params.progress) {
                fprintf(stderr, "llama-bench: benchmark %d/%zu: warmup prompt run\n", params_idx, params_count);
            }
            //test_prompt(ctx, std::min(t.n_batch, std::min(t.n_prompt, 32)), 0, t.n_batch, t.n_threads);
            test_prompt(ctx, t.n_prompt, t.n_batch, t.n_threads);
        }
        if (t.n_gen > 0) {
            if (params.progress) {
                fprintf(stderr, "llama-bench: benchmark %d/%zu: warmup generation run\n", params_idx, params_count);
            }
            test_gen(ctx, 1, t.n_threads);
        }
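
        // warmup results are not recorded; these runs prime caches and memory so
        // that the timed repetitions below measure steady-state performance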

        for (int i = 0; i < params.reps; i++) {
            llama_kv_cache_clear(ctx);

            uint64_t t_start = get_time_ns();

            if (t.n_prompt > 0) {
                if (params.progress) {
                    fprintf(stderr, "llama-bench: benchmark %d/%zu: prompt run %d/%d\n", params_idx, params_count,
                            i + 1, params.reps);
                }
                test_prompt(ctx, t.n_prompt, t.n_batch, t.n_threads);
            }
            if (t.n_gen > 0) {
                if (params.progress) {
                    fprintf(stderr, "llama-bench: benchmark %d/%zu: generation run %d/%d\n", params_idx, params_count,
                            i + 1, params.reps);
                }
                test_gen(ctx, t.n_gen, t.n_threads);
            }

            uint64_t t_ns = get_time_ns() - t_start;
            t.samples_ns.push_back(t_ns);
        }
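
        // each repetition contributes one wall-clock sample (in ns) to t.samples_ns;
        // the printers below report the per-test results from these samples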

        if (p) {
            p->print_test(t);
            fflush(p->fout);
        }

        if (p_err) {
            p_err->print_test(t);
            fflush(p_err->fout);
        }

        llama_perf_context_print(ctx);

        llama_free(ctx);

        ggml_threadpool_free_fn(threadpool);
    }

    llama_free_model(lmodel);

    if (p) {
        p->print_footer();
    }

    if (p_err) {
        p_err->print_footer();
    }

    llama_backend_free();

    return 0;
}