@simonw
One of the quickest ways to start playing with a good local LLM on macOS (if you have ~12GB of free disk space and RAM), using llama-server and gpt-oss-20b:

brew install llama.cpp

llama-server -hf ggml-org/gpt-oss-20b-GGUF \
  --ctx-size 0 --jinja -ub 2048 -b 2048 -ngl 99 -fa

https://t.co/xsvruU8hDD
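Once llama-server is running, it exposes an OpenAI-compatible HTTP API you can poke at from another terminal. A minimal sketch with curl, assuming the server is on its default port (8080 — adjust if you passed --port):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Say hello in five words."}
    ]
  }'

The response comes back as OpenAI-style JSON, so existing client libraries pointed at http://localhost:8080/v1 should work too.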