@gerardsans
@jdegoes If you fold this into the original point: LLMs handle extensions to dense ecosystems far better than to sparse ones. Extending Python? AI will usually do fine; the training distribution is huge: libraries, examples, Stack Overflow threads, docs, tutorials. Extending Rust? Much harder: the base coverage in the training data is already thinner, so you fall outside the model's support much faster. So the real constraint isn't model capacity or new language extensions; it's the density of the existing training distribution.