@charles_irl
Nice insights in this article from @trailofbits on evaluating open & proprietary LLMs for Copilot-style autocompletion in Solidity. > a larger model quantized to 4-bit precision is better at code completion than a smaller model of the same variety https://t.co/lsUC0QGT8K