@BenMildenhall
We don't expect LLMs to multiply numbers or sort lists directly within their output token stream. Instead, we ask them to emit code and execute it in a separate runtime. Why predict the opposite outcome for simulating interactive worlds? https://t.co/b2QNOBTWjN
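The emit-and-execute pattern the post alludes to can be sketched minimally as follows. `ask_llm` is a hypothetical stand-in for a model call that returns source code instead of a final answer; a real system would run the code in an isolated runtime rather than a bare `exec`.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: the model replies with code
    # rather than computing the answer in its token stream.
    return "result = sorted([5, 3, 8, 1])"

def run_in_runtime(code: str):
    # Execute the emitted code in a fresh namespace, a toy
    # substitute for a separate, sandboxed runtime.
    scope: dict = {}
    exec(code, scope)
    return scope["result"]

answer = run_in_runtime(ask_llm("Sort the list [5, 3, 8, 1]."))
print(answer)  # [1, 3, 5, 8]
```

The division of labor is the point: the model plans and writes the program, while the runtime does the exact computation, and the post asks why world simulation should not split the same way.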