@emollick
Well, @random_walker proved the point he often makes: it is really hard to rule out what LLMs can or can't do. Good prompting can get the AI to solve problems that looked impossible. Or, alternatively, it is easy to be fooled by the AI seeming to solve problems it didn't. https://t.co/UWIcTnyoHb