

Here’s the thing: they’re actually a natural fit for it, because if anyone ought to understand the use cases, strengths, weaknesses, and implications of a technology, it’s a university centered on technology research.
So they looked carefully at this guy’s paper, realized he was making outrageous and unsupportable claims about what AI could do, failed to reproduce his results, and concluded he was full of shit. That’s exactly what we should expect from MIT.
Thank you. As useful as LLMs can be under certain circumstances, they are not the only type of AI.