

it is plausible that, stumbles aside, AI achieves superintelligence in the very near future.
No, it absolutely isn’t. AI hasn’t shown even the slightest sign of actual intelligence so far, so there is no reason to assume it will become superintelligent any time soon without some major revolutionary breakthrough (and breakthroughs are by definition unpredictable; they cannot be extrapolated from prior developments).
I am not talking about degrees of intelligence at all. Measuring LLMs in IQ makes no sense because they literally have no model of the world; all they do is reproduce language by statistically analyzing how the same words appeared together in their training data. That is the reason for all those inconsistencies in their output: they literally have no understanding of what they are saying.
An LLM, for example, can’t tell that it is inconsistent to describe someone losing their right arm in one paragraph and then have them perform an activity that requires both hands in the next; the toy sketch below illustrates why a purely statistical generator has no basis for noticing that.
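To make the point concrete, here is a deliberately minimal sketch of statistical text generation: a toy bigram model in Python. The corpus, function names, and sampling scheme are all illustrative assumptions, and real LLMs are neural networks over vast token corpora rather than bigram tables, but they share the core property described above: they are trained only to continue text the way the training data statistically suggests.

```python
import random
from collections import defaultdict

# Toy training corpus (illustrative). The model will learn only which
# word tends to follow which, nothing about arms, hands, or bodies.
corpus = "the man lost his right arm . the man clapped his hands .".split()

# Count which words followed each word in the training data.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Emit words by sampling whatever statistically followed them in training."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # pure co-occurrence frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))
# The model happily produces both "the man lost his right arm" and
# "the man clapped his hands", with no representation anywhere that
# these two statements conflict: it tracks word sequences, not the
# world those words describe.
```

Scaling this idea up with neural networks yields vastly more fluent output, but the training objective remains next-token prediction over text, which is the sense in which the output is statistical reproduction rather than understanding.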