

This is a great example - it kinda makes sense if you skim-read it, but butterflies have nothing to do with butter, just like hotdogs have nothing to do with dogs.
Five… six… eleven… seventeen downvotes and counting…
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and “hallucinations” are a real problem when they lead to real decisions and real consequences.
If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.
My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”
The problem is, these LLMs give no indication of when they’re making stuff up versus when they’re repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary*, so they will trust these false answers. Harmless here, but often not.
* I’m not counting the little disclaimer, because we’ve been taught to ignore small print by being faced with so much of it.
I found that searching for “some-nonsense-phrase meaning” won’t always trigger the idiom interpretation, but you can often reword it into something more saying-like.
I also found that searching in incognito mode gave better results, so perhaps it’s also affected by your settings. Maybe it’s regional as well, or based on your search results. And, as AI is non-deterministic, you can’t expect it to always work.
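If you want to experiment yourself, here’s a rough sketch of the kind of rewording I mean. The phrase and the query templates are made up purely for illustration (nothing Google documents), and none of them is guaranteed to trigger the AI summary:

```python
# A minimal sketch: generate increasingly "saying-like" query variants for a
# made-up phrase. The phrase and templates are illustrative only; whether any
# of them triggers the AI summary varies by region, settings, and the
# non-deterministic model itself.

def query_variants(phrase: str) -> list[str]:
    templates = [
        "{} meaning",                              # the bare form, which often fails
        "what does the saying {} mean",            # more saying-like phrasings
        "where does the expression {} come from",
    ]
    return [t.format(phrase) for t in templates]

if __name__ == "__main__":
    for query in query_variants("you can't iron a cloud on a Tuesday"):
        print(query)
```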
I’m assuming that these uninhabited islands don’t have a port and so this is an impossible scenario, but I don’t know enough about international shipping to be sure.
I’m not understanding why that’s an appropriate name, but maybe I need to learn more about butterflies.