You could try something like a network filter that is out of the user's control (e.g. on the router, or something like a Raspberry Pi running Pi-hole), but you'd probably have to curate the blocklist manually, unless somebody else has published an anti-LLM list somewhere. And of course, it will only be as effective as the user's inability (or unwillingness) to route around that blocklist.
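For anyone curious what that curation might look like, here's a minimal sketch. The domains are purely illustrative guesses, not a vetted anti-LLM list, and the output path assumes a typical Pi-hole install:

```python
#!/usr/bin/env python3
"""Write a hosts-format blocklist that Pi-hole (or plain dnsmasq) can ingest.

The domain list below is illustrative only -- not a complete or authoritative
list of LLM providers -- and the output path is an assumption about a typical
Pi-hole setup.
"""

# Hypothetical starting point; a real list would need manual curation.
LLM_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
]

# Assumed location for a custom Pi-hole list; adjust for your install.
OUTPUT_PATH = "/etc/pihole/custom-llm-blocklist.txt"

def main() -> None:
    # Hosts-file convention: map each blocked domain to 0.0.0.0.
    lines = [f"0.0.0.0 {domain}" for domain in LLM_DOMAINS]
    with open(OUTPUT_PATH, "w") as f:
        f.write("\n".join(lines) + "\n")
    print(f"Wrote {len(lines)} entries to {OUTPUT_PATH}")

if __name__ == "__main__":
    main()
```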
LLMs can also be run locally, so blocking all known network services that provide access still won’t prevent a dedicated user talking to an AI.
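To illustrate how low the bar for local inference is, here's a minimal sketch using Hugging Face transformers. The model and prompt are arbitrary choices; after the initial weight download, nothing in this touches a hosted AI service at all:

```python
# Minimal local text-generation sketch; no hosted LLM service involved.
# Model choice and prompt are illustrative, not a recommendation.
from transformers import pipeline

# First run downloads the weights; afterwards this works fully offline.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Is the AI always right?", max_new_tokens=40)
print(result[0]["generated_text"])
```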
If one’s at the point where one runs local LLMs, I would assume one is smart enough to explore the capabilities (or lack thereof) pretty quickly.
Took me less than a week to probe various models myself, concluding with “anybody considering AIs to be oracles of objective truth has no contact with reality”.
If they are a threat to themselves or others, they can be put on a several-day hold at a mental health facility (72 hours? 48?). They aren’t released until they’re no longer a threat to themselves or others, and they’re usually medicated and put through some sort of therapy.
The obvious cure to this is better education and mental health services. Better education about A.I. will help people understand what an A.I. is, and what it is not. More mentally stable people will mean fewer mentally unstable people falling into this area. Oversight on A.I. may be necessary for this type of problem, though I think everyone is just holding their breath, hoping it’ll fix itself as it becomes smarter.
When you’re released, though, you’re released right back into the environment that you left (in the US anyway). There’s the ol’ computer waiting for you before the meds have reached efficacy. Square one and a half.
Is there any way to forcibly prevent a person from using a service like this, other than confiscating their devices?
Had this exact thought. But number must go up. Hell, for the suits, addiction and dependence on AI just guarantees the ability to charge more.
Currently, no. If you’re asking for suggestions, maybe a blacklist like the self-exclusion lists most countries have for gambling would be an option.
Or maybe just destroy all AI…
This sounds like a job for an AI shrink!