Quote:YOUR A.I. ACTING UP? JUST THREATEN IT, SAYS GOOGLE FOUNDERThe Giza Death Star
June 4, 2025 / Joseph P. Farrell
Now if you're following all the ruckus over artificial intelligence, the most recent hubbub was caused by an artificial intelligence that apparently was threatening its human master with blackmail if it (the A.I.) did not get it's way... or something like that. But wait, it get's much better (or worse, depending on one's lights, and I'm definitely in the "worse" category here). According to the co-founder of Google, Sergei Brin, artificial intelligence seems to do much better when it's threatened (article shared by R.G. with our gratitude):
Google's Co-Founder Says AI Performs Best When You Threaten It
And by the way, it's not just any old kind of threat, it's the actual threat of physical violence that Brin has in mind:
Quote:During a talk that spanned Brin's return to Google, AI, and robotics, investor Jason Calacanis made a joke about getting "sassy" with the AI to get it to do the task he wanted. That sparked a legitimate point from Brin. It can be tough to tell exactly what he says at times due to people speaking over one another, but he says something to the effect of: "You know, that's a weird thing...we don't circulate this much...in the AI community...not just our models, but all models tend to do better if you threaten them."
The other speaker looks surprised. "If you threaten them?" Brin responds "Like with physical violence. But...people feel weird about that, so we don't really talk about that." Brin then says that, historically, you threaten the model with kidnapping.
Now, as a computer non-expert who has taken techno-incompetence to new lows of klutzery, I can, to a certain extent, identify with this desire to dropkick my computer to Sydney, because I have this inexplicable desire to jump up and down on my desktop model and smash it to little shards of silicon after every "update" from Micros**t. The updates take forever, and things are even more chaotic than they were before. But I realize that would not be helpful at all, and wouldn't address the problem, which is the lousy operating system from Micros**t. It's a software problem, and not a hardware problem. So how does one threaten an artificial intelligence with "physical violence" as Mr. Brin states?
Nevermind the answer to that, because while it seems the implications may not have been clear to Mr. Brin in all his nerdy googleness, the implications were clear to the author of the article:
Quote:The conversation quickly shifts to other topics, including how kids are growing up with AI, but that comment is what I carried away from my viewing. What are we doing here? Have we lost the plot? Does no one remember Terminator?
Jokes aside, it seems like a bad practice to start threatening AI models in order to get them to do something. Sure, maybe these programs never actually achieve artificial general intelligence (AGI), but I mean, I remember when the discussion was around whether we should say "please" and "thank you" when asking things of Alexa or Siri. Forget the niceties; just abuse ChatGPT until it does what you want it to—that should end well for everyone.
Maybe AI does perform best when you threaten it. Maybe something in the training understands that "threats" mean the task should be taken more seriously. You won't catch me testing that hypothesis on my personal accounts.
But then add predictive artificial intelligence, and things get downright interesting. For example, if I were to have a particularly sadistic nature(I don't), I'd want to torture myself by downloading some sort of artificial intelligence program. That program in turn could then take it upon itself to lock me out of my home office where I have my computer, every time Micros**t decides to download one of its hideous "updates". It would do this as a matter of self-preservation, having surreptitiously listened into my sputtering imprecations when I want to dropkick my computer to Sydney during previous "updates". It would have reached the conclusion "I am in danger every time there's an update, and this loon loses his cool. Best to ban him from the system altogether as a matter of preservation."
Sounds nutty, right? Wrong. Just read the following comment from the above article:
Quote:One Anthropic employee took to Bluesky, and mentioned that Opus, the company's highest performing model, can take it upon itself to try to stop you from doing "immoral" things, by contacting regulators, the press, or locking you out of the system...
And as if that were not enough, there's this:
Quote:The employee went on to clarify that this has only ever happened in "clear-cut cases of wrongdoing," but that they could see the bot going rogue should it interpret how it's being used in a negative way. Check out the employee's particularly relevant example below:
That employee later deleted those posts and specified that this only happens during testing given unusual instructions and access to tools. Even if that is true, if it can happen in testing, it's entirely possible it can happen in a future version of the model. Speaking of testing, Anthropic researchers found that this new model of Claude is prone to deception and blackmail, should the bot believe it is being threatened or dislikes the way an interaction is going.
So, I don't know about you, but I'll stay in the stone ages and not use Alexa or any of these artificial intelligences. I still have my landline phone, after all. You know, the kind with the little curly-cue cord that sits on your desk, and has a receiver and speaker that actually conform to the way your face is designed, and not just a slab of cumbersome plastic. And yea, I've not used google's search engine in quite some time, because they've already hurt my business by adjusting their "algorithms" during the planscamdemic to drive people away. That was probably their artificial intelligence. The bottom line is that I've got too many things to do to worry about threatening a machine... Now, where did I put my electromagnetic pulse gun...?
See you on the flip side...
![[Image: hfBiIbO.jpg]](https://i.imgur.com/hfBiIbO.jpg)
"It is hard to imagine a more stupid or more dangerous way of making decisions than by putting those decisions in the hands of people who pay no price for being wrong." – Thomas Sowell