I wasn't sure which forum category this nutty subject matter (it's a doozy) would best fit, so dropping here.
Aside from "biologics" (a term I'm still unsure about), over in the related fantasy realm I learned a crazy, wild, new acronym. Are you a "TESCREAList"?
Quote:TESCREAL—pronounced “tess-cree-all.” It’s a strange word that you may have seen pop up over the past few months. The renowned computer scientist Dr. Timnit Gebru frequently mentions the “TESCREAL” ideologies on social media, and for a while the Twitter profile of billionaire venture capitalist Marc Andreessen read: “cyberpunk activist; embracer of variance; TESCREAList.” The Financial Times, Business Insider and VentureBeat have all used or investigated the word. And The Washington Spectator published an article by Dave Troy titled, “Understanding TESCREAL—The Weird Ideologies Behind Silicon Valley’s Rightward Turn.”
My guess is that the acronym will gain more attention as the topic of artificial intelligence becomes more widely discussed, along with questions about the strange beliefs of its most powerful Silicon Valley proponents and critics. But what the heck does “TESCREAL” mean and why does it matter?
I have thought a lot about these questions, as I coined the term in an as-yet unpublished academic paper, co-written with Gebru, tracing the influence of a small constellation of interrelated and overlapping ideologies within the contemporary field of AI. Those ideologies, we believe, are a central reason why companies like OpenAI, funded primarily by Microsoft, and its competitor, Google DeepMind, are trying to create “artificial general intelligence” in the first place.
The problem that Gebru and I encountered when writing our paper is that discussing the constellation of ideologies behind the current race to create AGI, and the dire warnings of “human extinction” that have emerged alongside it, can get messy real fast. The story of why AGI is the ultimate goal— with some seeing ChatGPT and GPT-4 as big steps in this direction — requires talking about a lot of long, polysyllabic words: transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism. I have written about the last two of these in previous articles for Truthdig, which probed how they have become massively influential within Silicon Valley. But you don’t have to look very hard to see their impact, which is pretty much everywhere. “TESCREAL” is one solution to the problem of talking about this cluster of ideologies without a cluttering repetition of almost-impossible-to-pronounce words. John Lennon captured the problem when he sang, “This-ism, that-ism, is-m, is-m, is-m.”
To minimize the “is-m, is-m, is-m,” I proposed the acronym “TESCREAL,” which combines the first letter of the ideologies listed above, in roughly the same order they appeared over the past three and a half decades. Gebru and I thus began to reference the “TESCREAL bundle of ideologies” to streamline our discussion, which gave rise to the terms “TESCREALism” (a reference to the bundle as a whole) and “TESCREAList” (someone who endorses most or all of this bundle). So, we traded a messy list of words for a single clunky term; not a perfect fix, but given the options, a solution we were happy with.
Little that’s going on right now with AI makes sense outside the TESCREAL framework. The overlapping and interconnected ideologies that the “TESCREAL” acronym captures are integral to understanding why billions of dollars are being poured into the creation of increasingly powerful AI systems, and why organizations like the Future of Life Institute are frantically calling for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” They also explain the recent emergence of “AI doomerism,” led by the TESCREAList Eliezer Yudkowsky, who in a recent TIME op-ed endorsed the use of military strikes against data centers to delay the creation of AGI, including at the risk of triggering an all-out thermonuclear war.
At the heart of TESCREALism is a “techno-utopian” vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.
[skipping down to last two paragraphs of essay:]
If TESCREALism was not an ascendant ideology within some of the most powerful sectors of society, we might chuckle at all of this, or just roll our eyes. But the frightening fact is that the TESCREAL bundle is already shaping our world, and the world of our children, in profound ways. Right now, the media, the public, policymakers and our political leaders know little about these ideologies. As someone who participated in the TESCREAL movement over the past decade, but who now views it as a destructive and dangerous force in the world, I feel a moral obligation to educate people about what’s going on. Although the term “TESCREAL” is strange and clunky, it holds the keys to making sense of the “accelerationist” push to develop AGI as well as the “doomer” backlash against recent advancements, driven by fears that AGI — if created soon — might annihilate humanity rather than ushering in a utopian paradise.
If we are to have any hope of counteracting this behemoth, it is critical that we understand what “TESCREAL” means and how these ideologies have infiltrated the highest echelons of power. To date, the TESCREAL movement has been subject to precious little critical inquiry, let alone resistance. It’s time for that to change.
As you've noticed the essay has many embedded links to explore and I only quoted about 30%.
Link to full piece: The Acronym Behind Our Wildest AI Dreams and Nightmares
He's since removed the acronym from his 'X' profile. However, he now brands himself with "e/acc"...
...which is short for "effective accelerationism", and he too has written an essay, stating: "AI will not destroy the world, and in fact may save it."
h+ pedia
Anyways, if you punch in "TESCREAL beliefs" into the machine you'll get a cauldron brew of links.
AI and the threat of "human extinction": What are the tech-bros worried about? It's not you and me
I ran the acronym through the anagram machine and got: "Care Lest"...
"He who fights with monsters might take care lest he thereby become a monster. And if you gaze for long into an abyss, the abyss gazes also into you." — Friedrich Nietzsche, Beyond Good and Evil, Aphorism 146
In brief, that quote is generally understood as a warning: when we battle evil forces or individuals, we must take care not to be consumed by the very evil we are fighting, and staring too long into that darkness gives it power over us in turn. On a related note, don't take Globalist der Hochklaus von Blohschwab Freiherr von Bomburst und Bloviation and his lackeys at Davos too seriously, else be eaten alive, literally. That club is a round-table of existential nihilists.
Quote of the day, directly relevant to the TESCREAL/Technocrat movement, from the philosopher Yehoshua Bar-Hillel:
"I think that all that talk about the destiny or goals of humanity is seductive talk which scientists should try to oppose. Any such talk will quickly lead to the recognition of somebody who is setting these goals and of a privileged class of people who know from the horse's mouth what these goals are." — P. Weingartner and G. Zecha (eds.), Induction, Physics and Ethics: Proceedings and Discussions of the 1968 Salzburg Colloquium in the Philosophy of Science, p. 372
Quote from Frank Herbert's God Emperor of Dune:
"What do such machines [AI] really do? They increase the number of things we can do without thinking. Things we do without thinking - there’s the real danger."
Ok, I'm now exiting the TESCREAL realm and will leave it to you to dive deeper... or not... and if you do find or trip across anything new, interesting, alarming, or bizarre in this AI soup, please add it here to the "Care Lest" thread.
Thanks!
"It is hard to imagine a more stupid or more dangerous way of making decisions than by putting those decisions in the hands of people who pay no price for being wrong." – Thomas Sowell