I’ve been thinking for a while about how those seeking and achieving leadership roles in nations and massive corporations deal with the enormity of the role. Putting aside pre-existing mental health concerns, it must take enormous ambition and a strong sense of agency to take up such roles. My suspicion is that such roles are actually characterised by a lack of agency, coupled, of course, with all the responsibility. There are many links in the chain from the conception of an idea to its manifestation, and your personal influence does not extend far down that chain. The bigger the entity, the greater the complexity, and to my mind this is too great a burden for a single person. The sane answer would be to reduce the size of the entity and so reduce the complexity. The insane response is to increase the complexity and hand it over to a machine.
Exactly. 💯
Fun fact: since Alphabet has been trying to work out the kinks in their garbage AI, their CO2 emissions have risen by 50%.
Well put, Rachel. Spot on. Cyber-theology with its pants down.
Just a note to say ~ Your writing is superb. I recently did a search of The Guardian website for the words "ecological overshoot", as it is a verboten term in the media. It pops up a few times, but the most recent is a story written by a woman... but her name doesn't have the "click-on" function that so many authors have on their page, and yet I knew I knew it from somewhere! Of course it's your story I was reading. But I'm an old man (I turn 65 this Friday), and my memory is blah. Keep writing; you say things here I've been dancing with for decades, thanks to the writings of others: Lewis Mumford's classic two-volume "The Myth of the Machine", Norbert Wiener's fascinating "God & Golem, Inc.", and one of my favorite rereads for almost forty years, Marilyn French's "Beyond Power". Stay in the fight. The boys' clubs are dangerously dumb, and are behind the wars and all the mistakes... say it in words those of us who read will appreciate, and take it to the corridors of power, and shut them down, while at the same time you're busy building new corridors... of kindness.
Thank you!
🎯
Hi again Rachel,
While there are weirdos in every movement, not all are that far-fetched. Sure, AI has its Ray Kurzweils and other Singularity Saviours preaching a message of staying alive just long enough to reach “Escape Velocity”, so that for every year you live, medicine gives you another 1.5 back. They dream of AI improving medicine to the point that one day they inject medical nano-droids that can repair any DNA damage and telomeres, and we’ll live forever. Or upload our minds into a dematerialised Matrix “Heaven.” Weirdly, our vastly different worldviews have a similar reaction at this point: we both seem to shudder at the thought. But where this podcast interview really went off the rails was when you failed to interview someone who actually knew anything about it! You both seemed so dumbfounded about how AI could possibly help the ‘real world’ - and sort of scoffed and laughed that maybe AI designers believed their code would somehow magically impact the "real world".
First, I winced when you and Paul lamented that AI was designing more efficient mining systems to get the rare earths. What is wrong with that? Better mining in one area might mean less mining in another. I know you don't like the term 'sacrifice zones' - but I think it is a helpful concept if it helps us limit the number of them, and ultimately limit our overall environmental impact.
Second, have you not heard that AI invented a series of EV batteries that require 70% less lithium? That makes our existing lithium reserves last roughly 3 times longer - or might indeed limit how much lithium mining we need to do. AI also invented a permanent magnet that does not NEED rare earths in the first place. Not to mention the countless new medicines we are about to trial.
Third - and this is where the whole interview really went off the rails!
You wanted to know how AI was going to impact the "real world". Have you not been watching the news? Not heard what Elon Musk, OpenAI, and many others are pouring billions into? Not heard of Optimus, Unitree, Figure, Ameca, Alter 3, ARMAR-6, Apollo, Atlas, Beomni, Digit, Jiajia, Kime, Nadine, Nao, OceanOne, Pepper, Robonaut 2, Phoenix, and Eve?
This AI business? It’s mostly about giving 'minds' to ROBOTS! That is how they plan to impact the real world.
I want to say up front - I don't know if anything I'm going to describe below can happen. No one does. I'm Bright Green - and accept the peer-reviewed Energy Transition papers that say we can have all the clean energy we need to run 95% of what we're running today without cooking the planet. There's also a bunch of political changes I would like to see - and I'm not saying that the energy transition alone is all we need. But if this Robot Revolution happens as described below - EVERYTHING changes.
We don't need the Robot Revolution to prevent climate change and save the biosphere - but it sure might help.
So what does the Robot Revolution look like?
In the land of "Once upon a time" - let's describe my Solarpunk dream!
It all starts with their goal of making AGI - "Artificial General Intelligence" - possible. That means AI will be able to navigate robots through our human world, doing more and more of our human jobs. Not just like an autistic savant in certain specialized areas – but having “General” intelligence. Able to clean homes, do plumbing, work in factories, be accountants – everything. In the proverbial land of “One day.”
Experts like Ilya Sutskever (who figured out how to run deep learning on GPUs) estimate that, because GPUs increase in speed MUCH faster than Moore’s Law, we’ll hit 1 trillion calculations per second around 2029. That's when some estimate we'll hit AGI.
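For what it's worth, here's a toy version of that kind of extrapolation - a sketch only, where the starting throughput and doubling time are my own illustrative assumptions, not Sutskever's actual figures:

```python
# Toy compute-growth extrapolation in the spirit of the claim above.
# start_compute, target_compute and doubling_years are illustrative
# assumptions, not anyone's real projection.
import math

start_compute = 1e9     # assumed starting throughput (calculations/sec)
target_compute = 1e12   # the "1 trillion calculations per second" figure
doubling_years = 1.0    # assumed doubling time, faster than Moore's ~2 years

doublings = math.log2(target_compute / start_compute)
print(f"{doublings:.1f} doublings = ~{doublings * doubling_years:.1f} years")
```

Under those made-up numbers you get about ten doublings, or about a decade; change either assumption and the date moves a lot, which is the whole point of treating 2029 as a guess rather than a schedule.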
Ilya's working to go straight to ASI - "Artificial Super Intelligence." That's beyond Star Trek's post-scarcity - and moving into "The Culture" from Iain M. Banks, where demigod-like "Minds" run enormous orbital space stations as artificial worlds, with 50 billion people in each and nature thriving. But AGI? What does that look like?
Elon recently claimed future Optimus sales could “One day” make Tesla worth $25 TRILLION - because of his ROBOTS - not his EVs.
The Robot story IS about utterly changing the physical world and how we do everything. It really kicks in when Robots start making Robots - from the mining and smelting, through the extra wind and solar they’ll need, to the factory floor. That’s when the future REALLY goes exponential - in the REAL world.
That's when stock market valuations of $25 TRILLION become meaningless - and governments may just have to nationalise EVERYTHING to try to pass laws and direct it all. It's when we end up accidentally living in post-scarcity Techno-Utopian Socialism.
Again - it's like something out of Star Trek. Or "I, Robot". Because when labour is free because it is no longer human, and the droids are self-replicating - everything changes. Crazy dreams become possible. A global Universal Basic Income of $200,000 just for being a citizen of this planet becomes possible. And the free labour to run everything on low-impact technologies, restore every mine site, rescue every ecosystem, collect every piece of plastic and garbage from every land and every sea, eradicate every pest and pull up every weed - all this becomes possible when labour is free and you have an army of eco-bots restoring everything.
Nature and humanity could thrive like never before. Education for children might feel like an indigenous village gathering rather than fodder for the industrial machine. School "holidays" might be much longer, giving families time to fly around the world - visiting new ecosystems they care about, and maybe adopting some droids they can log into when back home to get the latest video update on that Californian redwood planting project in Australia's outback. Whatever.
"One day" all our food will come from a mix of Precision Fermentation, giant seaweed farms making seaweed protein powder, and Permaculture farms.
Biologists will draw up maps of biomes to restore, and give us free rein in some 'theme parks' where we try new ones!
And 'One day' - those Mensa types may just work with the AI to fire self-replicating factory droid ships at the asteroid belt and Mars - so that 'One day' gifts start raining down from the sky. Will they be rare earths - or the latest droids? Will all manufacturing move to space - apart from some fantastic recycling centres here on Earth? Will Earth be zoned "Parks and residential" while most industry ends up in space? How big can the droids build O'Neill cylinders for those of us who wish to drift off to Mars? Will we finally terraform Mars - and 'one day' see whales swimming in oceans on Mars? Now that's saving the whales!
Of course - I've drifted off into Sci-Fi hyperbole. Or have I?
We don't know. You don't know. Ilya Sutskever does not know. No one does.
We've never created AGI before.
But all this becomes possible just with AGI. Let alone ASI.
My point? Try not to scoff at things you don't know much about. Or at least, try to interview someone who does know something about it - so that your audience can know as well.
Maybe it's time to interview an environmentalist who CAN explain how 'coding can impact the real world' - so you at least know what the claims are? Try this guy from RethinkX.
That's who I would platform if I were on your show. (Winks)
https://www.youtube.com/watch?v=sT6WfUZp8es
Nothing wrong with pointing out the benefits of AI. The current capacities of AI technologies tend to function as a cognitive prosthesis or amplifier for human projects and desires, and it’s quite reasonable to indicate where there are some laudable ‘real world’ outputs.
However, with multiple high-profile AI insiders voicing worries and sounding alarms (e.g. Mo Gawdat, Geoffrey Hinton, Eliezer Yudkowsky), one must rationally also consider the dangers arising from (a) the current direction of AI development and (b) the practical difficulties, or logical impossibilities, of adequately aligning these technologies with human intentions and values, not to mention ecological and environmental systems. Unlike any other technology in human history, AI technologies have an exceptionally high level of optionality, rendering them qualitatively dissimilar to anything else, and able to amplify human impacts across multiple domains. Recent large-scale surveys of AI specialists have identified significant (i.e. circa 50% of respondents) worries about a non-trivial (i.e. 10%+) risk of existentially catastrophic outcomes arising from the technologies. Some AI insiders, such as Yudkowsky, consider an AI catastrophe to be virtually unavoidable, while even AI safety moderates, such as Paul Christiano, are on record as saying that they think they are more likely to die from an AI-related risk than anything else in their lives.
Even for those who argue for the many advantages, affordances and benefits of AI technologies, the potential risks and possibilities of harms should at least warrant the very careful application of the precautionary principle and/or a risk management approach to their design and implementation. Do we need to utilise these technologies in order to actualise ecological values or human social flourishing? What could follow socially, politically, economically and existentially from the widespread implementation of AI technologies? Lest this sound like a slippery slope argument (i.e. that starting to make use of these technologies will necessarily lead to some future catastrophe), arguably when moving along any potentially slippery surface, one probably ought to proceed with the greatest of care. As with much of our thinking about climate change, is a one-in-six or one-in-ten chance of an existentially catastrophic outcome worth taking a risk on? Even if AI seems to offer a future post-scarcity utopia, I think some would say that rolling those dice isn’t worth the risk (If 1-2 = utopia, 3-5 = mixed outcomes, 6 = annihilation, should "we" roll the dice?).
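To make the dice analogy explicit, here is a minimal sketch of the odds implied by that mapping, nothing more:

```python
# Outcome odds implied by the die mapping above:
# 1-2 = utopia, 3-5 = mixed outcomes, 6 = annihilation.
from fractions import Fraction

outcomes = {"utopia": [1, 2], "mixed": [3, 4, 5], "annihilation": [6]}
for name, faces in outcomes.items():
    print(f"P({name}) = {Fraction(len(faces), 6)}")
# A 1-in-6 chance of annihilation is the point of the question: no upside
# on the other faces obviously justifies a roll you only get to make once.
```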
While one can imagine – often through the lens of Science Fiction – the benefits of AI (e.g. you mention the benevolent Minds of Banks’ Culture, or perhaps one might also note the AIs of Neal Asher’s Polity universe, who engineered a Quiet Revolution and now efficiently and semi-benevolently manage human affairs from the background), there is a certain wishful thinking here, projecting the ‘better angels of our natures’ onto AI and then amplifying them a hundred-fold. However, I think in the short or intermediate term, AI is probably better thought of as something that can and will multiply human impacts a hundred or thousand-fold. The rapidly increasing energy- and resource-hungry demands of Big Data and AI are increasingly well-documented (and provide little indication of a post-scarcity future but rather an even more ecologically depleted world), the more efficient mining of rare earth minerals example from the interview simply evidences the potential for Jevons Paradox on steroids (i.e. efficiency producing more GDP growth), plus – crucially I think – AI provides countless opportunities for human ‘bad actors’ to produce even worse outcomes (one could simply provide an AI with a synthetic biology task to produce an uber-virus, or perhaps to hack some financial or energy infrastructure) or for the unintentional and unforeseen consequences of the tasks assigned to AIs to be performed “too efficiently” (from the infamous example of maximising paper-clip production to a more realistic but potentially disastrous task of something like ‘maximise my wealth’).
I’m not denying the potential benefits of AI, and I agree that we don’t know what will happen. However, an existential risk-management approach seems warranted for a technology likely to accelerate the human-all-too-human transformation of the ‘real world’.
P.S. Thanks for identifying yourself as Bright Green, it’s very helpful. I probably self-label as Dark Green, so I see a lot of these technologies as progress traps, amplifying a human, ecological and entropic ‘race to the bottom’. That said, I still read my Sci-Fi (Banks, Asher and many others, including Frank Herbert's Butlerian Jihad from Dune), although these days I file much of it alongside fantasy rather than speculative possibility. I’d like it to be true, but I’d similarly like magic, dragons and many other things to be true too.
Excellent post - and of course there's much here I agree with. But the main difference between your reply and my reply to this episode of Planet Critical?
An awareness that AI isn't just code running on a server somewhere. That it could have real-world outcomes.
And I agree that they are not always good! For instance - in the (rather awful) Terminator 3 movie - we meet a top Pentagon general trying to save the American military from some AI worm burrowing through all their systems. It comes down to a Y/N prompt for "Would you like to upload Skynet and end the world?"
Now, I always thought that was incredibly exaggerated. As if! But note that AI doesn't even have to become a self-aware "Skynet" to be very dangerous. Something might INDICATE an attack to an AI - and it would just respond without questioning it.
What if there's an impetus to gradually hand over military control - including ICBMs - to AI because human reaction times are too slow? If Russia hands it over to AI - will we be at a severe disadvantage? What could possibly go wrong? My son is studying for a Masters in computer science, and I'm fast reaching the point where I need to set a curfew on certain conversations with him or I will not sleep! This next episode handles exactly the Terminator 3 scenario - and why there might be an irresistible temptation to hand these things over to AI.
https://www.humanetech.com/podcast/war-is-a-laboratory-for-ai-with-paul-scharre
If there's ANYTHING I want controlled by humans, it's ICBMs. Everyone should memorize the name Stanislav Petrov - he literally saved us from WW3 in 1983. Something INDICATED an attack - but he just wanted to wait and double-check before ending the world.
https://www.washingtonpost.com/podcasts/retropod/the-soviet-officer-who-stopped-world-war-iii-1/
I don't want an AI Stanislav - or we might not be here having this conversation!
Yeah, there have been a few events like the 1983 Able Archer incident that you mention, where a human on the ground may have prevented nuclear war or escalation; although I daresay a few events have been human-triggered too. Scary how we have become desensitized to the risk of nuclear war (I think risk homeostasis is the term for the process, rather like the apocryphal frog in the heating saucepan), especially when multiple studies suggest that the likelihood of nuclear war during the last fifty years has been somewhere between 40 and 60%. Have a look at the Pristina Airport incident in 1999 if you like scaring yourself; only Captain James Blunt's refusal to follow orders and "start the third world war" prevented a rather nasty turn of events getting much worse.
There's an irony in the science of all this. The global warming studies into the Australian mega-fires of 2019 - and how the smoke particulates lofted so high in pyrocumulonimbus clouds - helped refine the data for nuclear winter climate models. The outcome for the Northern Hemisphere is NOT good! The world loses 360 million in the first hours of the war - but then up to 5 BILLION starve to death in the nuclear winter. Check the horrible map here.
https://eclipsenow.wordpress.com/nuclear-war/
This all seems spot on, Rachel. I remember a lot of these ideas from back in the early 2000s when I was teaching a course on Religion and the Media. There were a variety of books reflecting on the kind of points you are making here (and this was all pre-social media, ChatGPT, etc.): Erik Davis’ TechGnosis, William Stahl’s God and the Chip, David Noble’s The Religion of Technology, Doug Groothuis’ The Soul in Cyberspace, Jennifer Cobb’s Cybergrace, Margaret Wertheim’s The Pearly Gates of Cyberspace. Like I say, there was a lot of this stuff published in the late 1990s and early 2000s. Basically, a bunch of old and often rather male existential anxieties, fears and spiritual longings (around death, mortality, transcendence) finding new outlets (e.g. with AI and Hi-Tech replacing God and the Singularity replacing an other-worldly salvation). I think you may like Jane Caputi’s recent Call Your “Mutha” – A Deliberately Dirty-Minded Manifesto for the Earth Mother in the Anthropocene as a salve against some of these narratives (although here it is the Earth Mother who is likely to punish us, whereas – I agree – many tech-bros seem to be subconsciously desiring an AI dominatrix to provide some discipline!). Good luck with the book.
I expect the benefits, like the harms, will come from the uses humans put AIs to, rather than anything intrinsic to the "nature" of AI. Used to identify and target people's vulnerabilities, to influence and persuade, to track and target them politically, to nullify dissent and protect the corrupt, they could prove deeply harmful. Handing real-time warfare - control of weapons systems - to AI could reduce collateral damage... or take the restraint option out of the hands of people who know the costs of war.
Used to optimize critical technologies - identifying promising battery chemistries by modeling them, running complex power grids, improving productivity - they can be beneficial.
The lead-us-to-the-promised-land, species-immortality, Longtermist kinds of ideology are, in my view, a distraction. Even where those involved believe wholeheartedly and fervently in such objectives, they are more likely to prove an ongoing waste of resources, trying to force a future that I think can only ever emerge from a healthy, wealthy Earth economy doing things in space because they are cost-effective and beneficial. Popularized by... targeting those innate human vulnerabilities to false hopes and unrealistic expectations. Only Earth has all the crucial ingredients, but it has to endure for such grand dreams ever to become possible; a thriving Earth is a prerequisite for them, not a means of escape from an Earth that is deeply dysfunctional.
A whole, fully functional, advanced industrial economy - the barest minimum just for basic survival - on Mars? I find that nonsensical, given that our most advanced industries depend on global supply chains, on economies large enough and varied enough to support advanced specialist materials, equipment and skills. I have yet to see even a list of essential minerals for such an economy, let alone maps of reserves and plans for how to exploit them. Going there and working it out afterwards won't work.
I'm not even convinced that a well-resourced "colony" with every Mars colonist working to their fullest abilities will manage even the barest essentials out of local resources - air, water, energy, food.
The problem isn't AI, it is people. Could AI work to identify and target corruption and undue influence? That could be good. But would leaders in commerce, industry and government who treat such influence as intrinsic to doing business even want that? Or would they prefer AI used to target "activists" to prevent it?
Correct in some areas. It has ALREADY invented amazing new stuff like permanent magnets that don't need ANY rare earths... https://www.popularmechanics.com/science/green-tech/a61147476/ai-developed-magnet-free-of-rare-earth-metals/
…and an EV battery that uses 70% LESS lithium - stretching existing supplies on Earth out to roughly 3 times what we originally had.
https://news.microsoft.com/source/features/ai/how-ai-and-hpc-are-speeding-up-scientific-discovery/
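A quick back-of-envelope check on that stretch factor, taking the claimed 70% saving at face value:

```python
# Back-of-envelope: if each battery needs 70% less lithium, the same
# reserves build 1 / (1 - 0.70) = ~3.3x as many batteries.
reduction = 0.70               # claimed lithium saving per battery
stretch = 1 / (1 - reduction)  # how much further the same reserves go
print(f"Reserves stretch ~{stretch:.1f}x")  # ~3.3x, i.e. roughly "3 times"
```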
Why is AI worth it? Because IF - and I'm not saying it can happen - but IF AGI arrives, we are in a different world! It may solve ALL of our global and local environmental challenges by giving us ALL the free labour we could want. Robots today are stupid, brain-damaged creatures compared to what may be coming by around 2029 IF AGI is actually possible. I don't know if it is. I don't think ANYONE knows - not even AI designer Ilya Sutskever, who first put deep learning on GPUs.
But if AGI DOES arrive, we are in a different kind of world. Something from "I, Robot". Because if we put it in robots and get them to build the Energy Transition, eco-cities, other Robots and the massive forests of the future - anything goes.
https://www.youtube.com/watch?v=sT6WfUZp8es
MARS: If AGI arrives, SpaceX could fire a few self-replicating factory ships at the asteroid belt, and the droids would build a whole civilisation supply chain in zero-g. Then future massive cargo ships would tug CO2 from Titan, delivering it at high delta-v to the Martian poles to generate heat and thicken the atmosphere. Armies of free labour would build all kinds of O'Neill cylinders for us to live in out in space, and armies of droids would gradually terraform Mars, preparing a new home for both us and a new biosphere.
IF AGI arrives. Again - we don't know. Rachel doesn't know. I don't know. Ilya doesn't know.
But wouldn't it be better to actually hear from these dreamers than scoff at what they're saying without hearing their argument first?
As we approach 1 trillion calculations per second by around 2029, it might just arrive.
Check it out.
https://www.youtube.com/watch?v=sT6WfUZp8es