Too much fun, I enjoyed that. : )
So interesting!
I sort of agree about the hype aspect of GenAI. I work in tech, and the push to use "tools" such as Copilot to generate code smells like fear of losing whatever edge there might be for whoever adopts this technology first. I'm avoiding it partly because I think it's not that useful, partly because I think it will actually make people worse at their jobs, but mostly because of the environmental impact that Ed Zitron highlights here.
I'm not so sanguine about AGI, however. Not because I have any deep technical insight about it, but because of the people who are deeply concerned about its potential, such as Daniel Schmachtenberger, Nick Bostrom and Tristan Harris. None of these minds, who have a good track record of understanding the dangers of exponential technology, have skin in the game, as far as I know.
Hello fellow code monkey 👋 I'm not familiar with the worries of Daniel et al about AGI, but I wonder if the concerns are more philosophical, theoretical, or practical.
I definitely feel philosophical unease about actively trying to create an AGI. It's driven by and driving a nationalist arms race, a winner-take-all mentality, religious fanaticism, capitalist hype, and techno-solutionism. All of those goals / mindsets are actively making the future worse.
The theoretical, which I suspect is where much of these thinkers' concerns lie, is also easy to connect with. If AGI developers succeed, it could be awful: summoning a greater-than-human intelligence that executes tasks at processor speed into a fragile civilization that is increasingly reliant on networked services is a horrible idea. Any misalignment between a "goal" set for the AGI and the needs of our civilization would be devastating.
On the practical front, which Ed and Rachel focus on during this conversation, LLMs are an extremely unlikely way to create an AGI. As mentioned, LLMs are a party trick: creatively applied statistics built on the largest collection of stolen data ever assembled. They have no capacity for developing their own desires, only for regurgitating what we ask from them a little bit better each time. To this point, Ed's pod recently covered how OpenAI is more tightly defining the term "AGI" over time so they can claim to have achieved it at some point. Not because they have the self-directed, near-all-knowing, fully independent AGI of sci-fi, but because they haven't found a way past the limits of what LLMs can deliver.
Barring a radical new approach to creating an AGI, the biggest threat of AI is already here: trust. The world economy has bet on AI as THE hyper-growth investment to such a degree that we have a catastrophically large bubble alongside increasing wealth disparity and an employment crisis; the glut of AI-generated content has eroded trust in online information and social interactions; and (perhaps most importantly) several governments have trusted AI to deliver on military goals that outweigh the social, economic, and environmental cost of training and using these models.
Hope this provided a little extra food for thought (from an avid reader, but no authority) on the topic :)
It was good to have a laugh and put things in proportion, even if that amounts to a lot of shitty stuff for the rest of us. However, like Tim says, I think there are uses of AI that are scary, and one of them is automated military equipment that decides, on the basis of an algorithm, whether to kill someone or blow something up. The technology is not that advanced - at least in terms of the use of algorithms - but it can still be used in ways that exercise power over others. The present is always more scary than the future.
It doesn’t make people happier - tech - that’s the biggest lie in human history.
I think money came first... but yeah
Loved this. I think the reason average people accept Musk etc. as geniuses is the celebrity effect, like a confirmation bias - would he be so rich and famous if he wasn't a genius? Therefore he must be a genius. As Ed points out, though, these people are geniuses at accumulating money and focusing only on the line going up.
The generative AI world is just throwing good money after bad. The original LLM breakthrough from a few years ago wowed so many people, but the exponential money (and energy, resources) that is needed for linear progression is nuts. The false promise of AGI just being over the horizon ... well it's a race to see what happens first: humans on Mars, artificial general intelligence, fully self-driving Teslas, quantum computers, small modular reactors, or a non-sociopath billionaire.
I soaked this all in, hoping it's not just confirming my suspicions but is an informed and prescient rundown of the reality.
"I've yet to meet someone whose life has been measurably improved by it [AI], other than someone raising money off the back of it."
Just wondering if someone could testify to the contrary?
Elon...madness on display, World Government Forum... The video posted today: https://www.youtube.com/watch?v=-4LOoxK4j4A
Thanks, wow, read the viewer comments. Amazing. Everyone has bought this hook, line and sinker. Another billionaire egomaniac is going to shut down government agencies (consumer protection, education, etc) and as Musk says, privatize them, to make things "more efficient"? Sure, the US gov is inefficient, but it is not profit-driven either, and for good reason! And the stream of people applauding this...just amazing.
https://www.facebook.com/reel/473264232407285
I heard Ed Zitron make more than one remark about the sex-realist position taken by Trump in his inaugural Executive Order. One remark was at 25:32: “I think America is going to be very dark for LGBTQ people, especially trans people and women”.
I wish you would challenge people when they say things like this. We have a problem in the West that the only politicians who seem to know what a woman is – or be willing to tell the truth about this – are the ones on the right.
I write about this on my blog, here, for example: https://roadlesstraveller.substack.com/p/progress-pride-flag-and-council-planning
We are at a pretty pass when the politician doing the most to defend the rights of women in the USA to single-sex spaces and sports is an otherwise misogynist-seeming man. But there we are.
What I’d like from you, Rachel, when you interview people who imply that so-called trans rights are under attack, is that you challenge them on the science of biology just as you would challenge them on the science of the environment when they speak falsely about that.
Sad to see that there has been no reply to my comment - taking a stand against anti-scientific wokey woo woo on gender is a matter that is critical to me - planet critical, if you will, given that it's why I and about 60 others have been thrown out of or suspended from the Green Party. See https://greensinexile.org.uk
I regret to say that I'm unsubscribing.
Great episode :) Melanie Rieback's YouTube series on post-growth business, and her motto "Capital is a distraction", comes to mind here! I especially liked her analogy for Silicon Valley's capital-fueled hypergrowth and race for market dominance, which she likens to throwing a dart from far away and praying you hit the bullseye.
https://www.youtube.com/watch?v=iVCPqQ0bZx0&list=PL14vcCXv7XVONAwzNv0ApYwZ5iepLzz3S&index=4
I'm not a tech person, and had never heard of Ed, but I have noticed a barrage of "help" coming my way every time I try to do simple things with my notebook, like look at a PDF ("Would you like to try our AI assistant here?"), write a document, use a spreadsheet... I did some looking, just a bit, and didn't find any takedowns of Ed's AI story. So maybe he's onto something.
Many businesses are loss-making at first, but this AI industry sounds like a next-level dotcom bubble. What surprises me, listening to Ed, is all the fear from the Existential Threat folks in academia who think AI is our number-one killer, real soon. Where are the real scientists?
His view of things seems plausible enough, even inevitable, given US rabid capitalism. Tech has gone crazy before, again, dotcom bubble, right? Despite all the "brains" in the room. I've never really trusted Silicon Valley brains, to be honest. Fancy gadgets, but gadgets nonetheless. Something to buy. Like "smart" phones turning everyone into zombies.
Help me empower firms to build their own open source platforms. www.culebrapartners.com
I am against all these Microsoft-like companies forcing our businesses onto their platforms. Companies should be moving from SaaS to PaaS. That’s our mission.
“This is insanely expensive, does not even come close… it’s not autonomous, it’s a generator.”
“They are acting in evil ways, destroying the environment,” harming people, burning millions of dollars to build something they aren’t building.
“The average person” who may think Elon is a genius is being sorely misled by the media.