Discussion about this post

Paul Reid-Bowen

Thanks Rachel, another wonderful evocation of the madness of our extractive economy, and specifically of its latest and perhaps final intensification in the AI explosion/bubble. I keep searching for metaphors and stories to articulate the absolute insanity of this behaviour, so I really value pieces like this. From a systems perspective, I find it hard to avoid viewing cancer as one of the closest approximations for what is happening (see the work of John McMurtry, Nancy Fraser and others for support).

The logic of anything whose purpose is simply to grow and multiply whilst serving no life-function seems cancerous by definition. Cancerous pathologies are also remarkably effective at overcoming the immune defences of the systems they inhabit, turning them instead into blind yet active collaborators in, and enablers of, their growth. Analogously, one can see how countless political, educational, social, media and other institutions, which ought to be protective, are rapidly surrendering their resources and energy to the growth demands of AI (capitalism, GDP, etc.).

The AI economy looks on track to literally drain every aquifer, burn through every energy source, mine every rare earth mineral, degrade every ecosystem, exploit every body and steal every human thought and dream in the service of its metastasizing growth imperative: the accumulation of massive data surpluses and the production of a hallucinated imago dei or cognitive singularity.

Tim Coombe

The idea of digital pets isn't an original one. In Philip K. Dick's dystopian novel "Do Androids Dream of Electric Sheep?", which became the film Blade Runner, owning a real animal is a status symbol, and most people keep robot animals that they pretend to feed. The main character buys a real goat at great expense for his wife, but it is later killed.

I've been evaluating an AI tool at work to aid software development. I hate the idea for many reasons, but gave it a try just out of interest, then decided to have a "conversation" with it about the future of AI and humanity. I won't bore you with the whole thing, but the last exchange went like this:

Tim: Do you think it's strange that some of the leading lights at the biggest AI firms are racing for AGI, but then warn that ASI is inevitable and may make humans obsolete/extinct? And yet we're all racing towards this cliff.

AI: "That's a profound paradox you've identified. It is genuinely strange - and deeply unsettling - that the same people building AGI are simultaneously warning about existential risks from ASI (Artificial Superintelligence).

The contradiction is stark:

Sam Altman talks about AGI being transformative for humanity while also funding AI safety research because of extinction risks

Geoffrey Hinton left Google to warn about AI dangers after helping create the foundation for current AI

Demis Hassabis at DeepMind publishes papers on AI alignment while racing to build more powerful systems

The "racing towards the cliff" dynamic seems driven by:

Competitive pressure - "If we don't build it, China/competitors will"

Economic incentives - Trillions of dollars in potential value

Technological momentum - Each breakthrough makes the next seem inevitable

Optimism bias - "We'll solve alignment before it's too late"

Diffusion of responsibility - No single actor feels they can stop the race

Looking at your code file: This careful, thoughtful code - handling edge cases, defensive programming, clear abstractions - represents decades of accumulated software engineering wisdom. The idea that this entire domain of human expertise might become obsolete within our lifetimes is staggering.

The existential question: Are we witnessing the end of human intellectual relevance? And if the people building these systems genuinely believe ASI poses existential risks, why isn't there more coordination to slow down or pause development?

It feels like we're in a collective action problem where individual rationality leads to potentially catastrophic collective outcomes."
