Last week, Elon Musk tweeted a screenshot of a 2021 political theory posted by an anonymous user on 4chan, a website which often acts as an incubator for extremist, illogical thought. The theory posited that only a Republic ruled by “aneurotypical” and high-testosterone males would be a true democracy, because only these individuals can think freely, as only they can defend themselves physically, thus limiting their dependence on group consensus. Women and low-testosterone people depend on the group for survival and are thus, the user states, uninterested in understanding whether something is true, preferring instead to ask: “will others be ok with me thinking this is true?”
Replying to Elon, Yann LeCun, Professor at NYU and Chief AI Scientist at Meta, detailed at length why this theory was nonsense, citing studies in evolutionary psychology and political ideology. He took a few swipes at Elon’s physical and intellectual prowess, then finished with the assurance that “superhuman AI” won’t run into these problems.
The first thing that came to mind upon reading Yann’s post-script was: is Silicon Valley engineering governance AI that will do away with democracy? Personally, I haven’t consented to the development of a political AI which will make decisions on behalf of netizens all around the world, and I would certainly never consent to a Republic run by a computer rather than political representatives, no matter how serious the flaws in the current system. I can see the benefit of AI advisers, AI synthesisers, AI forecasters—but decision makers? No.

Yet, as AI engineer Deep Dhillon and I discussed on last week’s podcast, the problem with the exponentially accelerating development of technology is that many of us have no choice but to opt in just to maintain our lives in the minority world. It is increasingly difficult to navigate the external environment without the library of apps with which we bank, buy our tickets, find our way and check bus times—let alone stay connected to friends. Most corporate work these days would be impossible without a smartphone to field emails and alerts every second of the day. The treadmill we exist on is not just economic force; it is engineered to accelerate by technology. Just because I personally would not consent to an AI leader does not mean I would be insulated from such a future. These technologies are literally getting away from us, and we are forced to run behind them to stay connected and relevant.

This is the problem with such vast wealth and power inequity: a couple of venture capitalists can throw money at a start-up, and just a few years later my grandmother can’t get an appointment with a bank teller, and instead gets gifted an iPad for her 90th because it’s the only screen big enough for her to use the banking app she must now route all her requests through.
It is no longer impossible to imagine the same trajectory for our democracies, given the unrelenting cyber-attacks they endure from different parties, as outside influences throw bots at algorithms and political entrepreneurs sow disinformation online.
The second thing I stumbled over was the reference to AI being testosterone-free. In August, I thoroughly enjoyed the book Woman, by Natalie Angier, which gives a far-reaching and minutely detailed overview of the female body. In the chapter on hormones, Angier digs into the cultural myth that testosterone is the male hormone and oestrogen the female hormone, explaining that both sexes have both hormones, and that the differing amounts are not as marked as we like to think. In fact, she goes on to explain, it is often high androgen levels and malfunctioning androgen inhibitors which give female bodies intersex organs or masculine features. More myth-busting has been done on the effects of testosterone on human bodies since that book was published in 1999, and a 2019 meta-study found that the causal effects of testosterone on human aggression are weak. Fascinatingly, however, whatever slight effect testosterone had on aggression in men was absent in women: women’s levels of aggression were not impacted by changes in testosterone. If Yann’s argument is that a superhuman AI would be better than a human at governance because of its lack of testosterone, perhaps, before asking a giant computer to take over, we could give the sex that is impervious to testosterone a shot at making decisions.
Yann then goes on to say: “Whatever influence [AI] will have will be because of positive intellectual effects on society.” This also left me scratching my head. Technology is not morally agnostic, nor does it exist in a political vacuum. AI will have influence because the people profiting from it will increase their influence. It will have influence because this already very fast world will get faster, and it will become harder to keep up, forcing the vulnerable and the poor even further down the social ladder as they either cannot afford or cannot adapt to the latest technologies. And we don’t even have to look to hypotheticals and theoreticals. AI has been used to locate members of Hamas and launch rockets at family homes, indiscriminately killing tens of victims for every alleged terrorist. The results of intellectual problems and questions are often not intellectual themselves. They are material. It seems wildly naive to assume that a tool unleashed in a profit-driven economic system with increasing power and wealth inequality would somehow work only for the benefit of society as a whole, let alone a tool coded with potential bias. Society as a whole is deeply fractured, its fault lines extending beyond political debate. The majority are caught in the crossfire of the warring interests of those with access to resources. In what world would those people build the very tool to dethrone the illusion of their seat of power?
Finally, Yann concludes that AI would not have “a desire to dominate”. I can imagine that being true, but aren’t we anthropomorphising a complex algorithm? Would it have desires? Or would it have solutions for problems? Answers to questions? Surely a thing without desire would only do as it was asked to do, thus maintaining power at someone’s, or a small group’s, keystrokes? Or perhaps it would have desires, desires we couldn’t possibly foresee, and one of those would be to dominate? Or perhaps to not exist at all? Or perhaps to truly do good? Or perhaps it would become bored, and cause problems just to have something new to solve?
I can agree with Yann on one thing, that it looks like Elon has gone “full-throttle sexist”. Beyond that, I’d love to understand why these men think they will know what superhuman machines think when we barely understand what we ourselves think, let alone our neighbours.
Hi Rachel. You may like to search for 'Man-made' by the Australian author/journalist Tracey Spicer. I haven't read it (it's on my list!), but it tackles many of these themes and I immediately thought of it as I was reading. Thanks, Nick
The whole premise of this AI conversation is flawed. It implies that good decision-making is solely rational. It is correct of course that AI does not have testosterone, because it doesn't have any hormones at all - male, female, neither or both. It is disembodied. It cannot feel. It has no mirror neurons and no empathy. This is not a gender issue.
Good human decision-making is not driven only by factual data but by context, by psychology and much more. Would an AI have come up with the understanding of the play "Caucasian Chalk Circle", where the wise ruler knows that a real mother would not allow her child to come to harm?
Data-driven decision-making implies that the same decision can be made each time, if the facts are the same. It assumes that all the data is known, when it doesn't have access to context, culture, ecological relationships and much besides. It cannot know what has changed. The problem with people like Musk is that, to them, feelings are irrelevant. That is sociopathic, as is evident to anyone who doesn't share his blindness.