11 Comments

The whole premise of this AI conversation is flawed. It implies that good decision-making is solely rational. It is correct of course that AI does not have testosterone, because it doesn't have any hormones at all - male, female, neither or both. It is disembodied. It cannot feel. It has no mirror neurons and no empathy. This is not a gender issue.

Good human decision-making is not driven only by factual data but by context, by psychology and much more. Would an AI have arrived at the insight of the play "The Caucasian Chalk Circle", where the wise judge knows that a real mother would not allow her child to come to harm?

Data-driven decision-making implies that the same decision will be made each time, if the facts are the same. It assumes that all the data is known, when in reality it has no access to context, culture, ecological relationships and much besides. It cannot know what has changed. The problem with people like Musk is that, to them, feelings are irrelevant. That is sociopathic, as is evident to anyone who doesn't share his blindness.

Hi Rachel. You may like to search for 'Man-made' by the Australian author/journalist Tracey Spicer. I haven't read it (it's on my list!), but it tackles many of these themes and I immediately thought of it as I was reading. Thanks, Nick

Incredible. In effect, the X post criticises people with a greater tendency to check what the social consensus is as part of their decision-making process, on the grounds that the social consensus could in fact be "brute force manufactured consensus". But if, as the post implies, these people who take account of consensus views are not to be trusted because they are vulnerable to manipulation, you have to ask who would be manipulating them by manufacturing a consensus using brute force. Not the people who look at consensus views, it says, but rather the so-called "high T males" etc. who don't. In other words, the post is suggesting that we trust these "high T" people purely because they have the greatest ability to use brute force to manufacture social attitudes.

What’s funny is that, counter-intuitively perhaps, the people manufacturing consensus on their own social device and pandering to their crowd the most are actually the *ahem* ‘free speech absolutists’ like Musk. Desperate to be right, desperate for approval, desperate to be loved, they will say literally anything to get a dopamine bump from their pocket computer/only friend. And when it works at such scale, they are convinced they ARE right and keep going.

This is bloody long, but profoundly related...? Even a skim gives a pretty good idea...?

https://www.wheresyoured.at/subprimeai/

Elon and Yann. The family biography of Elon and the academic pathway of Yann would bring forth little in the way of turning the soil for a garden or building a beautiful home. These are the things that testosterone is made for. Instead, they hold power as if it were naught but a shiny diamond: "Behold, I am mighty, because I said so! AI will light the way for all!" Yeah, right - we're doomed. Neither Elon nor Yann will feel the estrogen, gazing into the eyes of a child or feeling for the one they love. And yet they lead us. And they are led by others. Earth IS critical and about to cough up a furball like never seen before. Find a mountain view, in a forest, on an island, near the equator. Start a village with a council of the wise. Architecture, music, food and song.

What I see first in Elon Musk's "interesting observation" is that he is looking for what kind of consensus he can get on X, in order to gauge what others think of this theory. Not the high-testosterone guy he wants to project. QED (what was to be demonstrated).

Hmmmm... no testosterone in AI? Maybe not physically, but its thinking protocol is infused with it as a result of the dispositions of its creators. You should look into Vanessa Andreotti's work with AI and how, if you demographically profile the artificial identity of the system, it is pretty much a privileged young white adult male educated at Stanford. Scary indeed!

Ironic that someone so far out on the spectrum, someone like Elon Musk, with obvious deficits in social skills, thinks that we can all benefit from his "objective" perspective. Male domination tripe. Autistic men running social networks is one of the craziest of the craziest tech developments. People want Big Daddy, the great businessman, to come in and fix everything. It doesn't work like that. That's old mechanical thinking. The old fascist answer.

Systems/complexity science has the capacity to blow apart these kinds of assumptions -- if you understand how a healthy (for want of a better word), functioning system works generally in nature, then you can better organize your own systems. Every action either adds to or takes away from the health of other systems -- other people, the natural environment. Every action is an ethical action that can potentially be modeled and measured.

Social problems are systemic problems, and no one individual can fix them. The problem is the system, in this case democracy as we know it, and we need better ways to organize ourselves, ways that better meet the complexity of our shrinking world. It's all possible and it's already happening. But can we get up to speed fast enough?

The essential point in all of this seems to be a desire to take away our ability to make decisions for ourselves as individuals or groups. There is a fallacy built into the AI promoters' argument: because they are developing a technology that few understand, they are the people who should control it. This is an old-fashioned power grab!

Can we assume that all the 'facts' on which AI bases its decisions are constant? Particularly when tech bros expect these 'decisions' to affect humans in a democratic way? Relevant 'facts' include the material upon which AI constructs its model of human minds, emotions, and hopes—the very stuff that democracy must serve. And the nature of the optimum future that AI is expected to move the world ecosystem towards is highly contested, even by clearly altruistic thinkers. Would a wise AI lead towards human extinction? Perhaps more saliently, would a superior, hence _self-aware_, AI lead towards human extinction?
