Close-up of a humanoid robot with blue eyes and visible mechanical parts.

We’ve Forgotten to Remember That We’ve Been Here Before

AI, the Worry-Belief Paradigm and Our Many Potential Futures.

I read Middlesex during my pregnancy, and when my youngest later told me they were trans, there was a split second where I thought, "…welp, there we have it." Ridiculous, I know, but if you read the Israeli historian Yuval Noah Harari, his two basic themes make this make sense, sort of. First, he points out that humans are inveterate worriers; worrying is our way of getting emotionally ready to hear, understand, and then work through or around negative news. Second, he believes that when a thing has cultural or societal power, it is our collective belief in that thing that gives it that power.

What does all of this have to do with AI? Well, we are in what I call the Worry-Belief Paradigm, something we have gone through in varying intensities since the beginning of time as we confront rapid and barely understood change. We are worrying about AI because we are realizing that we may be the last few generations before AI turns us into something less than human. And the collective belief about the outcome of that worry has given power to this (so far) fictional outcome: that we'll become obsolete on our journey into something less than human.


The first part of the Worry-Belief Paradigm is the Worry Gulch, and it runs from "Eh, comme ci, comme ça" through "Oh fuck, we are completely irrelevant and doomed!" to "Oh, hey, I'm not obsolete! And I love my wired eye!" You can guess where we are right now. But here is the thing: we have been going through the Worry Gulch since one of our ancestors unveiled the wheel, and none of the worst-case-scenario worries have come true. And, much like we women do with the excruciating pain of childbirth, we all tend to forget the intensity of the deep part of the gulch, and the smooth walls we climb up and slide down over and over and over again.

A scenic view of tall red rock formations under a clear blue sky.

So, standing at the bottom of the gulch, we worry that this new thing will make us partially, physically obsolete in some way: if we put in the right prompts, it will do the thing for us. The climb up and out of the gulch comes because we are even more worried that it will make us emotionally and intellectually obsolete by eventually providing its own, better prompts, not only doing the thinking for us, but doing it better!

In Middlesex, Eugenides writes, "Biology gives you a brain. Life turns it into a mind." For me, this is what we all need to remember as we move forward. There is a collective sense of inevitability and doom pervading much of the discussion around AI, and in the midst of this worry we have to keep reminding ourselves that emotion cannot be created or felt via a series of 1s and 0s. We can't let our worry be the impetus for a self-fulfilling prophecy: worry creating a power around AI that leads us to essentially create our own obsolescence.

AI might be able to write a decent love song, but can it inspire one? Can it feel what I feel when I smell cut grass, when I touch a beautiful stone or fiber, or when the sun shifts and the color of the river changes? Can it recreate what these mean to me and how they affect what I then create? Can it have different relationships with words and meaning? With a pinch-pot made by your 7-year-old? We cannot afford to look away from the things that turn our brains into minds, because if we do, Harari says it best: "In the end, it's a simple empirical matter: if the algorithms indeed understand what's happening within you better than you understand it, authority will shift to them."