Will AI become God? That’s the wrong question.
It’s hard to know what to think about AI.
It’s easy to imagine a future in which chatbots and research assistants make almost everything we do faster and smarter. It’s equally easy to imagine a world in which those same tools take our jobs and upend society. Which is why, depending on who you ask, AI is either going to save the world or destroy it.
What are we to make of that uncertainty?
Jaron Lanier is a digital philosopher and the author of several bestselling books on technology. Among the many voices in this space, Lanier stands out. He’s been writing about AI for decades and he’s argued, somewhat controversially, that the way we talk about AI is both wrong and intentionally misleading.
I invited him onto The Gray Area for a series on AI because he’s uniquely positioned to speak both to the technological side of AI and to the human side. Lanier is a computer scientist who loves technology. But at his core, he’s a humanist who’s always thinking about what technologies are doing to us and how our understanding of these tools will inevitably determine how they’re used.
We talk about the questions we ought to be asking about AI at this moment, why we need a new business model for the internet, and how descriptive language can change how we think about these technologies — especially when that language treats AI as some kind of god-like entity.
As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.
This interview has been edited for length and clarity.
What do you mean when you say that the whole technical field of AI is “defined by an almost metaphysical assertion”?
The metaphysical assertion is that we are creating intelligence. Well, what is intelligence? Something human. The whole field was founded on Alan Turing’s thought experiment called the Turing test, where if you can fool a human into thinking you’ve made a human, then you might as well have made a human, because what other test could there be? Which is fair enough. On the other hand, what other scientific field — other than maybe supporting stage magicians — is entirely based on being able to fool people? I mean, it’s stupid. Fooling people in itself accomplishes nothing. There’s no productivity, there’s no insight, unless you’re studying the cognition of being fooled, of course.
There’s an alternative way to think about what we do with what we call AI, which is that there’s no new entity, there’s nothing intelligent there. What there is, is a new and, in my opinion, sometimes quite useful form of collaboration between people.
What’s the harm if we do think of it as a new kind of entity?
That’s a fair question. Who cares if somebody wants to think of it as a new type of person or even a new type of God or whatever? What’s wrong with that? Potentially nothing. People believe all kinds of things all the time.
But in the case of our technology, let me put it this way: if you are a mathematician or a scientist, you can do what you do in a kind of abstract way. You can say, “I’m furthering math, and in a way that’ll be true even if nobody else ever perceives that I’ve done it. I’ve written down this proof.” But that’s not true for technologists. Technologists only make sense if there’s a designated beneficiary. You have to make technology for someone, and as soon as you say the technology itself is a new someone, you stop making sense as a technologist.
If we make the mistake, which is now common, of insisting that AI is in fact some kind of god or creature or entity or oracle, rather than a tool, as you define it, the implication is that this would be a very consequential mistake, right?
That’s right. When you treat the technology as its own beneficiary, you miss a lot of opportunities to make it better. I see this in AI all the time. I see people saying, “Well, if we did this, it would pass the Turing test better, and if we did that, it would seem more like it was an independent mind.”
But those are all goals that are different from it being economically useful. They’re different from it being useful to any particular user. They’re just these weird, almost religious, ritual goals. So every time you’re devoting yourself to that, it means you’re not devoting yourself to making it better.
One example is that we’ve deliberately designed large-model AI to obscure the original human sources of the data that the AI is trained on to help create this illusion of the new entity. But when we do that, we make it harder to do quality control. We make it harder to do authentication and to detect malicious uses of the model because we can’t tell what the intent is, what data it’s drawing upon. We’re sort of willfully making ourselves blind in a way that we probably don’t really need to.
I really want to emphasize, from a metaphysical point of view, I can’t prove, and neither can anyone else, that a computer is alive or not, or conscious or not, or whatever. All that stuff is always going to be a matter of faith. That’s just the way it is. But what I can say is that this emphasis on trying to make the models seem like they’re freestanding new entities does blind us to some ways we could make them better.
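To make that point about provenance concrete, here is a minimal sketch in Python of what it could look like to keep sources attached to training data instead of discarding them. The TrainingExample record, its fields, and the filtering rule are hypothetical illustrations, not a description of how any real training pipeline works.

```python
from dataclasses import dataclass

# Hypothetical record: each training example keeps a pointer back to its
# human source instead of being folded anonymously into one big corpus.
@dataclass
class TrainingExample:
    text: str
    source_url: str   # where the text came from
    contributor: str  # the person or publication behind it
    license: str      # the terms it was shared under

def quality_filter(examples, allowed_licenses, blocked_sources):
    """With provenance attached, basic quality control and authentication
    become possible: drop disallowed licenses and known-bad sources."""
    return [
        ex for ex in examples
        if ex.license in allowed_licenses and ex.source_url not in blocked_sources
    ]

corpus = [
    TrainingExample("A post about gardening.", "https://example.org/a", "Alice", "CC-BY"),
    TrainingExample("Scraped text, unknown terms.", "https://example.net/b", "unknown", "unknown"),
]
print(quality_filter(corpus, allowed_licenses={"CC-BY"}, blocked_sources=set()))
```

This is just bookkeeping, but it is the kind of bookkeeping that gets lost when the human sources of the data are deliberately obscured.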
So does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?
What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one, and that we’re just a disposable, temporary container for the birth of AI. I hear that opinion quite a lot.
Wait, that’s a real opinion held by real people?
Many, many people. Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a “bio baby” because as soon as you have a “bio baby,” you get the “mind virus” of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.
Now, in this particular case, this was a young man with a female partner who wanted a kid. And what I’m thinking is, this is just another variation of the very, very old story of young men attempting to put off the baby thing with their sexual partner as long as possible. So in a way, I think it’s not anything new; it’s just the old thing. But it’s a very common attitude, though not the dominant one.
I would say the dominant one is that the super AI will turn into this God thing that’ll save us and will either upload us to be immortal or solve all our problems and create superabundance at the very least. I have to say there’s a bit of an inverse proportion here between the people who directly work on making AI systems and the people adjacent to them: it’s mostly the adjacent ones who hold these various beliefs. My own opinion is that the people who are able to be skeptical and a little bored and dismissive of the technology they’re working on tend to improve it more than the people who worship it too much. I’ve seen that a lot in a lot of different things, not just computer science.
One thing I worry about is AI accelerating a trend that digital tech in general — and social media in particular — has already started, which is to pull us away from the physical world and encourage us to constantly perform versions of ourselves in the virtual world. And because of how it’s designed, it has this habit of reducing other people to crude avatars, which is why it’s so easy to be cruel and vicious online and why people who are on social media too much start to become unintelligible to each other. Do you worry about AI supercharging this stuff? Am I right to be thinking of AI as a potential accelerant of these trends?
It’s arguable, and actually consistent with the way the [AI] community speaks internally, to say that the algorithms that have been driving social media up to now are a form of AI, if that’s the term you wish to use. And what the algorithms do is attempt to predict human behavior based on the stimulus given to the human. By putting that in an adaptive loop, they hope to drive attention and an obsessive attachment to a platform. These algorithms can’t tell whether that attention is being driven by things we might think are positive or things we might think are negative.
I call this the life of the parity, this notion that you can’t tell if a bit is a one or a zero; it doesn’t matter, because it’s an arbitrary designation in a digital system. So if somebody’s getting attention by being a dick, that works just as well as if they’re offering lifesaving information or helping people improve themselves. But then the peaks that are good are really good, and I don’t want to deny that. I love dance culture on TikTok. Science bloggers on YouTube have achieved a level that’s astonishingly good, and so on. There are all these really, really good spots. But then overall, there’s this loss of truth and political paranoia and unnecessary confrontation between arbitrarily created cultural groups and so on, and that’s really doing damage.
So yeah, could better AI algorithms make that worse? Plausibly. It’s possible that it’s already bottomed out, and that even if the algorithms themselves get more sophisticated, they won’t really push it that much further.
But I actually think it can, and I’m worried about it, because we so much want to pass the Turing test and make people think our programs are people. We’re moving to this so-called agentic era, where it’s not just that you have a chat interface with the thing; the chat interface gets to know you over years at a time and gets a so-called personality and all this. And then the idea is that people fall in love with these. We’re already seeing examples of this here and there, and the prospect is a whole generation of young people falling in love with fake avatars. I mean, people talk about AI as if it’s just like this yeast in the air. It’s like, oh, AI will appear and people will fall in love with AI avatars. But it’s not like that. AI is always run by companies, so they’re going to be falling in love with something from Google or Meta or whatever.
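To illustrate the point about engagement loops being blind to whether attention comes from something positive or negative, here is a toy Python sketch of an attention-maximizing feed. The items, the click probabilities, and the scoring rule are all made up for illustration; no real platform’s algorithm is being described.

```python
import random

# Toy feed items. The ranker never looks at the "valence" field; it only
# learns from observed engagement, so provocation and helpful content are
# interchangeable as far as the loop is concerned.
items = [
    {"id": "outrage_post", "valence": "negative", "engagement": []},
    {"id": "health_tip",   "valence": "positive", "engagement": []},
]

def predicted_engagement(item):
    """Predict attention from past clicks alone, with no notion of good or bad."""
    history = item["engagement"]
    return sum(history) / len(history) if history else 0.5

def serve_and_observe(items, rounds=1000, explore=0.1):
    for _ in range(rounds):
        # Mostly show whichever item the model expects to hold attention best,
        # with a little random exploration so the estimates stay fresh.
        if random.random() < explore:
            chosen = random.choice(items)
        else:
            chosen = max(items, key=predicted_engagement)
        # Simulated users: in this toy world, provocation holds attention
        # slightly more often, so the adaptive loop drifts toward it.
        clicked = random.random() < (0.6 if chosen["valence"] == "negative" else 0.5)
        chosen["engagement"].append(1 if clicked else 0)

serve_and_observe(items)
for item in items:
    print(item["id"], "shown", len(item["engagement"]), "times")
```

In most runs the provocative item ends up with the bulk of the impressions, not because anything in the code prefers outrage, but because the loop only optimizes for whatever happens to hold attention.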
The advertising model was sort of the original sin of the internet in lots of ways. I’m wondering how we avoid repeating those mistakes with AI. How do we get it right this time? What’s a better model?
This question is the central question of our time, in my view. The central question of our time isn’t how we can scale AI further. That’s an important question, I get that, and most people are focused on it. And dealing with the climate is an important question. But in terms of our own survival, coming up with a business model for civilization that isn’t self-destructive is, in a way, our primary problem and challenge right now.
Because of the way we’re doing it. We went through this thing in the earlier phase of the internet of “information should be free,” and then the only business model left is paying for influence. So all of the platforms look free or very cheap to the user, but the real customer is actually the one trying to influence the user. And you end up with what’s essentially a stealthy form of manipulation being the central project of civilization.
We can only get away with that for so long. At some point, that bites us and we become too crazy to survive. So we must change the business model of civilization. How to get from here to there is a bit of a mystery, but I continue to work on it. I think we should incentivize people to put great data into the AI programs of the future. And I’d like people to be paid for data used by AI models and also to be celebrated and made visible and known. I think it’s just a big collaboration and our collaborators should be valued.
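There’s no settled way to do this yet, but as a purely hypothetical sketch of the idea: if some attribution method could score which training contributions most influenced a given output, the revenue from that output could be split among the people behind them. The contributions ledger and the influence scores below are invented for illustration; how to compute such scores at scale is exactly the open research question.

```python
from collections import defaultdict

# Hypothetical ledger: which person contributed which piece of training data.
contributions = {
    "doc1": "Alice",
    "doc2": "Bob",
    "doc3": "Alice",
}

def pay_contributors(influences, revenue):
    """Split the revenue from one AI output in proportion to (assumed)
    per-document influence scores, rolled up by contributor."""
    total = sum(influences.values())
    payouts = defaultdict(float)
    for doc_id, weight in influences.items():
        payouts[contributions[doc_id]] += revenue * weight / total
    return dict(payouts)

# Suppose an attribution method judged these documents most influential for
# one generated answer (the scores are made up).
influences = {"doc1": 0.5, "doc2": 0.3, "doc3": 0.2}
print(pay_contributors(influences, revenue=1.00))  # roughly {'Alice': 0.7, 'Bob': 0.3}
```

The same records would also support the visibility Lanier mentions: contributors could be named and credited alongside any payments.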
How easy would it be to do that? Do you think we can or will?
There are still some unsolved technical questions about how to do it. I’m very actively working on those, and I believe it’s doable. There’s a whole research community devoted to exactly that, distributed around the world. And I think it’ll make better models. Better data makes better models, and there are a lot of people who dispute that; they say, “No, it’s just better algorithms. We already have enough data for the rest of all time.” But I disagree with that.
I don’t think we’re the smartest people who will ever live, and there might be new creative things that happen in the future that we don’t foresee, and the models we’ve currently built might not extend into those things. Having some open system where people can contribute to new models in new ways is a more expansive, and just kind of spiritually optimistic, way of thinking about the deep future.
Is there a fear of yours, something you think we could get terribly wrong, that’s not currently something we hear much about?
God, I don’t even know where to start. One of the things I worry about is that we’re gradually moving education into an AI model, and the motivations for that are often very good, because in a lot of places on earth it’s just been impossible to come up with an economics that can support and train enough human teachers. And a lot of cultural issues in changing societies make it very, very hard to make schools that work, and so on. There are a lot of issues, and in theory, a self-adapting AI tutor could solve a lot of problems at a low cost.
But then the issue with that is, once again, creativity. How do you train people who learn in a system like that so that they’re able to step outside of what the system was trained on? There’s this funny way that you’re always retreading and recombining the training data in any AI system, and you can address that to a degree with constant fresh input and this and that. But I am a little worried about people being trained in a closed system that makes them a little less than they might otherwise have been, and gives them a little less faith in themselves.