Sep 20, 2023

AI Safety Panel

AI Safety Panel with Elon Musk, Max Tegmark, Greg Brockman, and Benjamin Netanyahu Transcript

AI Safety Panel with Elon Musk, Max Tegmark, Greg Brockman, and Benjamin Netanyahu. Read the transcript here.

 

Elon Musk (00:00):

Some great guests here. I think you guys know me and obviously the Prime Minister, but if you guys could introduce yourselves and say a bit about you, and you’ve both done amazing things, so please don’t be shy.

Greg Brockman (00:12):

Sure thing. So I’m Greg Brockman. I’m the president and co-founder of OpenAI. We started OpenAI out of my apartment in 2015, 2016. So yeah, it’s been a really exciting time to be able to help not just move this field forward, but try to help steer it in a positive direction. I think that’s really foundational to how I think about AI. What got me excited about this field is the fact that it’s going to be so transformative, and I think it can benefit everyone in huge ways, but we also really need to mitigate the downsides. And so that’s something that really resonated with me about the earlier conversation, and I hope we’ll talk about it a lot in the upcoming minutes.

Max Tegmark (00:46):

And since Greg was too modest to say it, I’m going to add something to your introduction of yourself there. Just a few years ago, when Greg and his colleagues were saying, “In a few years, we believe we can actually build AI that can master human language and common knowledge,” a lot of my colleagues in AI were like, “That’s crazy science fiction. It’s going to take decades.” But you did it.

Greg Brockman (01:12):

We literally had an intern in 2016, our very first summer, with whom we had this conversation about how to solve the Turing test. And we had all these ideas and whatever, and we were like, all right, in three years we’ll solve the Turing test. And that intern told me, “I’m out. I’m not joining full-time. You guys are crazy.” And three years later, we had GPT-2, so we didn’t quite solve the Turing test. So he was right. But we had far more progress than I think most people thought possible.

Max Tegmark (01:36):

So I’m Max Tegmark. I’m a professor doing AI research at MIT, and I think what we heard earlier here from you was spot on: that artificial intelligence will be both a blessing and a curse. For a long time, I’ve been very excited about the blessing part, how we can use it to solve all of the problems that we haven’t been smart enough to solve before, and go far beyond that, and help life flourish for billions of years on earth and beyond. But I’m also very concerned about the curse part. So nine years ago, I founded this nonprofit called the Future of Life Institute together with Elon. We launched the first-ever attempt to mainstream AI safety, with a worldwide grants program, education, conferences and so on, which you, Greg, also participated in, to make sure we can steer this ever-more-powerful technology away from the curse and towards the blessing.

Benjamin Netanyahu (02:31):

Well, can I say that I want to plug this book. You wrote this book and I read it.

Elon Musk (02:36):

It’s a good book.

Benjamin Netanyahu (02:37):

It’s a very good book. Was it ChatGPT or?

Max Tegmark (02:40):

I have an alibi because not even Greg was fast enough to have it written by the time the book-

Benjamin Netanyahu (02:46):

ChatGPT ain’t good enough yet to do this, but from what I hear you guys saying, it’s good. Yeah, give it a minute. 30 seconds. Well, you see, let’s talk about that, because I think that the kind of things that you’ve written here are monumental. When I see the talent on this stage, excluding myself, I’m reminded of what JFK, John Kennedy, said when he brought his gifted team into the White House. He said, “This is the most gifted team that has sat here since Thomas Jefferson dined alone.” So I think you’re about to surpass that, and you are changing history, and surely with such power comes enormous responsibility. And I think that’s the crux of what we’re talking about here: how do we inject a measure of responsibility and ethics into this exponentially accelerating development? How do you do that? That’s really what I’d like to know from you.

Elon Musk (03:39):

I think this should be a very interesting discussion. I know almost everyone in AI, and I really can’t think of anyone in AI who’s got bad motivations. Everyone I know in AI has good motivations. They want to create technology that helps the world. But one could also think of someone like Einstein, who was obviously a very peace-loving person, and he didn’t think that there would be a nuclear bomb created as a result of his discoveries in physics.

(04:09)
He wasn’t thinking, let’s make super weapons. He was thinking, let’s try to understand the truth of the universe. So I always want to preface any criticism of AI development with it not being a criticism of the people doing it. But just as Einstein didn’t expect his work in physics to lead to nuclear weapons, we need to be cautious that even with the best of intentions, we could create something bad. That’s one of the possible outcomes. And we want to try to take whatever actions we can to ensure that the future of humanity is good and that AI is much more of a blessing than a curse.

Greg Brockman (04:49):

That’s definitely why I work on it. I think that-

Elon Musk (04:52):

Not for evil?

Greg Brockman (04:53):

It turns out, but I think that balance is really key. And I think that one of the things that is important is to really calibrate the choices with respect to the technology, right? Because this technology has this way of being very surprising. You read Asimov, you read all the science fiction, you read Eliezer Yudkowsky, and all this thinking people have done about very smart machines that are smarter than humans, and somehow none of it really predicted GPT-3. Somehow, the technology that we’re building is just surprising. It has very different properties. Think about chess: people used to think that if we solve chess, we solve human intelligence, and it turns out that’s the first thing to go.

(05:31)
And so I think that what we’re really seeing is that we need to be calibrated and see not just what the current systems are doing. And this is one of the reasons we deployed ChatGPT, is it’s just really important to not get in our heads and think, oh, this is how it’s all going to play out. This is how people are going to misuse it. But you need to actually see that. You need to actually have people spending a lot of time thinking about where this slots in, where it works, where it doesn’t, but also to really try to peer ahead. And so we’ve been putting a lot of effort into capability prediction.

(05:55)
One of the things that I think was undersung in our GPT-4 release is that we actually had very good predictions of exactly how good GPT-4 would be before we even trained it. And that’s something we could never do with previous models. And I think that this, and doing it at societal scale, is one answer for how we actually get this right. Because if you can see around the corner, then you have an answer. And in AI, it’s always been the opposite. We’ve always been taken by surprise.
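The kind of capability prediction Greg describes is usually done with scaling laws: fit a power law to results from smaller training runs, then extrapolate to the target compute budget. A minimal sketch, with made-up numbers and a generic power-law form (the panel does not describe OpenAI’s actual method):

```python
import numpy as np
from scipy.optimize import curve_fit

# Scaling-law extrapolation sketch: loss(C) = a * C^(-b) + c,
# parameterized by log10(C) for numerical stability.
def power_law(log10_compute, a, b, c):
    return a * 10.0 ** (-b * log10_compute) + c

# Hypothetical pilot runs at increasing training compute (FLOPs):
log10_compute = np.array([18.0, 19.0, 20.0, 21.0])
eval_loss     = np.array([3.10, 2.55, 2.12, 1.78])  # made-up losses

# Fit the curve to the small runs, then predict the big run before training it.
params, _ = curve_fit(power_law, log10_compute, eval_loss,
                      p0=(30.0, 0.06, 1.0), maxfev=20000)
print(f"predicted loss at 1e23 FLOPs: {power_law(23.0, *params):.2f}")
```

If the fitted curve holds, the large run’s loss is known before any of its compute is spent, which is the “seeing around the corner” Greg refers to.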

Benjamin Netanyahu (06:18):

Well, I want to ask you because there are sort of layers of questions here. Max’s book takes you to the existential question of whether you project basically machine intelligence or human intelligence into the cosmos. Human intelligence turned into machine intelligence, into the cosmos, and so on. That’s a big philosophical question. I’d like to think we have about six years for that. But the other questions that I’m dealing with as a leader of a country that is blessed with a lot of AI talent, rated about number two in talent relative to population size in AI.

(06:53)
But I want it to be among the top three, because it’s important to advance the country, but I think it also raises questions that we raised before. But here’s another one. Okay, as ChatGPT-4 or ChatGPT-8 comes into being, and you already have Hollywood strikes about screenwriters, and directors, and actors, okay. What do you think will happen to the job market? I mean, we had this conversation; what do you think will happen? You can comfort me that in all previous revolutions there were more jobs, many more jobs, created than lost, but do you really believe that now?

Greg Brockman (07:28):

Yeah, I think it’s far from obvious. I think it is a correct question. And I also think that it’s important to look at the whole picture: for sure, there are jobs that are going to go away. What’s going to be the balance of marginal job creation? And you can tell the comforting story, oh, look at every previous technology. I do think AI is not like every previous technology. Usually, we are seeing automation of the mechanical parts of creation, and here, it’s almost the creative aspect of creation. But I think that we want to go even one step further from where we’ve been and not just say, okay, I could have been an artist, and now an AI is a competing artist. You want AI that can help us come up with the ideas and solve problems that we just couldn’t before. And so what does that world look like? And in your post-scarcity world, where people don’t have to work to survive, what happens to people? Because people identify themselves by their work. Your whole identity is around your work.

Benjamin Netanyahu (08:18):

You identify yourself by your job.

Greg Brockman (08:20):

Yes, exactly. And so what will happen then, and what does that look like? And so I don’t come with an answer to this question of exactly what the balance will be. I think there will be fundamental changes in how we even relate to work and to meaning, and that may result in a landscape that just looks very different. The answer to that question is like if you were a hunter-gatherer and someone asked, what’s going to happen to our hunter-gatherer jobs post-agricultural revolution? It’s just a weird question. The framework changes on you. And that’s not a cop-out. I think it might be the case that, actually, many more jobs go away than get created, and there’s a lot of chaos and turmoil. But I think we’ve got to be ahead of it. And the more that we can predict it, the more that we can have concrete answers. And we have teams that try, but I think that we’re just getting started, and we need help.

Benjamin Netanyahu (09:04):

Well, it’ll change our economic models. Okay, I’m a clear free market guy. I liberated, helped liberate, the Israeli economy to make it a sort of technological powerhouse. But that’s because of free markets. And free markets mean open competition, and it’s fine, but you stop at the monopolies’ edge. Okay, well, you do that. I had a conversation with one of your colleagues, Peter Thiel, and he said to me, “Oh, it’s all scale advantages. It’s all monopolies.” I said, well, yeah, I believe that, but I think we have to stop it at a certain point because we don’t want to suppress competition. Well, AI is producing all this, and you have these trillion-dollar companies that are produced, what, overnight, and they concentrate enormous wealth and power in a smaller and smaller number of people. And the question is, what do you do about that monopoly power?

(09:55)
What do you do about competition? And if you are going to cannibalize a lot more jobs than you create, then we have to change the structure of our economic policies and our political policies to take care of the people who are not going to find jobs, who are not going to contribute added value to the economy. We have to make sure that they have a living, a decent one, and that they have all the services. We’ll probably have the money to do that, but it requires a challenging model, certainly for a free market guy, a free market disciple like me. And that’s coming very fast. It’s not going to come slowly, where we’d have time to adjust.

Max Tegmark (10:29):

I like how you challenged the question, what’s going to happen? I’d like to continue on that theme, because I think if we ask this very passive kind of question, what’s going to happen to us? Let’s just sit here and eat popcorn and wait for the future to happen. Then we’re heading straight for the curse, I think. It’s a very passive approach. We’ve done this experiment before, when a new species showed up on the planet that was smarter than all the other ones. And what happened was the Neanderthals went extinct. If we just go into this very passively, I think it’s very likely humans will go extinct also. I think that’s the wrong attitude, even in economics. When you run a country, you don’t just sit back and wait to see what happens and get child labor and the super monopoly of doom.

(11:15)
No, you ask yourself, what do you want to happen? And then you put in place institutions: you ban child labor, you put in antitrust laws. And in this case, I think the analogous question is: how do we make sure that with this ever-more powerful tech, it’s we who control the AI rather than the other way around? So I was honored that you had this book, if this book still hasn’t put you to sleep. This is a technical nerd paper I wrote with my-

Benjamin Netanyahu (11:40):

This one’s written with-

Max Tegmark (11:41):

… technological approach to how we can control machines that are even smarter than us by actually having other AIs formally prove that the AI is safe. And the reason why I’m so excited about the proving part is the only way you can trust something much smarter than you to still do what you want is if you can prove it. Because no matter how clever it is, it can never do what’s provably impossible. And it turns out it’s much harder to come up with proofs about math or about what a program is going to do than it is to check that the proof is correct.

(12:12)
You can write a 300-line computer program that will check any proof claiming anything about anything else, and we can understand that program. So if we force the AI to prove to us humans that it’s going to meet our specifications, then we actually have a way to control things that are smarter than us.
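To make the asymmetry concrete: in a proof assistant, a small trusted kernel mechanically checks every proof step, no matter how clever the prover that found it. A toy illustration in Lean (our example; the panel names no specific proof system):

```lean
-- Finding this proof may take insight, but checking it is mechanical:
-- Lean's small trusted kernel just replays the inference steps.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- The same kernel checks arbitrarily hard theorems the same way,
-- which is why a checker can be far simpler than the prover it audits.
```

The point Max is making is that this checker stays small and auditable even when the system producing the proofs is far smarter than we are.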

Benjamin Netanyahu (12:32):

How do you know that the proof is not disputable?

Max Tegmark (12:34):

That’s the beauty of mathematics. You can actually check it, even if it’s a brilliant proof that I couldn’t have come up with myself: I can see, I just plug this into this and this, and it checks out. And there’s a big science of doing this, proving things about programs, for cybersecurity, because companies are so fed up with losing millions of dollars by getting hacked all the time because yet another bug was found in secure shell or whatever. There’s been years of investment.

(13:00)
And Greg and I had a fun conversation actually just before we went on stage here about how this field of trying to prove that things are safe is still in the Stone Age, in the pre-large-language-model phase, where it’s mostly humans trying to make the proofs. I think it’s ripe for a revolution where the kind of technology that OpenAI is building can totally turbocharge our ability to prove things about code.

Greg Brockman (13:26):

And I think there’s a bigger picture too, because in my mind if you think about cybersecurity, it’s always a cat and mouse game. You have attackers, you have defenders, each one’s leveling up, defeating the other. Is there ever going to be a time when the defenders totally win? Is there going to be a time when the attackers totally win? And I think with AI that balance can change and I think that formal verification is an example where maybe you can have defenders just win once and for all.

(13:48)
And I think there are a lot of asterisks there. It’s definitely a maybe. But at least at a technical level, formal verification has been this idea that if you could mathematically prove these systems are safe and good, and that you don’t have bugs in your Linux kernel, then all of these bugs we’ve seen, we were talking about Heartbleed and Shellshock and all these bugs from the past 10 years, those would be solved. Those would go away. You would know there’d be no future one of those. Now, you need to be very careful, because you need to make sure that your underlying machine model is actually correct, and a bunch of details like that. But it’s just interesting to think that these fundamental things we believe about how security works might just be upended, and we’ve never had an opportunity like that before.

Benjamin Netanyahu (14:23):

That’ll give us a lot of free time.

Greg Brockman (14:25):

Exactly.

Max Tegmark (14:25):

And it’s worth adding, for the nerds listening to this who like to code: normally, all debugging can ever accomplish is proving that there is something wrong. If you don’t find any bugs, you never know whether there’s one more or not. When you prove it, when it’s what’s called formal verification, as Greg said, that means-

Benjamin Netanyahu (14:45):

You seal the deal.

Max Tegmark (14:46):

… you’re guaranteed,

Benjamin Netanyahu (14:47):

You seal the deal.

Max Tegmark (14:49):

Seal the deal.

Benjamin Netanyahu (14:49):

All right. That assumes by the way that mathematics is, that’s it. It’s the law of the universe. It’s not going to change. You’re not going to get alien mathematics.

Max Tegmark (15:00):

You don’t need to assume that actually.

Benjamin Netanyahu (15:01):

Well, you sort of do.

Max Tegmark (15:02):

You get into Gödel’s incompleteness theorem and things like that; that comes in in a big way when you’re dealing with infinities of various kinds. But all the computers, even Greg’s very biggest servers, are all finite.

Greg Brockman (15:12):

I would say you still got to watch out for the alien mathematics.

Max Tegmark (15:14):

But you can totally formalize what it means that it’s impossible to hack into his laptop. And if you have a rigorous proof, I think that seals the deal.

Benjamin Netanyahu (15:23):

So I think that beyond these enormous questions that you raise, I think for me the immediate problem is to fashion a national policy, which I intend to announce in a few months. And you guys are stimulating me, I have to say, this visit. But you sort of think, okay, what is it that you are trying to achieve? You’re trying to better all the systems, transportation, health, education, agriculture, production, everything, for government, for the private sector, for philanthropy, for academia and so on. And what is the role of the government in this? And various countries are seeking solutions to this. What do you do? What kind of instruments do you build? I think the larger question is you still have the monopolization of AI power in small concentrations because of what you said before, Elon. You said basically it’s too big. You need the big data, you need the big computational power, you need the talent for creating the algorithms, and that’s concentrated in a few countries.

(16:25)
We have another test: not only to make sure that we take care of ourselves, but how we have these benefits percolate to the rest, the scientific and technological have-nots. And I think that’s another real issue. But so far, the advances that we’ve made, at least in Israel, anything from the DiskOnKey, remember that? We used to have something like that. Or a drug, or the camera pill that goes into your intestine, or any other thing, from ICQ to cherry tomatoes, that benefits everybody. Drip irrigation. In AI, it seems to me that you’re going to have a concentration of power that will create a bigger and bigger distance between the haves and the have-nots. And that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea how to overcome that?

Greg Brockman (17:14):

Well, first of all, I want to say, just on the talent density front, we have a number of Israeli engineers, scientists, managers, and they are top-notch, no question. We have a manager who runs one of our core teams who is ex-Special Forces, and there’s nothing that fazes him. So I think there’s something about the Israeli way that is actually very conducive to this field. And I think it is really important to counteract this rich-get-richer phenomenon if it comes at the expense of everyone else. The world we should shoot for is one where all the boats are rising. There will be some inequities in the landscape, and that’s something to address, something that leaders such as yourself need to be very cognizant of. We’ve talked about UBI; OpenAI actually has run, I think, the world’s most comprehensive UBI study. It’s something people talk about, universal basic income: the idea that the government just distributes money. That’s just what it does.

(18:05)
If most people are not working, not producing dollars themselves, and there’s a world of plenty, is that a mechanism that can work? Maybe it can, maybe it can’t. There are a lot of assumptions that would go into a world where it works, and it clearly also has a lot of downsides. But I think that we need to have creative solutions. Again, just like defense versus offense might fundamentally change, economic production, and what it takes to actually build even a stable society, is I think going to fundamentally change.

Benjamin Netanyahu (18:33):

Well, Elon, you said, when I asked you this question in one of our nighttime conversations, I asked you, “Well, what are you going to do when people don’t have jobs and we shell out money for them?” And you said, I think you said, “Oh, what’s bad in living in paradise, right?” I think you said that.

Elon Musk (18:48):

Yeah. This is a question of the concept of heaven. Now, in heaven, I don’t think you need to work. I haven’t read that anywhere.

Benjamin Netanyahu (18:58):

They’re not laughing.

Elon Musk (18:59):

Feel free to, if you want. So the very positive scenario of AI is actually in a lot of ways the description of heaven, in that really nobody would need to work. I wouldn’t even call it universal basic income. I’d say it’s probably universal high income. I’m describing the best-case scenario here. So I’m not saying this is definitely what’ll occur. There’s a range of scenarios from very negative to very positive. The very positive scenario basically sounds like heaven. You can have whatever you want. You don’t need to work, you have no obligations. Any illness you have can be cured. And-

Benjamin Netanyahu (19:31):

When do we die?

Elon Musk (19:32):

Well, that’s a good question.

Greg Brockman (19:33):

I think it’ll be a choice.

Elon Musk (19:34):

I think it probably ends up being somewhat of a choice.

Benjamin Netanyahu (19:37):

You want this world?

Elon Musk (19:38):

Well, I’m not saying… I don’t know, there’s sort of a philosophical debate: is the concept of heaven as it is normally described actually what you want? Do you want to be in a future, in the AI sense, where AI could do everything better than you by far? Because then you have Man’s Search for Meaning type of questions about good work. Like I said, if AI can do everything better than you, do we live a life of hedonism? I don’t know. It’s hard to say. Where things are headed in a positive scenario is that there would be no scarcity of goods and services. The robots and computers can make as much of whatever you want, for as many people as you want, with really no limitations. Any scarcity you have would be artificial. Meaning, if you define that a particular artwork done by a human is unique and special, then it’s artificial scarcity, but there will be no actual scarcity unless we define it arbitrarily, just define something to be scarce.

(20:38)
So I think for a lot of people that is a good future. I might be personally a bit frustrated because I’m like, “Well, what am I supposed to do now?”

Benjamin Netanyahu (20:46):

You’ll think of something.

Max Tegmark (20:48):

I think this is a very interesting set of challenges: what would we do if people have no jobs? But I think it’s important to remember that we’re not automatically going to end up in this situation in the way you’re thinking about it now. We could also, I think very likely, end up in a situation where people have no jobs because there are no people anymore, where we literally go extinct. We had a statement that came out in May, that I think Sam Altman and you both signed, and Ursula von der Leyen came out with it a couple of days ago, saying we have to take human extinction seriously. And if this sounds like very crazy science fiction, well, we were joking, but just a few years ago, all your success sounded like science fiction too. And you said that we don’t have a lot of time.

(21:30)
These are things that could happen relatively soon. And it’s very important to me, because I want the good future, to think about how we steer away from those sorts of scenarios. One way in which we could end up in this kind of bad future is pretty obvious. The idea that we can lose control to something more intelligent is not new at all. Alan Turing himself said this in the fifties; that’s kind of the natural thing to expect. That’s what always happened on our planet before. When a new, smarter species came in, it took control over the older ones. And then bye-bye Neanderthals, bye-bye mammoths, whatever. More concretely, even though Hollywood is full of scenarios where someone somehow loses control of AI and it takes over, there are sneakier ways in which we could end up in this sort of bad state even more willingly.

(22:21)
So for example, I’m a big fan of the free market too, but the fact is that if people are competing against other people, they will often cede control voluntarily to machines to get an edge on the other guys. So companies that replace workers with machines out-compete companies that don’t; armies that cede control to machines out-compete armies that don’t. Once we have really powerful AI that’s better at running companies and countries than humans, then countries that replace their prime ministers with AI might out-compete others. And then we end up in this future where all these things are happening, but it’s not our future anymore. And we’re like, wait a minute, where did we go wrong here? It was supposed to be for us.

Benjamin Netanyahu (23:03):

But that’s the point. Forgive me for borrowing a page from your book, literally a page from your book. I’m confessing, I’m a speciesist. Okay?

Greg Brockman (23:10):

Hard not to.

Benjamin Netanyahu (23:12):

Okay, so I’m for the human species. We can talk about other evolutions and machine-launched intelligence and so on, but I’m for… In the Jewish tradition, there’s a saying, how do I translate this? The impoverished of your own city, of your own town, precede the impoverished of other towns. So the human species, as far as I’m concerned, is something that I’m vitally interested in maintaining. I’ll tell you, the problem we have is not really extinction, which I hope you can… Maybe you can extemporize, just elaborate on that. But here’s something else. Even if we don’t go extinct, how do we define ourselves? You talked about that, Greg. So we defined ourselves. We had this myth in the Bible. We had heaven, paradise. You could just pick the fruit off the trees; you had to do nothing. And then the serpent comes and messes things up, and then we’re condemned to toil for our labor.

(24:08)
So life is a struggle. It’s defined as a struggle, defined all the time as a struggle where you’re competing with the forces of nature or with other human beings or with animals, and you constantly better your position. This is how the human race has defined itself, and our self-definition is based on that, both as individuals, as nations, and as humanity as a whole. Now, that could be challenged, and it is challenged. So it’s both our self-definition throughout human history and evolution that is being challenged, and also the question of our continued existence. Can you please explain why you think we’re in danger of extinction?

Greg Brockman (24:45):

Well, I was going to say, I think this whole arc in my mind is all about paradigm shift, and even the question of what that heavenly, post-AGI, positive future would look like. I think even that is hard for us to imagine, what the true upside could be, because it’s not just material abundance. It’s not just that we all get nice cars or something, hopefully all Teslas, but also that everyone gets a great education. Everyone has that story of that one teacher they had who paid attention to them and inspired them on a specific subject. Imagine if you had that teacher for anything you wanted, constantly, and just how much better people we would be. How much better would we relate to others? That is just one example of the kinds of upsides that are possible. And the question of, well, what does it mean to be a human? Back to, how do you predict what’s going to come next? Actually, the thinker who I think had the best foresight about how the AI revolution was going to play out is Ray Kurzweil.

Elon Musk (25:38):

I agree.

Greg Brockman (25:39):

Yeah. And his book ‘The Singularity Is Near’ gets a lot of shit. I think that people kind of assume it’s going to be almost a religious text, but instead it’s a very dry, analytical text, and he just looks at the compute curves and he says, “This is the fundamental unlocker of intelligence.” Everyone thought that was crazy, and now it’s basically true. It’s basically common wisdom. And part of what he says is, “Look, what’s going to happen is, in the 2030s…

Max Tegmark (26:00):

… Thirties. First of all, you said it’s AGI 2029.

Elon Musk (26:02):

Yeah. I keep telling people it seems to be almost exactly right.

Max Tegmark (26:05):

It’s spooky. It’s spooky. 2030s is when the merge happens. So we’ve got Neuralink coming and maybe other systems like that. And what does it mean once you actually are merging with an intelligence?

Elon Musk (26:16):

Well, Neuralink necessarily moves slower than AI, because whenever you put a device in a human, you have to be incredibly careful. So it’s not clear to me that Neuralink will be ready before AGI; I think AGI is probably going to happen first. But we’ll have the sort of AGI singularity. Sometimes digital super intelligence is called a singularity, like a black hole, because just like with a black hole, it’s difficult to predict what happens after you pass the event horizon. And we are currently circling the event horizon of the black hole that is digital super intelligence. So the reason for Neuralink, although initially it would be very helpful to people who have brain or spine injuries, just doing basic stuff like enabling people who are tetraplegic, or like a Stephen Hawking, to communicate actually even faster than I’m communicating with a fully functional body.

(27:10)
And ultimately to improve the bandwidth between the cortex and your AI version of yourself. In fact, I think a lot of people perhaps don’t quite realize that they’re already a cyborg. So you’ve got your limbic system, your basic drives; your cortex, which is the thinking and planning; and then you have a tertiary layer, which is your computers, your devices, your phones, laptops, all the servers that exist, the applications. And in fact, I think probably a lot of people have found that if you leave your cell phone behind-

Benjamin Netanyahu (27:41):

Take it away, get panicky.

Elon Musk (27:43):

Yeah. If you forget your cell phone, it’s like missing limb syndrome. Where’d that thing go? Because a cell phone is an extension of yourself. The limitation is bandwidth. The rate at which you can input, or I should say output, information into your phone or computer is very slow. With a phone, it’s really just the speed of your thumb movements. And in the best-case scenario, you’re a speed typist on a keyboard, but even that data rate is very slow. We’re talking about tens, maybe hundreds of bits per second.

(28:16)
Whereas a computer can communicate in trillions of bits per second. And this is admittedly somewhat of a Hail Mary shot, or whatever, a long shot: if you can improve the bandwidth between your cortex and your digital tertiary self, then you can achieve better cohesion between what humans want and what AI does. At least that’s one theory. I’m not saying this is a sure thing, it’s just one potential iron in the fire. If ultimately hundreds of millions or billions of people get a high-bandwidth interface to their digital tertiary self, their AI self effectively, then that seems like it probably leads to a better future for humanity. So I’m getting somewhat esoteric, hopefully some of this is making sense. And like I said, it’s a long shot. But what I find interesting is that I’ve not found any human who wishes to delete either their cortex or their limbic system. I suspect nobody in this room wants to do that. It’s real [inaudible 00:29:16] sorry.
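A quick back-of-the-envelope check of the gap Elon describes, with illustrative numbers of our own (the panel gives only the rough orders of magnitude):

```python
# Rough comparison of human output bandwidth vs. a machine link.
words_per_minute = 80        # a fast typist
chars_per_word = 5
bits_per_char = 1.0          # ~1 bit of information per character after compression

human_bps = words_per_minute * chars_per_word * bits_per_char / 60
machine_bps = 100e9          # a single 100 Gb/s datacenter link

print(f"human typing: ~{human_bps:.0f} bits/s")   # ~7 bits/s: 'tens' at best
print(f"machine link: ~{machine_bps:.0e} bits/s")
print(f"gap: ~{machine_bps / human_bps:.0e}x")
```

Even with generous assumptions on the human side, the gap is around ten orders of magnitude, which is the motivation Elon gives for a higher-bandwidth neural interface.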

Max Tegmark (29:16):

Or their cell phone.

Elon Musk (29:17):

Well no, I’ve met people who don’t use a cell phone. It’s rare.

Max Tegmark (29:20):

Okay.

Benjamin Netanyahu (29:21):

Well, I confess, for years I avoided this. Not only did I not have a cell phone within my immediate radius, I didn’t have a television set. My phones were 35-year-old phones, because I was Prime Minister of Israel and I knew what this means. Okay, maybe your audience doesn’t know, but we knew. Now, having spent some time in the opposition, I lapsed into the use of the cell phone, and I find it very hard to dissociate from it, for the reasons that you said. It’s become an extension of us, a repository of memory, the ability to get information very fast, which, as you say correctly, is very slow, but fast compared to what it was before. Remember, you had research assistants, they had to go to libraries, things like that. Where’s that? That’s gone. But this will be seen as vastly primitive compared to what you’re talking about.

Elon Musk (30:14):

Yeah, I’ll just add something to that, then we have to go to Max. It’s sort of a funny thing: if you assume the best-case AI scenario, imagine you’re the AI and you just want the human to tell you what it wants, just please spit it out. But it’s speaking so slowly, like a tree, like trees communicate. If you watch a sped-up version of a tree growing, it’s actually communicating. It’s communicating with the soil. It’s trying to find the sunlight, it’s reacting to other trees and that kind of thing, very slowly. But from a tree’s standpoint, it’s not that slow. So what I’m saying is, we don’t want to be a tree. That’s the idea behind a high-bandwidth neural interface: so that even when the AI desperately wants to do good things for us, we can actually communicate several orders of magnitude faster than we currently can.

Max Tegmark (31:06):

I love that you mentioned the trees there, because if anyone is still struggling to understand why it’s so likely that we could get wiped out by AI: to a superintelligent AI, we are like trees. It is as much faster than us in its thinking as we are compared to trees. So if some trees in the rainforest are a little bit worried that some humans are going to come chop them down, and they’re like, “Oh, don’t worry, we’re so smart. We’ll stop those humans.” Yeah, good luck with that, right? I love how you went philosophical here and talked about what it means to be human, and I wanted to follow up on that-

Benjamin Netanyahu (31:38):

But before that, maybe just continue that. Okay, so how… You are assuming that the danger of human extinction is that you have super-intelligent machines that outpace humans by orders and orders of magnitude, which will not necessarily value continued human existence as we know it, and basically program us out. That’s basically what you’re saying.

Max Tegmark (32:02):

So if we were to be so dumb as to create entities that are much more powerful than us and don’t share our goals, yeah, that’s a really bad idea. I mean, in Israel I think you understand really well that if you have a bunch of other beings who are intelligent and don’t share your values, it can go really badly. So let’s not make that mistake by giving too much power to machines that don’t share our values. But of course, we can tackle that in various ways. So coming back to what we can concretely do, I just want to say one thing about the philosophical point about being human. Just for myself, one of the things that I really cherish the most about being human is the agency: that I can actually make choices and make a difference. And one of the most beautiful things about what you are all doing with technology is giving us more agency. Back in the Middle Ages, we had a life expectancy of freaking 30 years, knowing that we might starve to death next year because of the weather. It’s so much more inspiring now that we’re becoming more the captain of our own ship with technology, able to actually have more agency as a species. So I say, let’s use that agency to create a wonderful future, not throw it away. And there are three concrete action items I wanted to suggest we can do here; I’ll say a little more about each of them. One has to do with pausing risky stuff, one has to do with controlling the machines, and one has to do with inspiration for the future. For the controlling, we already talked quite a lot about the importance of staying in control of these machines, so I don’t need to dwell on it anymore. Saying things like, “Oh, what is going to happen to us? Bring out the popcorn,” that’s too passive for me. Let’s celebrate our agency and ask ourselves not what we think will happen, but what we want to happen.

(33:48)
And clearly, we want to control these machines; we’re building them. Second thing: I agree with Greg on many things and I greatly respect your work. One thing where we have a friendly disagreement is that I signed this pause letter saying I think we should pause the riskiest things, and you didn’t. I think it would be a good strategy for us to emulate what biotech does and just have clear safety standards. If you want to release a new drug, you can’t just sell it in the supermarket. You have to do a clinical trial and persuade some experts that this is safe. And I would like to see, for high-stakes, very powerful systems, some sort of system like this, with standards. And then it’s on the companies to prove that their systems are safe before they get released.

Benjamin Netanyahu (34:32):

Who does the standards? Who makes the standards?

Max Tegmark (34:34):

Well, and we’ve solved this. It’s been solved in Israel, it’s been solved in the US for biotech-

Benjamin Netanyahu (34:39):

Not for biotech. But here’s the question: do you want regulators or do you want legislatures? I mean, it’s not the only choice, but it’s a pretty clear dichotomy.

Max Tegmark (34:50):

I don’t think we need to reinvent this wheel. The people who decide whether a drug gets approved in Israel are not people in parliament, in the Knesset, right? It’s Israeli scientists, but the government is involved in approving them, exactly. We can emulate that, and Elon can tell us all about how we deal with cars. I don’t want to dwell on it. I just think this is not rocket science. We can put these standards in place, and then we can continue doing all sorts of wonderful stuff with AI tech. Some stuff we’ll wait a little longer for, but it’ll be worth the wait because it’ll be safe. The last thing I just wanted to say is, I think for us to really seize this agency and work together, not just between companies but also across borders, we really need a shared positive vision. And this is something I really appreciate so much about you, Elon. You have the guts, more than most, to talk about really audacious, positive visions for the future.

(35:43)
What is it that makes people collaborate and put little squabbles aside? It’s always a shared positive vision, where people realize that if we collaborate, we’ll all be better off. Technology is not a zero-sum game. You can take countries like Sweden and Denmark, which had been killing each other for centuries with flat GDP, and then technology came along and they both got dramatically better off. It’s so obvious that the technology you’re building can make all humans just spectacularly better off with AI, and not just on this planet. I liked the list you had there before we started, about how it can revolutionize healthcare and so many other things. And I would add to that: why limit ourselves to this planet when there’s this beautiful universe where life can flourish for billions of years? The more we can appreciate how huge the upside is if we do this right, the more incentive we will have to be a little bit patient where we need to be, make everything safe, and therefore get this wonderful future.

Benjamin Netanyahu (36:48):

Well, maybe we’re sort of lagging behind in our social and political development. This is like having nuclear technology in the Stone Age. What do you do with it? Which I think is more or less the thing that I talked about in terms of the pace of development: pacing, basically, what solutions we need to put in place to maximize the benefits and limit the risks. I think we’re there. I think we should have a meeting of minds. You were in the Senate the other day; I presume you were talking about this. I mean, you don’t have to divulge what you can’t divulge, but divulge what you can.

Elon Musk (37:22):

Well, there were a lot of people in the meeting, so I don’t think it was secret. It was just a discussion on AI regulation, and actually Sam Altman from OpenAI was there, and a number of leaders were there. And I think generally it would be a good idea to have some kind of AI regulatory agency. You start off with a team that gathers insight to get maximum understanding, then you have some proposed rulemaking, and then eventually you have regulations that are put in place. And this is something we have for everything that is a potential danger to the public. So for food and drugs, you have the Food and Drug Administration; we’ve got aircraft and rockets with the FAA. For anything that is a danger to the public, over time we have learned, often the hard way, after many people have died, to have a regulatory agency to protect public safety. I’m not someone who thinks that regulation is some panacea that’s only good. Of course, there are some downsides to regulation, in that things move a bit slower and sometimes you get regulatory capture and that kind of thing. But on balance, I think the public would not want to get rid of most regulatory agencies. You can think of the regulatory agency as being like a referee. What sports game doesn’t have a referee? You need someone to make sure that people are playing fairly, not breaking the rules. That’s why basically every sport has a referee of one kind or another. So that’s the rationale for AI regulation. And I’ve been pushing this all around the world. When I was in China a few months ago, meeting with some of the senior leadership, my primary topic was AI safety and regulation. And after

(39:00)
we had a long discussion, they agreed that there is merit to AI regulation and immediately took action in this regard. So sometimes we will get this comment of, “Well, if the West does AI regulation, what if China doesn’t and then leaps ahead?” I think they are also taking it very seriously, because they don’t want the opposite of whatever moral constraints you program into the system. Greg, I don’t know, what are your thoughts on this?

Greg Brockman (39:27):

Yeah. I think, again, this technology is almost backwards from how you’d expect it to work. A lot of people talk about the paperclip problem: if we literally program into a machine, make as many paperclips as possible, and it gets really smart, it’s like, “Well, there are all these houses and factories and other things that are not producing paperclips. They’re all going to be just paperclip production now. All these humans that are in the way, paperclips would be a better arrangement of their atoms.” That’s just not the technology path we actually seem to be on. You look at GPT-3, and it is a system that gets the nuance, it gets the common sense. Now it has other problems. It makes mistakes that no human ever would. It loses its chain of thought, and says all these weird things, and sometimes makes things up.

(40:07)
We have real problems we have to encounter. But for me, the hard thing that I think gets lost in some of the conversations is the nuance of really being coupled to the technology we actually see, and particularly, how does it depart from our intuitions about where we thought it would go? I actually think it’s even interesting, Max mentioned the pause letter. One of the key parts of it was to not train GPT-5 for the next six months. We looked at that and we were like, well, we’re not training GPT-5 for the next six months anyway. So again, you can miss the nuances of what’s going on at a mechanical level, and some of this does require getting really in the weeds, and coming and talking to us, and us opening up and showing our in-progress work.

(40:47)
I think that’s something that will be very important as you think about moving to these very powerful systems. I also think there’s another blessing and curse of how the technology is playing out, where we talk about digital super intelligence and we also talk about ChatGPT. These are very different things, but somehow there’s going to be a smooth continuum between them. So I think that the blessing is that we see a little bit into the future. We have a little taste of a system that, I talked about a personalized tutor for everyone. You have it. Now, it’s not a perfect tutor. It’s got a lot of problems, but you can go use ChatGPT for free to ask about anything. It’ll give you something sometimes actually pretty insightful.

Benjamin Netanyahu (41:22):

Well-

Greg Brockman (41:23):

Fair enough. I’ll withdraw the…

Benjamin Netanyahu (41:25):

7, 7 1/2, but you’ll get there. You’ll get to a 10.

Greg Brockman (41:29):

We will, definitely. That’s right. I think it’s great for people to see it while we can still laugh about it. When it starts coming up with new ideas, when you type in some problem you’re having at home, trying to figure out how to do something, and it just comes back with, “Oh, you should say this to this person, and you should do that. Here’s how you should rearrange your schedule,” just great ideas, that’s going to feel so different. Then one day we’ll actually have a system that can just autonomously go off and do a bunch of research and say, “Here’s this crazy chemical formula, please mix these together in these ways.” You want to make sure that thing is trustworthy before you start mixing those chemicals. So there’s this question of how we separate out the diffusion of today’s AI technology and applications, and really all the debates that we’re having; let those happen.

(42:12)
Open-source technology that everyone’s building, we should not squelch that. We should actually promote it. But also, looking ahead, being ahead of the game here and saying: there is going to be a point, and we’re hitting that wall, where we’re going to be building the most massive machines that humans have ever built, like Elon talks about, these massive rows of data centers with all these computers. It’s one of those things where you have to see it to really feel it. I’ve walked through some of these data center halls and you just get a sense of the scale of what’s being built. It’s very different when it’s abstract, when you’re just looking at, “Oh, it’s this many GPUs,” and looking at numbers. You just realize the mechanical force that is behind what you’re building for cognitive intelligence. I think that kind of thing we need international answers for.

Benjamin Netanyahu (42:55):

I fully agree with you. I think, first of all, with all due respect to the pause button, we ain’t going to get it. It’s not going to happen. You can’t put the genie back into the bottle, and I’m not sure you can slow it down. The only thing you can do is accelerate our own efforts. I think you’re right, it has to be international. I think you start with like-minded and like-valued states. Start with that. Frankly, obviously you know what AI stands for: it stands for America and Israel, obviously. I’m just plugging my own country, but it’s not an empty plug, because I think that there’s a concentration of talent in Israel that makes it more than a mouse compared to the American elephant. It’s more than that. It’s a very nimble mammal, very fast, compact.

(43:38)
It’s also got this feeder of military AI, which moves very fast and takes a very large part of the talented population there. It screens it and trains it very quickly. So I think you start with the like-minded states. We definitely should have a very robust conversation: first of all, decide among us what we share; there are things that we wouldn’t want to share, obviously, related to our security. There are ethical standards and other regulations that we have to agree on quickly, between industry and governments. Then I think we have to conduct a robust discussion with the other powers of the world, based on their self-interest, as you began to do. I think that’s pioneering work, and I think we have a shot, maybe, at getting to some degree of control over our future, which could be amazing. I’m not sure I want to live in heaven. It could be boring. I don’t mean The Boring Company, I mean it could be boring. But given where we’re headed, I’d much rather have a boring heaven than an interesting hell.

Max Tegmark (44:44):

Here at Tesla they have a different name for the pause button; they call it the brake. When we launched this pause letter, I personally didn’t think that it was immediately going to cause a pause. For me, the real purpose of it was to mainstream the conversation about these issues, which really did happen. I really appreciate you and Sam and your colleagues for saying very openly that there may come a time when you will want to pause certain things if certain criteria are met. Now, if Elon says that there may come a time when he’s going to slow down his Tesla, I would trust that more if I knew that the Tesla actually had brakes in it, so that he had a mechanism by which he could actually slow it down. So this is something I’m very curious to hear from you about. What criteria would you say that you all have? Can you say already now what some things are that would make you want to pause? And then, without an actual mechanism, won’t it be difficult to resist commercial pressures to continue, even if you and Sam want to pause?

Greg Brockman (45:47):

Well, I actually agree with both of you that I think that my picture of how the future is going to play out in terms of AI development is not as simple as, oh, you’re scaling models and then you see something a little bit weird and then you stop all scaling, and you all go home or something, right? It’s definitely not going to be that. I think that we need to be making maximum velocity efforts towards a good future. I think that is something that you don’t want to pause, you actually want to put the foot on the pedal. Now, of course, the question is, what actions actually get you towards there? Which ones are against it? I think that nuance is something that I think is what we should be debating.

(46:22)
But that’s why I actually bounced a little bit off of the framing of the pause. Even to the point of GPT-4, doing all of this prediction work ahead of time: if that prediction work was a little bit wobbly and we weren’t quite there, and we actually do this all the time, if there was some experiment that came back a little bit strange and we’re not really sure we understand it, we actually spend a lot of time to fix that. And one of the reasons we do it is safety-motivated. But even if you forget about the safety reasons, you’re just not going to build a good system otherwise. So I think that there are a lot of places in AI where there’s actually great alignment between capability and safety, between building something useful and something that is safe. It’s not perfect, but it is actually surprising where the divergences are. Actually, the single biggest divergence we’ve seen has not been due to too much pressure to deploy quickly, ’cause I actually think that getting things out, you can see the market, it hates hallucinations, it hates it when the model makes something up. All the jokes we made about, “Oh, it doesn’t give you good insights,” the market hates that stuff. Actually, I think it’s very aligned with safety to fix those. But where the market actually pushes us is when safety comes into conflict with other values, for example, privacy. The way we launched the initial GPT APIs was that we said, “Hey, we’re going to monitor everything. We’re going to record everything. If something sketchy comes up, we’ll go and look. We’ll be able to look back in the log, we’ll be able to see what you did.” People hate that, right? You want that for everyone else, but for yourself, you want privacy.

(47:40)
So how do you balance those two factors? We actually see this very much when trying to win deals: to be able to give companies the assurance that, “Hey, we’re not going to retain any of your data. We’ll delete it immediately.” That removes our ability to do any investigation after the fact. And so this is actually a place where I think government can be very helpful, where norms can be very helpful, to say, “Hey, even though this is what would be commercially most viable, this is what the rule’s going to be, and everyone has to play by it,” so you don’t have a race to the bottom. So again, it’s all going to be nuanced. I’ve just accepted that this field is totally counterintuitive. If you had told yourself 10 years ago everything we’ve learned, you would’ve said, “There’s no way. That sounds totally crazy.” And somehow we have to navigate through that.

Benjamin Netanyahu (48:25):

Well, the only thing that I can add to this conversation is to invite you all to Israel. I’d like to, in fact, continue this conversation. In fact, bring a few more of your colleagues and a few leaders to discuss this international strategy, for like-minded states to begin with. First like-minded states, then the world, okay? How about that as a plan of action? But I think that the work that you are doing is amazing, nothing short of it. Elon was saying, what is a human being? It’s this two-legged, ape-like creature with a cortex and a computer, and it has wants, desires, and a brain, but the brain is developing in ways that we couldn’t possibly imagine, and it just doesn’t stop.

Elon Musk (49:09):

Well, from my earlier comments, the thing that perhaps should provide some hope, not for digital superintelligence being controllable in a very, very specific way, but for it acting in a way that is beneficial to humans, is that our cortex is actually in service to the limbic system. The cortex, the thinking part, is constantly trying to make the instinctual part happy, even though it is much smarter than the instinctual part. So I think there’s hope there that digital super intelligence, even though it is much smarter than our cortex, will nonetheless want to make the cortex happy and, ultimately, the limbic system happy.

(49:44)
Not to get too off-color here, but the sheer amount of thinking that people have done to have sex is a lot, and not sex for procreation, just to make the limbic system happy. So given how much thought has gone into getting laid essentially for the sole purpose of getting laid, not procreation, and all that does is give the limbic system a spark of joy, I think that actually suggests that there is a good future where, even if we don’t understand the digital super intelligence, just as the limbic system does not understand the cortex, it might still actually, at the end of the day, be trying to make our limbic system and cortex happy.

Benjamin Netanyahu (50:24):

Is that a happy note to end on?

Greg Brockman (50:26):

I think it’s a great note to end on. Thank you. Thank you, everyone.

Benjamin Netanyahu (50:28):

Thank you.

Elon Musk (50:28):

Thank you.

Max Tegmark (50:30):

Actually, that was really fun.
