The AI Revolution: Google's Developers on the Future of Artificial Intelligence 60 Minutes Transcript

Competitive pressure among tech giants is propelling society into the future of artificial intelligence, ready or not. Scott Pelley dives into the world of AI with Google CEO Sundar Pichai. Read the transcript here.

Narrator (00:01):

We may look on our time as the moment civilization was transformed as it was by fire, agriculture and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer. Which is to say, with creativity, truth, error and lies. The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence, machines that can teach themselves superhuman skills. We explored what’s coming next at Google, a leader in this new world. CEO Sundar Pichai told us AI will be as good or as evil as human nature allows. The revolution, he says, is coming faster than you know.

Speaker 2 (00:54):

The story will continue in a moment.

Scott Pelley (00:59):

Do you think society is prepared for what’s coming?

Sundar Pichai (01:05):

There are two ways I think about it. On one hand I feel, no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology’s evolving, there seems to be a mismatch. On the other hand, compared to any other technology, I’ve seen more people worried about it earlier in its life cycle. So I feel optimistic. The number of people who have started worrying about the implications, and hence the conversations are starting in a serious way as well.

Narrator (01:37):

Our conversations with 50-year-old Sundar Pichai started at Google’s new campus in Mountain View, California. It runs on 40% solar power and collects more water than it uses, high-tech that Pichai couldn’t have imagined growing up in India with no telephone at home.

Sundar Pichai (01:58):

We were on a waiting list to get a rotary phone for about five years. It finally came home. I can still recall it vividly. It changed our lives. To me it was the first moment I understood the power of what getting access to technology meant, so it’s probably led me to be doing what I’m doing today.

Narrator (02:19):

What he’s doing, since 2019, is leading both Google and its parent company, Alphabet, valued at $1.3 trillion. Worldwide, Google runs 90% of internet searches and 70% of smartphones. But its dominance was attacked this past February when Microsoft linked its search engine to a chatbot. In a race for AI dominance, Google just released its chatbot named Bard.

Sissie Hsiao (02:53):

It’s really here to help you brainstorm ideas, to generate content, like a speech, or a blog post, or an email.

Narrator (03:02):

We were introduced to Bard by Google Vice President Sissie Hsiao and Senior Vice President James Manyika.

Sissie Hsiao (03:09):

Here’s Bard and-

Narrator (03:10):

The first thing we learned was that Bard does not look for answers on the internet like Google search does.

Sissie Hsiao (03:19):

So I wanted to get inspiration from some of the best speeches in the world-

Narrator (03:23):

Bard’s replies come from a self-contained program that was mostly self-taught. Our experience was unsettling.

Scott Pelley (03:32):

Confounding, absolutely confounding.

Narrator (03:35):

Bard appeared to possess the sum of human knowledge, with microchips more than 100,000 times faster than the human brain.

Scott Pelley (03:45):

Summarize the-

Narrator (03:46):

We asked Bard to summarize the New Testament. It did, in five seconds and 17 words.

Scott Pelley (03:52):

In Latin.

Narrator (03:54):

We asked for it in Latin; that took another four seconds. Then, we played with a famous six-word short story, often attributed to Hemingway.

Scott Pelley (04:05):

For sale. Baby shoes. Never worn.

Sissie Hsiao (04:08):

Wow.

Narrator (04:09):

The only prompt we gave was “finish this story.” In five seconds…

Scott Pelley (04:16):

Holy cow! “The shoes were a gift from my wife, but we never had a baby. They were-”

Narrator (04:24):

From the six-word prompt, Bard created a deeply human tale with characters it invented, including a man whose wife could not conceive and a stranger, grieving after a miscarriage, and longing for closure.

Scott Pelley (04:43):

I am rarely speechless. I don’t know what to make of this. Give me that story-

Narrator (04:52):

We asked for the story in verse. In five seconds, there was a poem written by a machine with breathtaking insight into the mystery of faith, Bard wrote, “She knew her baby’s soul would always be alive.” The humanity, at superhuman speed, was a shock.

Scott Pelley (05:15):

How is this possible?

Narrator (05:16):

James Manyika told us that over several months, Bard read most everything on the internet and created a model of what language looks like. Rather than search, its answers come from this language model.

James Manyika (05:32):

So, for example, if I said to you, Scott, peanut butter and?

Scott Pelley (05:37):

Jelly.

James Manyika (05:37):

Right. So, it tries and learns to predict, okay, so peanut butter usually is followed by jelly. It tries to predict the most probable next words, based on everything it’s learned. So, it’s not going out to find stuff, it’s just predicting the next word.
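
Manyika is describing next-word prediction, the training objective behind chatbots like Bard. For readers who want to see the mechanics, here is a minimal sketch of the idea as a toy bigram counter in Python; it illustrates the principle only, nothing like the scale or architecture of Bard’s actual model.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "most everything on the internet".
corpus = ("peanut butter and jelly . peanut butter and jelly . "
          "peanut butter and honey").split()

# Count how often each word follows each other word (a bigram model).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    # "It tries to predict the most probable next words, based on
    # everything it's learned."
    counts = next_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("and"))  # -> 'jelly' (seen twice, vs. 'honey' once)
```

A real language model replaces this lookup table with a neural network trained on vastly more text, but the objective is the same: it is not going out to find stuff, just predicting the next word.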

Narrator (05:54):

But it doesn’t feel like that. We asked Bard why it helps people and it replied, “Because it makes me happy.”

Scott Pelley (06:04):

Bard, to my eye, appears to be thinking. Appears to be making judgments. That’s not what’s happening? These machines are not sentient. They are not aware of themselves.

James Manyika (06:20):

They’re not sentient. They’re not aware of themselves. They can exhibit behaviors that look like that. Because keep in mind, they’ve learned from us. We’re sentient beings. We have feelings, emotions, ideas, thoughts, perspectives. We’ve reflected all that in books, in novels, in fiction. So, when they learn from that, they build patterns from that. So, it’s no surprise to me that the exhibited behavior sometimes looks like maybe there’s somebody behind it. There’s nobody there. These are not sentient beings.

Narrator (06:55):

Zimbabwe-born, Oxford-educated James Manyika holds a new position at Google; his job is to think about how AI and humanity will best co-exist.

James Manyika (07:08):

AI has the potential to change many ways in which we’ve thought about society, about what we’re able to do, the problems we can solve.

Narrator (07:18):

But AI itself will pose its own problems. Could Hemingway write a better short story? Maybe. But Bard can write a million before Hemingway could finish one. Imagine that level of automation across the economy.

Scott Pelley (07:36):

A lot of people can be replaced by this technology.

James Manyika (07:38):

Yes, there are some job occupations that’ll start to decline over time. There are also new job categories that’ll grow over time. But the biggest change will be the jobs that’ll be changed. Something like more than two-thirds will have their definitions change. Not go away, but change. Because they’re now being assisted by AI and by automation. So this is a profound change which has implications for skills. How do we assist people to build new skills? Learn to work alongside machines. And how do these complement what people do today?

Sundar Pichai (08:13):

This is going to impact every product across every company and so that’s why I think it’s a very, very profound technology. And so, we are just in early days.

Scott Pelley (08:23):

Every product in every company.

Sundar Pichai (08:25):

That’s right. AI will impact everything. For example, you could be a radiologist. If you think about five to 10 years from now, you’re going to have an AI collaborator with you. It may triage. You come in the morning. Let’s say you have 100 things to go through. It may say, “These are the most serious cases you need to look at first.” Or when you’re looking at something, it may pop up and say, “You may have missed something important.” Why wouldn’t we take advantage of a super-powered assistant to help you across everything you do? You may be a student trying to learn math or history, and you will have something helping you.

Narrator (09:08):

We asked Pichai what jobs would be disrupted. He said, knowledge workers: people like writers, accountants, architects and, ironically, software engineers. AI writes computer code too.

(09:22)
Today Sundar Pichai walks a narrow line. A few employees have quit, some believing that Google’s AI rollout is too slow, others, too fast. There are some serious flaws.

James Manyika (09:36):

Does the return of inflation-

Narrator (09:38):

James Manyika asked Bard about inflation. It wrote an instant essay in economics and recommended five books. But days later, we checked. None of the books is real. Bard fabricated the titles. This very human trait, error with confidence, is called, in the industry, hallucination.

Scott Pelley (10:03):

Are you getting a lot of hallucinations?

Sundar Pichai (10:06):

Yes, which is expected. No one in the field has yet solved the hallucination problems. All models do have this as an issue.

Scott Pelley (10:18):

Is it a solvable problem?

Sundar Pichai (10:20):

It’s a matter of intense debate. I think we’ll make progress.

Narrator (10:24):

To help cure hallucinations, Bard features a “Google It” button that leads to old-fashioned search. Google has also built safety filters into Bard to screen for things like hate speech and bias.

Scott Pelley (10:40):

How great a risk is the spread of disinformation?

Sundar Pichai (10:44):

AI will challenge that in a deeper way. The scale of this problem will be much bigger.

Narrator (10:50):

Bigger problems, he says, with fake news and fake images.

Sundar Pichai (10:55):

It will be possible with AI to create a video easily. Where it could be Scott saying something, or me saying something, and we never said that. And it could look accurate. But on a societal scale, it can cause a lot of harm.

Scott Pelley (11:11):

Is Bard safe for society?

Sundar Pichai (11:14):

The way we have launched it today, as an experiment in a limited way, I think so. But we all have to be responsible in each step along the way.

Narrator (11:25):

Pichai told us he’s being responsible by holding back, for more testing, advanced versions of Bard that, he says, can reason, plan, and connect to internet search.

Scott Pelley (11:39):

You are letting this out slowly so that society can get used to it?

Sundar Pichai (11:45):

That’s one part of it. One part is also so that we get the user feedback. And we can develop more robust safety layers before we deploy more capable models.

Narrator (11:59):

Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren’t expected to have. How this happens is not well understood. For example, one Google AI program adapted, on its own, after it was prompted in the language of Bangladesh, which it was not trained to know.

James Manyika (12:30):

We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali. So now, all of a sudden, we now have a research effort where we’re now trying to get to a thousand languages.
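
What Manyika describes is what researchers call few-shot prompting: a handful of worked examples placed in the prompt is enough to steer a large model toward a task it was never explicitly trained on. A hypothetical illustration of the format (these example pairs are placeholders, not the actual prompts Google used):

```python
# A hypothetical few-shot prompt. The model is only asked to continue
# the text; the worked examples implicitly define the translation task.
few_shot_prompt = """Translate English to Bengali.

English: Thank you
Bengali: ধন্যবাদ

English: Good morning
Bengali: সুপ্রভাত

English: Where is the train station?
Bengali:"""
```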

Sundar Pichai (12:44):

There is an aspect of this which we call… all of us in the field call it as a black box. You don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time, but that’s where the state of the art is.

Scott Pelley (13:02):

You don’t fully understand how it works. And yet, you’ve turned it loose on society?

Sundar Pichai (13:08):

Let me put it this way. I don’t think we fully understand how a human mind works either.

Narrator (13:14):

Was it from that black box, we wondered, that Bard drew its short story that seemed so disarmingly human?

Scott Pelley (13:23):

It talked about the pain that humans feel. It talked about redemption. How did it do all of those things if it’s just trying to figure out what the next right word is?

Sundar Pichai (13:35):

I have had these experiences talking with Bard as well. There are two views of this. There are a set of people who view this as, look, these are just algorithms. They’re just repeating what it’s seen online. Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan, and so on. And personally, I think we need to approach this with humility. Part of the reason I think it’s good that some of these technologies are getting out is so that society, people like you and others, can process what’s happening. And we begin this conversation and debate. And I think it’s important to do that.

Narrator (14:25):

When we come back, we’ll take you inside Google’s artificial intelligence labs, where robots are learning.

(14:41)
The revolution in artificial intelligence is the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the optimistic middle, introducing AI in steps so civilization can get used to it. We saw what’s coming next in machine learning at Google’s AI lab in London, a company called DeepMind, where the future looks something like this.

Speaker 2 (15:14):

The story will continue in a moment.

Scott Pelley (15:20):

Look at that! Oh, my goodness.

Raia Hadsell (15:23):

They got a pretty good kick on them. Can still [inaudible 00:15:25]-

Scott Pelley (15:25):

Ah! Goal!

Raia Hadsell (15:26):

… good game.

Narrator (15:27):

A soccer match at DeepMind looks like fun and games. But here’s the thing: humans did not program these robots to play; they learned the game by themselves.

Raia Hadsell (15:40):

It’s coming up with these interesting different strategies, different ways to walk, different ways to block.

Scott Pelley (15:45):

And they’re doing it, they’re scoring over and over again.

Raia Hadsell (15:48):

It’s all about here.

Narrator (15:49):

Raia Hadsell, vice president of Research and Robotics, showed us how engineers used motion capture technology to teach the AI program how to move like a human. But on the soccer pitch, the robots were told only that the object was to score. The self-learning program spent about two weeks testing different moves. It discarded those that didn’t work, built on those that did, and created all-stars.
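
The loop the narrator describes, testing moves, discarding failures, and building on successes, is the heart of this style of self-learning. A toy sketch of the idea, a simple hill climber on a made-up reward rather than DeepMind’s actual training setup:

```python
import random

def reward(policy):
    # Stand-in for 'did this behavior help score a goal?': here, just
    # how close a list of numbers gets to a hidden target behavior.
    target = [0.7, -0.2, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(policy, target))

policy = [0.0, 0.0, 0.0]           # start with neutral, clumsy behavior
best = reward(policy)

for step in range(10_000):         # weeks of trial and error, compressed
    candidate = [p + random.gauss(0, 0.1) for p in policy]
    r = reward(candidate)
    if r > best:                   # build on the moves that work...
        policy, best = candidate, r
    # ...and discard the ones that don't

print(policy)  # ends up close to the hidden target behavior
```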

Scott Pelley (16:21):

There’s another goal.

Narrator (16:22):

And with practice, they get better. Hadsell told us that, independent from the robots, the AI program plays thousands of games from which it learns and invents its own tactics.

Raia Hadsell (16:37):

Here we think that red player’s going to grab it. But instead, it just stops it, hands it back, passes it back, and then goes for the goal.

Scott Pelley (16:45):

And the AI figured out how to do that on its own.

Raia Hadsell (16:47):

That’s right. That’s right. And it takes a while. At first all the players just run after the ball together like a gaggle of 6-year-olds the first time they’re playing ball. Over time what we start to see is now, “Ah, what’s the strategy? You go after the ball. I’m coming around this way. Or we should pass. Or I should block while you get to the goal.” So, we see all of that coordination emerging in the play.

Scott Pelley (17:17):

This is a lot of fun, but what are the practical implications of what we’re seeing here?

Raia Hadsell (17:23):

This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments. Think about mining, think about dangerous construction work or exploration or disaster recovery.

Narrator (17:40):

Raia Hadsell is among 1,000 humans at DeepMind. The company was co-founded just 12 years ago by CEO Demis Hassabis.

Demis Hassabis (17:51):

If I think back to 2010 when we started, nobody was doing AI. There was nothing going on in industry. People used to eye-roll when we talked to them, investors, about doing AI. So, we could barely get two cents together to start off with, which is crazy if you think about the billions now being invested into AI startups.

Narrator (18:09):

Cambridge, Harvard, MIT: Hassabis has degrees in computer science and neuroscience. His PhD is in human imagination. And imagine this: when he was 12, in his age group, he was the number two chess champion in the world. It was through games that he came to AI.

Demis Hassabis (18:35):

I’ve been working on AI for decades now, and I’ve always believed that it’s going to be the most important invention that humanity will ever make.

Scott Pelley (18:43):

Will the pace of change outstrip our ability to adapt?

Demis Hassabis (18:48):

I don’t think so. I think that we’re sort of an infinitely adaptable species. You look at today, us using all of our smartphones and other devices, and we effortlessly adapt to these new technologies. And this is going to be another one of those changes like that.

Narrator (19:04):

Among the biggest changes at DeepMind was the discovery that self-learning machines can be creative. Hassabis showed us a game playing program that learns. It’s called AlphaZero and it dreamed up a winning chess strategy no human had ever seen.

Scott Pelley (19:24):

But this is just a machine. How does it achieve creativity?

Demis Hassabis (19:28):

It plays against itself tens of millions of times. So, it can explore parts of chess that maybe human chess players and programmers who program chess computers haven’t thought about before.

Scott Pelley (19:40):

It never gets tired. It never gets hungry. It just plays chess all the time.

Demis Hassabis (19:45):

Yes. It’s kind of an amazing thing to see, because actually you set off AlphaZero in the morning and it starts off playing randomly. By lunchtime it’s able to beat me and beat most chess players. And then by the evening, it’s stronger than the world champion.
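
What Hassabis is describing is self-play: both sides of every game use, and improve, the same policy. As an illustration of the principle only, here is a tiny self-play learner for Nim (take one to three stones; taking the last stone wins), with none of AlphaZero’s neural networks or tree search:

```python
import random
from collections import defaultdict

values = defaultdict(float)   # learned value of each (stones_left, move)
EPSILON = 0.1                 # sometimes explore a random move

def choose_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: values[(stones, m)])

def self_play_game(start=15):
    history = [[], []]        # moves made by player 0 and player 1
    stones, player = start, 0
    while True:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player   # taking the last stone wins
        player = 1 - player

for _ in range(50_000):                        # game after game against itself
    history, winner = self_play_game()
    for stones, move in history[winner]:       # reinforce the winner's moves
        values[(stones, move)] += 0.1 * (1.0 - values[(stones, move)])
    for stones, move in history[1 - winner]:   # fade out the loser's moves
        values[(stones, move)] -= 0.1 * values[(stones, move)]

# With enough games it tends to rediscover the known winning strategy
# for Nim: always leave your opponent a multiple of four stones.
print(max((1, 2, 3), key=lambda m: values[(15, m)]))  # usually -> 3
```

Starting randomly and getting strong purely through play mirrors, in miniature, the morning-to-evening progression Hassabis describes.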

Narrator (19:58):

Demis Hassabis sold DeepMind to Google in 2014. One reason was to get his hands on this. Google has the enormous computing power that AI needs. This computing center is in Pryor, Oklahoma. But Google has 23 of these, putting it near the top in computing power in the world. This is one of two advances that make AI ascendant now. First, the sum of all human knowledge is online and, second, brute force computing that very loosely approximates the neural networks and talents of the brain.

Demis Hassabis (20:40):

Things like memory, imagination, planning, reinforcement learning, these are all things that are known about how the brain does it, and we wanted to replicate some of that in our AI systems.

Narrator (20:53):

Those are some of the elements that led to DeepMind’s greatest achievement so far, solving an impossible problem in biology.

(21:02)
Proteins are building blocks of life, but only a tiny fraction were understood because 3D mapping of just one could take years. DeepMind created an AI program for the protein problem and set it loose.

Demis Hassabis (21:19):

Well, it took us about four, five years to figure out how to build the system, and it was probably the most complex project we’ve ever undertaken. But once we did that, it can solve a protein structure in a matter of seconds. And actually over the last year we did all the 200 million proteins that are known to science.

Scott Pelley (21:35):

How long would it have taken using traditional methods?

Demis Hassabis (21:38):

Well, the rule of thumb I was always told by my biologist friends is that it takes a whole PhD five years to do one protein structure experimentally. So if you think 200 million times five, that’s a billion years of PhD time, it would’ve taken.

Narrator (21:52):

DeepMind made its protein database public, a gift to humanity, Hassabis called it.

Scott Pelley (21:59):

How has it been used?

Demis Hassabis (22:01):

It’s been used in an enormously broad number of ways, actually, from malaria vaccines to developing new enzymes that can eat plastic waste to new antibiotics.

Narrator (22:13):

Most AI systems today do one or maybe two things well. The soccer robots, for example, can’t write up a grocery list or book your travel or drive your car. The ultimate goal is what’s called artificial general intelligence, a learning machine that can score on a wide range of talents.

Scott Pelley (22:37):

Would such a machine be conscious of itself?

Demis Hassabis (22:42):

That’s another great question. Philosophers haven’t really settled on a definition of consciousness yet, but if we mean by sort of self-awareness and these kinds of things, I think there’s a possibility AI one day could be. I definitely don’t think they are today. But I think, again, this is one of the fascinating scientific things we’re going to find out on this journey towards AI.

Narrator (23:05):

Even unconscious, current AI is superhuman in narrow ways. Back in California, we saw Google engineers teaching skills that robots will practice continuously on their own.

Speaker 9 (23:18):

Push the blue cube to the blue triangle.

Narrator (23:21):

They comprehend instructions.

Speaker 9 (23:22):

Push the yellow hexagon into the yellow heart.

Narrator (23:25):

And learn to recognize objects.

Robot 106 (23:28):

What would you like?

Scott Pelley (23:29):

How about an apple?

Speaker 11 (23:31):

How about an apple?

Robot 106 (23:33):

On my way, I will bring an apple to you.

Narrator (23:37):

Vincent Vanhoucke, senior director of Robotics, showed us how Robot 106 was trained on millions of images-

Robot 106 (23:45):

I’m going to pick up the apple.

Narrator (23:47):

… and can recognize all the items on a crowded countertop.

Vincent Vanhoucke (23:52):

If we can give the robot a diversity of experiences, a lot more different objects in different settings, the robot gets better at every one of them.

Narrator (24:02):

Now that humans have plucked the forbidden fruit of artificial knowledge-

Scott Pelley (24:07):

Thank you.

Narrator (24:09):

… we start the genesis of a new humanity.

Scott Pelley (24:12):

AI can utilize all the information in the world. What no human could ever hold in their head. And I wonder if humanity is diminished by this enormous capability that we’re developing.

James Manyika (24:31):

I think the possibilities of AI do not diminish humanity in any way. And in fact, in some ways, I think they actually raise us to even deeper, more profound questions.

Narrator (24:44):

Google’s James Manyika sees this moment as an inflection point.

James Manyika (24:49):

I think we’re constantly adding these superpowers or capabilities to what humans can do in a way that expands possibilities, as opposed to narrowing them, I think. So I don’t think of it as diminishing humans, but it does raise some really profound questions for us. Who are we? What do we value? What are we good at? How do we relate with each other? Those become very, very important questions that are constantly going to be, in one sense, exciting, but perhaps unsettling too.

Narrator (25:21):

It is an unsettling moment. Critics argue the rush to AI comes too fast, while competitive pressure, among giants like Google and start-ups you’ve never heard of, is propelling humanity into the future ready or not.

Sundar Pichai (25:38):

But I think if you take a 10-year outlook, it is so clear to me, we will have some form of very capable intelligence that can do amazing things. And we need to adapt as a society for it.

Narrator (25:55):

Google CEO Sundar Pichai told us society must quickly adapt with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe for the world.

Sundar Pichai (26:12):

These are deep questions, and we call this alignment. One way we think about it: How do you develop AI systems that are aligned to human values, including morality? This is why I think the development of these needs to include not just engineers, but social scientists, ethicists, philosophers, and so on. And I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.

Narrator (26:48):

We’ll end with a note that has never appeared on 60 Minutes but one, in the AI revolution, you may be hearing often. The preceding was created with 100% human content.
