Oct 31, 2023

Biden Signs Order Establishing Standards to Manage Artificial Intelligence Risks Transcript


The executive order establishes new standards and rules for the use of artificial intelligence. It’s a wide-ranging set of rules and recommendations to address concerns about national security, privacy, equity, and the labor market. Read the transcript here.

Speaker 1 (00:00):

President Joe Biden today signed the government’s first executive order to establish new standards and rules for the use of artificial intelligence. It’s a wide-ranging set of rules and recommendations to address concerns about national security, privacy, equity, and the labor market. Some of the key components include creating new safety and security standards for AI, requiring testing and assurances that AI cannot be used to produce biological or nuclear weapons, protecting consumer privacy by developing guidelines for federal agencies, and advancing equity and civil rights to prevent algorithmic discrimination.

(00:36)
To take a closer look at these changes, we’re joined now by Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution. Thanks so much for being with us.

Nicol Turner Lee (00:45):

Oh, thanks for having me.

Speaker 1 (00:46):

Now, this executive order, as you well know, requires developers to put their AI models through testing and then submit those test results to the federal government. How much of a difference will that make, given that this only applies to future technology, and what’s already on the market is already extraordinarily powerful?

Nicol Turner Lee (01:06):

The technologies that have already been deployed, I think, are the ones that we’re really concerned about. And so future-proofing emerging technology is going to be important, but the rigorous testing and the impact assessment and the red teaming that the White House potentially wants us to undergo may be something that puts us behind on addressing some of the outcomes that we’re currently experiencing with existing technologies today.

Speaker 1 (01:28):

The order also requires that the most advanced artificial intelligence products be tested to assure that they can’t be used in the development of biological weapons or nuclear weapons. Help us understand what the government is concerned about in terms of the range of threats on that front.

Nicol Turner Lee (01:44):

We have often talked about this race to AI. And I think this is coming true more and more, particularly as we see these developments across the world, where we’re seeing technological advances being integrated into traditional military stances. What that means is, in the United States, we have to get a handle on this. That means ensuring that our allies, as well as our competitors, are not developing weapons or tools that embed AI in ways that we cannot actually fight. It’s really important that people see this AI order as one that involves the general public. But, most importantly, it’s about protecting our borders and ensuring that we’re not allowing in what could potentially become fateful tools that are relying upon technological vulnerabilities to actually succeed in acts of war.

Speaker 1 (02:32):

In his remarks before signing the executive order today at the White House, the president spoke about AI’s ability to create convincing disinformation.

Joe Biden (02:41):

With AI, fraudsters can take a three-second, and you all know this, three-second recording of your voice. I have watched one of me on a couple. I said, when the hell did I say that? But all kidding aside, a three-second recording of your voice and generate an impersonation. Everyone has a right to know when audio they’re hearing or video they’re watching is generated or altered by AI.

Speaker 1 (03:07):

So, what the president is talking about, those are known as deepfakes. And so this executive order calls for the Department of Commerce to come up with standards for watermarking AI-generated content, so that everybody knows it was created by artificial intelligence. It’s one thing for the Commerce Department to recommend standards. It’s quite another thing for the federal government to enforce those standards. How would that work? Or is there any provision for enforcement?

Nicol Turner Lee (03:32):

Well, the challenge right now is that the executive order is nonbinding. The hope is that Congress will see the importance of this type of intervention and actually pass strong legislation. In light of a presidential election coming up, the possibility of manipulation of faces, voices, and other biometric features, and the convincing nature of the technology, particularly generative AI, all call for this kind of action. So the president may joke about somebody impersonating him. Well, there’s a lot of us that could be impersonated, and that can have very, very detrimental results, particularly as we head into our election period.

Speaker 1 (04:09):

We know that artificial intelligence can absorb human biases in the training data. And the executive order directs federal agencies to use their existing authority to prevent discrimination in areas like housing, education, and employment. How so?

Nicol Turner Lee (04:25):

One of the things that we don’t realize as users of the Internet is that these technologies treat us as products, right? We’re the commodities. We are the subject of why these technologies work so well. And what that means is that the data that it’s constantly scraping is not only the data that belongs to each of us individually, but it’s the data that belongs to the context and the historical periods in which that data was actually generated. So, bias shows up. It shows up along racial lines, gender lines, shows up in terms of distinguishing our sexual orientation. The bottom line is, this data is coming from somewhere, and it’s coming from our community, as well as our person. It’s really important, for the many years that we have fought for civil rights laws and human rights, that we not allow technologies to come in and change the nature of that game.

Speaker 1 (05:15):

Well, on that point, how do the recommendations and requirements in today’s executive order square with AI standards across the rest of the world?

Nicol Turner Lee (05:24):

The European Union, by comparison, has very prescriptive regulation, not just values and norms. They have those, but they have actually taken the time to come up with things like a data privacy standard and to think about AI use in high-risk categories, like credit or housing or education or employment. I’m excited that the United States has finally gotten into this game and gotten to the party. We’re a little late, but it does mean that we are paying attention to the EU and other places that have put in place AI regulation. But this right here is not regulation, my friend. This is something that is definitely going to need the will of Congress to get things done.

Speaker 1 (06:07):

Nicol Turner Lee is director of the Center for Technology Innovation at the Brookings Institution. It was a real pleasure to speak with you. Thanks for your time.

Nicol Turner Lee (06:14):

Thank you for having me.
