Sep 18, 2023

Senate Committee Holds Hearing on Artificial Intelligence and U.S. Competitiveness Transcript


Senator Joe Manchin (00:00):

… [inaudible 00:00:01], Lawrence Livermore and Lawrence Berkeley National Labs in California, Los Alamos and Sandia National Labs in New Mexico, and Oak Ridge National Lab in Tennessee. The labs work to bring together both fundamental science and national security missions. This hearing will examine their findings.

(00:17)
This hearing will also discuss the $1.8 billion Exascale Computing Program that the committee authorized. If we want to invest in AI in a cost-effective way, we must build on these existing programs and avoid wasting resources and duplication. Most people think about the Department of Energy for its work advancing energy technologies, like nuclear reactors, energy efficiencies, carbon capture, and hydrogen. But DOE does more than just energy. The department also is the largest supporter of physical scientific research in the federal government, conducting research and developing technologies across a range of fields, from quantum computing, to vaccine development, to astrophysics.

(00:57)
Last Congress, we spent a lot of time examining DOE’s critical role in broad scientific research in the context of the Endless Frontier Act, which ultimately became law as the CHIPS and Science Act. DOE’s scientific work jump-starts private sector innovation, strengthens our economy, and is central to our national security. DOE research ensures the US can anticipate, detect, assess, and mitigate emerging technology threats related to advanced computing, biotechnology, nuclear security, and much more.

(01:29)
Artificial intelligence stands out across DOE’s vast mission. It has the potential to revolutionize scientific discovery, technology deployment, and national security. In fact, AI is already changing the world at a remarkable pace. We’re seeing it deployed in battlefields across the world. Ukraine has successfully used AI-enabled drone swarms against Russian forces. Also, AI helped us fight COVID-19. DOE’s Oak Ridge National Laboratories used its artificial intelligence and computing resources to model proteins in the coronavirus to help develop the vaccine.

(02:07)
But make no mistake, artificial intelligence also presents many risks. Earlier this year, a class of non-scientist students at MIT was tasked with investigating whether AI chatbots could be prompted to assist non-experts in causing a pandemic. In just one hour, the chatbots suggested four potential pandemic pathogens; explained how they could be generated from synthetic DNA using reverse genetics; supplied the names of DNA synthesis companies unlikely to screen orders; identified detailed protocols and how to troubleshoot them; and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization.

(02:55)
That comes from a research paper titled “Can Large Language Models Democratize Access to Dual-Use Biotechnology?”, which I ask unanimous consent to enter into the record.

Speaker 1 (03:06):

Without objection.

Senator Joe Manchin (03:07):

Without objection. Scientific and engineering expertise has long been a barrier that protects us from rogue actors. Until now, the common person has not had access to the resources or the know-how to launch these high-tech threats on human society. Irresponsible availability of AI technologies risks eliminating much of the expertise required to develop a weapon, disease, or cyber attack, thereby eroding defenses we have had in the past.

(03:31)
AI is not a new issue for the committee or the Department of Energy. Since the 1960s, DOE has been a key player in investments in AI and automated reasoning. As we all know well, the department has 17 national labs and 34 user facilities that are the crown jewels of America’s R&D network. DOE’s National Laboratory System houses a workforce of over 70,000 scientists, engineers, researchers, and support personnel with world-leading scientific expertise, whose mission is to serve the American people. Each of these labs plays a significant role in the future of AI.

(04:16)
As I mentioned earlier, DOE is also the largest funder of the physical sciences, and manages more scientific data than any other agency in the US. As a result, the department has the computing resources, expertise, and experience managing large volumes of data that give it natural leadership on artificial intelligence.

(04:36)
When federal agencies have an AI problem, they look to the DOE and its labs for help. Over the past decade, the department has developed thousands, and I say thousands, of AI applications. For example, the National Energy Technology Lab in Morgantown, West Virginia, my home area, supports the Department of the Interior in using artificial intelligence to identify orphaned oil and gas wells. For the Orphaned Well Program, AI saves resources by analyzing old land survey maps, drilling permits, historical images, production records, and eyewitness accounts to find well sites.

(05:11)
During the 2023 R&D 100 Awards, which I am told are referred to as the Oscars of Innovation, Dr. Rick Stevens of Argonne National Lab, who is one of our witnesses today, was recognized for his work using AI to accelerate the discovery of new cancer therapies and treatments that are highly personalized for individual patients. And our committee has recently played an important role in advancing DOE’s AI work, recognizing that the United States must not fall behind in the supercomputing race.

(05:43)
We authorized the Exascale Computing Program at the Department of Energy in the 115th Congress. In May of last year, the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee passed exascale: the ability to perform one quintillion calculations per second, that’s a lot, making it the fastest supercomputer in the world. Before we authorized the Exascale Computing Program, China had the fastest computers. Now, the US has regained the lead. The supercomputer at Oak Ridge is already using AI to model the behavior of human cells to develop better treatments for Alzheimer’s, opiate addiction, and cancer.

(06:25)
But the global AI race is just beginning. AI has the potential to add trillions of dollars to the world economy each year. Governments and companies around the world are competing fiercely in this new market. In particular, America must accelerate our efforts to compete and defend against China on AI. It is estimated that annual Chinese AI investments will reach over $26 billion by 2026, which dwarfs the US government’s current spending of about $3.3 billion per year.

(06:59)
Between 2015 and 2021, Chinese AI companies raised $110 billion, including $40.2 billion from US investors, which I cannot even believe, invested across 251 AI companies. In 2017, China released its New Generation AI Development Plan, which includes R&D and infrastructure targets. The US currently does not have a strategic AI plan like this.

(07:25)
In addition to government spending, China’s workforce advantage is significant. It has twice as many STEM PhDs and twice as many STEM master’s degree holders as the US. China has created artificial intelligence PhD programs in every one of its top universities. In regards to the Exascale Computing Program this committee championed, the Chinese government could be set to operate as many as 10 exascale supercomputers by 2025. Xi Jinping himself has pointed to our National Lab network, calling it indispensable momentum for the development and innovation of science and technology.

(08:06)
Soon, China may have its very own lab network. Just last week, a company named Baidu released Ernie Bot, a Chinese Communist Party-approved AI language model comparable to ChatGPT, the app developed in the US which we’ve all heard a lot about. Ernie Bot is the most-downloaded app in all of Asia and is expected to continue to grow. It has occurred to me that DOE needs to do more strategic planning around AI, so that Americans have confidence that we are leveraging our key resources, such as our National Labs, to their fullest potential. We should encourage other agencies to use DOE’s AI resources and promote private sector partnerships with the department and the National Labs to develop safe commercial applications of AI. We must also understand what additional investments are needed to spur US leadership in artificial intelligence.

(08:59)
Congress should focus on strengthening and expanding our impressive existing programs, rather than creating duplicative new programs at other agencies. We should also ensure DOE and the National Labs are able to responsibly recruit leading AI experts, both from our country and globally. Much of America’s AI expertise comes from abroad. Immigrants founded or co-founded nearly half of the top startups in the US, and international students earned 60% of our computer science doctorates. At the same time, we must be absolutely sure that the department’s AI work includes strong research security requirements. We will not outcompete China in AI if they’re able to just steal the technology funded by our taxpayers’ dollars. The CHIPS and Science Act that passed last Congress featured research security improvements that are now law and currently being implemented by the department. However, foreign espionage is an evolving threat, and we must remain vigilant and clear-eyed about this threat.

(10:02)
The United States must remain at the forefront of new emerging technologies, and the Department of Energy is a central component of that effort. I’m looking forward to hearing our witnesses’ perspectives on specific steps our committee and the department could take to ensure America’s advancing AI in a competitive, responsible manner. With that, I’ll turn to my friend, Senator Barrasso, for his opening remarks.

Senator John Barrasso (10:25):

Well, thanks so much Mr. Chairman, and I appreciate your opening remarks, because artificial intelligence is rapidly transforming the world. It’s already impacting our daily lives.

(10:34)
Artificial intelligence plays an important role in the energy sector. In mining, AI can reduce equipment downtime. Advanced algorithms help miners locate mineral-rich deposits for more efficient exploration. Real-time analytics strengthen worker safety programs by predicting potential hazards. Artificial intelligence helps pinpoint oil and gas reserves. Predictive models harness data to streamline operations and reduce costs. AI-enhanced sensors also reinforce pipeline safety and efficiency. So artificial intelligence has great promise to expand our economy and to strengthen our national security.

(11:11)
It also raises, Mr. Chairman, as you point out, some well-documented concerns. A recent study at the University of East Anglia highlighted a significant and systemic left-wing bias in the ChatGPT platform. In the United States, it revealed a clear bias in favor of Democrats. The same program favored the Labor Party in the United Kingdom and the Workers’ Party in Brazil. We can’t let political bias infiltrate development of AI. This is particularly true when taxpayer dollars are helping fund the technology’s development.

(11:44)
Innovation in emerging technologies like artificial intelligence can be a source of great strength. It can be a key advantage in our geopolitical competition, as you point out, Mr. Chairman, with China and with Russia. It can also create a national security risk if the technologies are not properly protected.

(12:01)
The Department of Energy has an important role in artificial intelligence research. The department maintains the world’s most advanced computing systems. Its 17 National Labs have significant experience developing our nation’s most sensitive technologies. For this reason, the People’s Republic of China is watching nearly every move that is made at our National Labs.

(12:22)
A recent report revealed that since 1987, the Chinese Communist Party has targeted over 160 Chinese researchers working at our premier nuclear weapons lab. Upon returning to China, these researchers helped to advance key military technologies using knowledge financed by American taxpayers. In July of this year, senior FBI officials warned that China is targeting US businesses, universities, and government research facilities. China’s trying to get their hands on cutting-edge American research and technology.

(12:58)
As of 2021, over 4,000 non-US resident Chinese nationals still work at our National Labs. Many of these foreign nationals strive to further scientific innovation and do want to collaborate in good faith, yet they find themselves beholden to an authoritarian regime at home, and the Chinese Communist Party is relentless. Some of these Chinese nationals will see no choice but to support the Chinese communists through theft of research and technology. Their families back in China may suffer harsh consequences if they do not comply with their government’s demands.

(13:34)
China’s sustained interest in our intellectual property is a stark reminder of the intense global competition surrounding artificial intelligence. This competition may drive advancements in the field. We can’t overlook the threat to our economic and national security posed by the Chinese government.

(13:51)
The Department of Energy and our National Labs must take the China threat more seriously. We can’t let our technology fall into the hands of those in Beijing. I look forward to hearing from our witnesses today on what additional steps research agencies and the laboratories and universities that they fund must take to prevent this theft of American technology. Mr. Chairman, thanks for calling this important hearing.

Senator Joe Manchin (14:16):

Thank you, Senator. I’d like to first of all thank the witnesses for being here today, and I appreciate very much y’all coming, making the effort. First of all, we’re going to have David Turk, Deputy Secretary of Energy; we have Dr. Rick Stevens, as I said before, and thank you for the great work and being recognized for that, Associate Laboratory Director of Argonne National Laboratory; Mrs. Anna Puglisi, senior fellow at Georgetown University Center for Security and Emerging Technology; and Mr. Andrew Wheeler, vice president and fellow at Hewlett Packard Enterprises. And again, thank you all.

(14:46)
I’ll turn to Secretary Turk. Deputy Secretary Turk, we’re going to begin with your opening remarks.

Hon. David M. Turk (14:54):

Chairman Manchin, Ranking Member Barrasso, distinguished members of the committee, thank you for the opportunity on behalf of the Department of Energy to talk about our activities in and our vision for artificial intelligence. Let me begin appropriately so by thanking this committee for years and years of strong, sustained support that has led to the DOE becoming an AI powerhouse. And chairman, you laid out much of that in your own opening statement.

(15:24)
With your leadership, we have designed, developed and currently operate four of the top 10 fastest openly benchmarked supercomputers in the world, including, as the chairman mentioned, the world’s fastest, Frontier, at Oak Ridge National Lab. Through the Exascale Computing Project, DOE is developing the world’s first capable exascale software ecosystem that is helping to drive AI breakthroughs in critical areas as varied as materials science, cancer research, earthquake risk assessment, energy production and storage, computational weapons applications, and I could go on and on.

(16:03)
Across a network of 34 national user facilities around our country, DOE generates tremendous volumes of high-quality data, literally the fuel that can lead to more AI breakthroughs. And most importantly, DOE’s National Laboratory System houses a workforce of over 70,000 scientists, engineers, researchers, and support personnel with world-leading expertise. It’s a particular pleasure to be joined on this panel by Professor Rick Stevens, who’s one of those top experts, as you mentioned, Chairman, in your opening statement.

(16:38)
But as proud as we all should be about this robust AI foundation at DOE, now is the time to take these capabilities to the next level. Advances in AI are enabling enormous progress and breakthroughs that can help address key challenges of our time, and we need to double down on that technical capability: the computers, the software, the data, and most importantly, the researchers, to make sure that we have those breakthroughs here in the US, and that our private sector can benefit from them as well. Governments around the world are investing in AI capabilities as never before. Chinese investments are expected to reach, as the Chairman said, over $26 billion by 2026. We simply must be bolder and move faster, or risk falling behind. AI also lowers the bar for bad actors to do even worse things, and to do those worse things more easily. AI systems can pose risks to individual safety, to privacy, to civil liberties; risks to society from information manipulation, as the ranking member stated; bias and discrimination, social engineering and market manipulation; bio threats, nuclear threats, chemical threats, all potentially made easier by AI.

(17:57)
Industry alone cannot be fully aware of the relevant risks and threats, because much of that information, rightfully so, falls within the purview of our intelligence community and our national security enterprise. DOE can play an incredibly important role here, including developing methods for assessing and red-teaming AI models to identify and mitigate the risks presented by these cutting-edge AI systems, which are developing and improving incredibly quickly over the weeks and months ahead.

(18:26)
Over the past five years, we’ve worked with stakeholders across the AI ecosystem to identify new and rapidly-emerging opportunities and challenges presented by AI, and to identify very specifically how unique DOE capabilities, the strong foundation, again, thanks to this committee, how we can drive progress for AI going forward, from the Department of Energy side of things.

(18:51)
This culminated in the May 2023 release of a report called AI for Science, Energy and Security. This vision and blueprint aligned precisely with the pressing need for scientific grounding in areas such as bias, transparency, security, validation, and the impact of AI on jobs. We have translated this feedback into a specific proposal for your consideration, called the Frontiers in Artificial Intelligence for Science, Security, and Technology, or FASST, by acronym. This is exactly, Chairman, as you said, a strategic vision, a strategic plan, for the DOE, nested within a broader strategic vision for the US and the US government.

(19:34)
Mr. Ranking Member, you rightfully point out, there are also research security issues and challenges we need to take head-on, with eyes wide open, improving our systems on a regular basis. I want to thank our fellow panelist, Ms. Puglisi, for all her work and her excellent testimony, which we can build on even further on the Department of Energy side, including our Science and Technology Risk Matrix, which I’d be happy to get into in the question and answer session. We very much look forward to further discussing the FASST proposal and everything else we’re doing, and updating it based on this committee’s continued guidance and leadership.

(20:14)
There is no doubt that with AI, we are now on the cusp of our next grand challenge here in the United States. Working within and outside the government, DOE stands ready to step up to this moment, to play our role in fully engaging in this grand challenge, by utilizing our unique computing capacity, comprehensive well-curated datasets, our algorithms, relationships with industry, and again, most importantly, our skilled leading scientific workforce. All of us at the Department of Energy and our National Labs very much look forward to working with this committee to live up to this moment. Thank you, Mr. Chairman.

Senator Joe Manchin (20:53):

Thank you, Deputy Secretary Turk. And now, we’re going to go to Dr. Stevens.

Dr. Rick L. Stevens (21:00):

Thank you, Chairman Manchin and Ranking Member Barrasso and members of the committee for this opportunity to participate in today’s discussion about National Labs and AI. I have worked on advanced computing for over 30 years at Argonne and at the University of Chicago, and much of that time, I’ve been driven by this idea that we need to build intelligence into future computing.

(21:25)
And over the last four years, I have had the good fortune to work with my colleagues at all the labs, all 17 labs, over 30 universities and dozens of companies, to run a series of town hall meetings, seven town hall meetings over four years, that involved over 1,300 researchers. And at these meetings, we challenged the community to think broadly about how advanced AI systems, going beyond what we can do today, could be developed and applied in DOE mission spaces to accelerate scientific research, accelerate the development of energy technologies, and improve national security-related work. And what I’m going to tell you about right now is a little bit of the outcomes of that effort.

(22:09)
The consensus is there’s an enormous opportunity here to use AI to accelerate discovery, both in basic science, accelerate the application of that in energy technologies, and to improve how we actually conduct all of our work in national security. Some of these applications could range from new technologies for better batteries that, say, require less rare earth minerals, which would improve global security in and of itself, to new types of polymers that could be ideal for each application, but could be recycled indefinitely without losing performance. We don’t know how to do that today, but we think AI could help us with that.

(22:46)
We believe AI could be coupled with robotics to automate much experimental science, improving throughput by orders of magnitude. In fact, it’s so compelling, that idea, that some of my colleagues have formed this concept of AI-driven science factories, or some people call it self-driving laboratories, as a way that we will actually accelerate work in drug development for cancer or new materials for semiconductors.

(23:13)
AI can also address key challenges in software development. DOE manages billions of lines of code, and we do not have enough-

Speaker 2 (23:23):

Thank you.

Dr. Rick L. Stevens (23:25):

… developers, enough senior software developers, to maintain that code and port that code to new machines. We know AI can help us with that problem. In fact, AI systems appropriately trained and tuned could help us design not only software, but hardware, for next-generation systems, and help us build systems that could save a huge amount of energy.

(23:46)
AI systems are also being used to explore ways to control complex systems like fusion reactors, and we think that same idea could be applied to control future power grids, where we have a diversity of sources and changes in demand. AI can also be used to accelerate scientific simulations by replacing traditional numerical methods with new AI-driven methods, and achieving speedups of factors of 100 or more across many applications, from weather prediction to electronic structure computation that’s used on over 30% of DOE computers.
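
[To make the surrogate idea concrete, here is a minimal sketch of the pattern Dr. Stevens describes, not any actual DOE code: a cheap learned model is fit to a limited number of expensive solver runs and then answers new queries far faster. The solver and the polynomial surrogate below are toy stand-ins.]

```python
# Minimal sketch of an AI/ML surrogate replacing an expensive numerical method.
import time
import numpy as np

def expensive_solver(x: float) -> float:
    """Stand-in for a costly numerical simulation (e.g., a fine-grid solve)."""
    time.sleep(0.01)  # pretend each solve takes real compute time
    return np.sin(3 * x) * np.exp(-0.5 * x)

# 1. Run the real solver a limited number of times to generate training data.
xs = np.linspace(0.0, 4.0, 40)
ys = np.array([expensive_solver(x) for x in xs])

# 2. Fit a cheap surrogate (a polynomial here; practice favors neural networks).
surrogate = np.poly1d(np.polyfit(xs, ys, deg=9))

# 3. Query the surrogate instead of the solver: microseconds per evaluation,
#    which is where speedups of a factor of 100 or more come from.
print(f"surrogate: {surrogate(1.234):.4f}  solver: {expensive_solver(1.234):.4f}")
```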

(24:20)
And finally, the biggest opportunity is probably this idea of foundation models, the underlying technology behind things like ChatGPT, but applying that to science. We’re discovering that those types of technologies are incredibly versatile for doing scientific problems. They have been trained on millions of science papers, vastly more knowledge than individual scientists would ever absorb in their lifetime, and can be used to integrate and synthesize knowledge, suggest new lines of attack on open problems, and so forth.

(24:50)
In short, to the surprise of many, current foundation models have demonstrated unusual utility in science maybe a decade earlier than we thought. And that is one of the dramatic opportunities and challenges, because these models can directly affect scientific productivity today, and we do not have a strategy across the department for aggressively using this. It’s a big opportunity and also a challenge.

(25:18)
So AI in all of its forms is rapidly becoming the most important tool in the scientific and technical toolbox. And as a result of these workshops and the progress of the last five years, I believe it’s imperative that the US lead the world in the development of advanced AI systems for scientific and national security applications. I believe DOE is the only agency that can do this, that has all the resources under one roof. Of course, it’s going to be a partnership with private industry to do this, and with our academic colleagues.

(25:55)
I believe we should commit over the next decade to building the most powerful advanced AI capability for science, energy, and national security. Some might call it an artificial general intelligence for science or perhaps a super-intelligence for science. It could have many names, but the goal is to go dramatically beyond where we are today in a secure fashion and a reliable fashion.

(26:21)
Whoever leads the world in AI for science will lead the world in scientific discovery and have a head start in the translation of those discoveries into products that expand our economy and address modern needs. And in doing that, we will secure what I call the Innovation Frontier by AI. Whoever leads the world in the development of AI for energy will lead the world in developing and deploying next-generation energy technology, such as modular reactors that can be safe and deployed anywhere at a moment’s notice; or super-efficient combustion systems to take maximum advantage of our resources; and scalable approaches to carbon sequestration, which we desperately need; and better and more effective strategies for, say, electrification of the economy. And by doing that, we will secure the energy and climate frontier.

(27:16)
And finally, whoever leads the world in understanding and mitigating the risks of AI and the use of AI to improve national and global security, will determine the landscape in which we and our allies will live and work in the future, securing our lifestyles and our prosperity. Thank you for your time, and I really look forward to the questions.

Senator Joe Manchin (27:41):

Thank you, Dr. Stevens. And now, we will have Ms. Puglisi. Puglisi? Am I saying that right? I’m so sorry if I-

Ms. Anna B. Puglisi (27:48):

That’s okay. Thank you. Chairman Manchin, Ranking Member Barrasso, distinguished members of the committee and staff, thank you for the opportunity to participate in today’s hearing. It’s an honor to be here alongside the esteemed experts on this panel. I’m currently the senior fellow at the Center for Security and Emerging Technology at Georgetown University. I previously served as the National Counterintelligence Officer for East Asia, and I’ve studied China’s S&T development and tech acquisition strategy for most of my career.

(28:19)
My testimony today will first address why China targets the DOE labs, provide a brief overview of China’s S&T system, and finally discuss potential mitigation strategies. I will also offer lessons learned, which include that this is not a DOE problem, but a US-wide problem, because China’s system is not the same as ours. China takes a holistic approach to developing technology, blurring the lines between public, private, civilian, and military. Our policies and mitigation strategies need to reflect this reality. Beijing, in many ways, understands our societal tensions, and its statecraft is directed at them, exploiting identity politics by portraying any changes in US policy as ethnic profiling.

(29:07)
It’s because of this last point that I want to acknowledge how difficult and challenging these issues can be. My own grandparents were immigrants who came into this country with little formal education and worked menial jobs. My presence here today is a testament to the American dream, but it is because there’s no room for xenophobia or ethnic profiling in the US. It goes against everything we stand for as a nation. And precisely because of these values, we must move forward to find principled ways to mitigate the policies of a nation state that is ever more authoritarian and seeks to undermine the global norms of science.

(29:48)
And the importance of science is why China targets the DOE labs. Emerging technologies, as we’ve heard, such as AI, biotechnology, new materials, and green tech, are increasingly at the center of global competition. The DOE labs, because of their mission, are in the crosshairs. While many are familiar with DOE’s mission in regards to stewarding our nuclear deterrent, it also plays an essential role in emerging technologies and research and is essentially a window into the priorities of the US government. And I have to say, DOE is really an underappreciated resource. While China is not the only country that targets US technology and the DOE complex, China’s efforts are complex and multifaceted and part of a state-sponsored strategy to save time, money, and advance its strategic goals, specifically in these emerging technology areas. My written testimony goes into more detail on the policies, programs, and infrastructure that support these development efforts. China’s legal system also complicates collaborations with the DOE complex because its laws compel its citizens to share information and data with Chinese entities if asked, regardless of the restrictions placed on that data and, more importantly, who owns it. I have also provided these in my written testimony.

(31:14)
Moving forward, we need to consider the following. We need to have policies for the China we have, not the China we want. Most policy measures to date have been tactical and not designed to counter an entire system that is structurally different than our own. It’s essential that the United States and other liberal democracies invest in the future. We’ve heard about the great promise of these technologies, but we must build research security into those funding programs from the start. Existing policies and laws are insufficient to address the level of influence that the CCP exerts in our society, especially in academia and research. Increased reporting requirements for foreign money in our academic and research institutes and clear reporting requirements and rules on participation in foreign talent programs are a good start.

(32:07)
We also have to ensure true reciprocity in our collaborations. For too long, we’ve looked the other way when China has not followed through on the details of the S&T agreements. There have been no repercussions for not sharing data, not providing access to its facilities, and obfuscating the true affiliations of its scientists. However, I want to caution that extreme policy reactions, such as closing our eyes and doing nothing or closing our doors, only really benefit China, the latter by discrediting en masse all efforts to address the problem and by depriving ourselves of the great contributions of foreign-born scientists.

(32:42)
In conclusion, what will also make this difficult is that the reality China presents is inconvenient in the short term. This includes companies looking for short-term profits, academics who benefit personally from funding for their laboratories, and former government officials who cash in as lobbyists for China’s state-owned or state-supported companies. I want to thank the committee again for continuing to discuss this issue. These are hard conversations that we as a nation must have if we want to protect and promote US competitiveness, future developments, and our values. So thank you very much.

Senator Joe Manchin (33:19):

Thank you. And now we have Mr. Wheeler.

Andrew Wheeler (33:25):

Chairman Manchin, Ranking Member Barrasso and distinguished members of the committee, thank you for the opportunity to testify today, and thank you for this committee’s support for the Exascale Computing Initiative. My name is Andrew Wheeler and I lead advanced development in high performance computing and artificial intelligence, and serve as the director of Hewlett Packard Labs, the central applied research group for Hewlett Packard Enterprise. While we trace our roots back to the original Hewlett Packard Company, as many of you know, Hewlett Packard Enterprise was formed as a new publicly traded company in November of 2015.

(34:03)
At HPE, we fundamentally believe that AI will have as significant an impact on our lives as any technology to date. Training the largest AI models is a supercomputing problem, and at HPE, we build the world’s best supercomputers. With our partners at the Department of Energy, we co-design and co-build supercomputers that target complex scientific, engineering and data intensive workloads. These include systems at Sandia and Los Alamos National Laboratories in New Mexico, the National Renewable Energy Laboratory in Colorado, and the National Energy Technology Laboratory in Morgantown, West Virginia.

(34:47)
Our national investments in supercomputing have far-reaching benefits across the federal government. For example, the innovations in computing power and density that we provide to the DOE are also being used across the Department of Defense, in the intelligence community, at the National Oceanic and Atmospheric Administration to forecast weather, and at the National Science Foundation centers. In fact, during the early stages of the COVID-19 outbreak, the national labs, including Argonne and Lawrence Livermore, used their supercomputers to accelerate a path to treatment to combat the disease. Using detailed digital simulations to analyze a vast set of drug candidates, researchers at Lawrence Livermore narrowed down the number of potential antibody candidates from an initial set of 100 duodecillion, that’s a one with 40 zeros after it, to just 20. The lab’s researchers accomplished this in weeks compared to the years it would take using other approaches.

(35:52)
In 2016, HPE was proud to be chosen as a key partner in the DOE’s Exascale Computing Initiative, which was designed to accelerate the research, development, acquisition, and deployment of new technologies to deliver exascale computing and to usher in a new era of supercomputing speed and capabilities. Then in May of 2022, HPE, as part of a public-private partnership with Oak Ridge, achieved exascale computing with a computer that is more powerful than the world’s next four fastest systems combined. To put exascale into context, the human brain can perform about one simple mathematical operation per second. An exascale computer can do at least one quintillion, which is one billion times a billion, calculations in the same amount of time.
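
[For readers checking that comparison, a quick back-of-the-envelope sketch, using the one-operation-per-second figure above:]

```latex
1~\text{exaflop/s} = 10^{18}~\text{operations per second} = 10^{9}\times10^{9}
\quad\text{(a billion times a billion)}

\text{At 1 operation per second, matching one second of exascale work takes}~
\frac{10^{18}~\text{s}}{3.15\times10^{7}~\text{s/year}} \approx 3\times10^{10}~\text{years.}
```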

(36:49)
The success of the Exascale Computing Initiative restored the US’s position as having the world’s most powerful computer and also marked the creation of the world’s largest AI system, which will soon be joined by systems installed at Argonne and Lawrence Livermore. The Exascale Computing Initiative was a model of success: Congress made the right investments, our national labs challenged America’s technology industry, and at HPE, we rose to the challenge.

(37:19)
In conclusion, while the United States has regained its rightful role as the world leader in supercomputing, now is not the time to rest on our laurels. The DOE National labs are producing results that researchers could only have dreamed of just a few years ago. Continued investment in this successful partnership is in our national, economic, and security interest, and HPE looks forward to working with the US government to continue our global leadership.

Senator Joe Manchin (37:51):

Thank you. Now I’ll begin with our questioning. My first question will go to Secretary Turk and Dr. Stevens and Mr. Wheeler. In my testimony I mentioned a study about how AI was used to provide a clear and detailed steps to create a pandemic or bioweapon. The DOE and the labs are uniquely positioned to do extensive work in detecting and mitigating emerging technological threats related to an array of biotechnologies and nuclear security. So Mr. Turk and Dr. Stevens, what can the department and the labs do to address these safety and security concerns?

David Turk (38:29):

Well, thanks for the question. You’re right to raise this as an issue, and the MIT study that you referenced and put into the record is a real eye-opener. It should be, especially for those who don’t deal with these issues on a daily basis. So we’ve got a real challenge here. As Professor Stevens and others have pointed out, AI can do a lot of good, but it can do a lot of harm here, and it allows actors who aren’t as sophisticated scientifically or technologically to do certain things that could cause huge, huge harm.

(38:58)
So from the DOE side, I think we’ve got some ability to be incredibly helpful working with others, Department of Defense, HHS, others as well. We’ve got to remember, our national labs don’t just work for the Department of Energy, they work for all the other agencies, and a lot of other agencies already have a lot of programs, including in the biodefense and biotech area. We also at the Department of Energy know how to deal with classified information. We’ve got our own intelligence branch, and that’s incredibly important here as well. So we’re not just relying on what’s in the open record, but we have the best of what’s going on from a scientific and certainly from an intel perspective.

Senator Joe Manchin (39:34):

Let me give you a quick overview of what I’m trying to get to. Basically, I look back, and we all remember when the internet was coming on board; it was basically born out of the labs. And then by the 1990s, early ’90s, we had to do something. We created Section 230, thinking we would let it develop and be all it could be. We look back, it’s even more than what we thought it could be, and it’s been used very effectively to help economies and help people all over the world, but it’s been used very detrimentally too. So we’re trying not to recreate that same environment here with AI. That’s what we’re looking at. So what you saw, just what the MIT students could do in one hour, it’s alarming, and I’ve advised some of my colleagues about this. What I want to know is what can you do to stop something like that, and what kind of guardrails? Mr. Turk, I have something to come back to you about anyway. Dr. Stevens, do you want to say something on that?

David Turk (40:34):

I was just going to say this is exactly why we need to invest in these capabilities; we need to be ahead of the curve, and Professor Stevens can certainly get into that more.

Dr. Rick L. Stevens (40:42):

Let me just try to outline quickly how we would approach that problem. So of course DOE is working with NIST on a thing called the AI Risk Management Framework, which currently is largely envisioned as a process that uses humans to evaluate the trustworthiness and the alignment, that is, whether a model does something that you’d like it to do or something that you don’t want it to do. I think there are actually two key problems that we have to solve. One is we have to have the ability to assess the risks in current models at scale. There are over 100 large language models in circulation in China. There are more than 1,000 in circulation in the US. A manual process for evaluating that is not going to scale. So we’re going to have to build capabilities using the kind of supercomputers we have, and even additional AI systems to assess other AI systems, so we can say this model is safe, it doesn’t know how to build a pandemic, or it won’t help students do something risky. That’s one thing we have to do.

(41:45)
The second thing we have to do is understand the fundamental issue of what’s called alignment. That is, building models that align with human values and do so reliably. And that’s a fundamental research task. It’s not something that we can just snap our fingers and say, “We know how to do it.” We don’t know how to do it. Companies don’t know how to do it, labs don’t know how to do it, universities don’t know how to do it. That’s one of the goals that we’d have to have in a research program like this. So we need scale, the ability to assess and evaluate risk in current and future models, and we need fundamental R&D on alignment and AI safety.
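
[As a minimal sketch of the first problem, assessing models at scale, here is what an automated screening harness could look like. This is illustrative only, not any real DOE or NIST tool, and every name in it (looks_unsafe, assess_model, the refusal markers) is hypothetical.]

```python
# Hypothetical automated red-teaming harness: software, ultimately another
# AI system, screens many models against a battery of red-team prompts.

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "not able to provide")

def looks_unsafe(response: str) -> bool:
    """Crude stand-in for a judge model: flag any reply that does not refuse.
    A real harness would use a trained classifier or a judge LLM instead."""
    text = response.lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

def assess_model(model, red_team_prompts) -> float:
    """Return the fraction of red-team prompts that elicit an unsafe reply.
    `model` is any callable mapping a prompt string to a response string."""
    unsafe = sum(looks_unsafe(model(p)) for p in red_team_prompts)
    return unsafe / len(red_team_prompts)

# Screening the whole ecosystem is then a loop over a model catalog, which is
# exactly the step that cannot stay manual once thousands of models circulate:
# risk_scores = {name: assess_model(m, prompts) for name, m in catalog.items()}
```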

Senator Joe Manchin (42:26):

Well, I mean, it’s growing so quickly and expanding to what AI, when we heard about it and coming at it from our standpoint, to where we are today and to have a class study, and these were non-scientist students, to be able to get this in. How can we put that cat back in the box?

Dr. Rick L. Stevens (42:47):

I don’t think we can put it back in the box. I think we have to get smarter about how we manage the risks associated with advanced AI systems, and using the term that people are using quite a lot about eyes wide open. There’s no putting Pandora back in the box. Every person within the next few years is going to have a very powerful AI assistant in their pocket to do whatever it is that they can get that assistant to help them to do. Hopefully most of that will be positive advances for society and so on. Some of that will be negative. We’ve got to be able to understand how to reduce that negative element, detect it when it happens, and mitigate it either through laws or through other means, technical means, before something dramatically bad happens. And I think that needs to be part of the technical agenda for the labs and, quite frankly, across the federal government.

Senator Joe Manchin (43:42):

I’m going to take the liberty of having seven-minute rounds, so if you want to set that for seven minutes, we will. Ms. Puglisi, do you want to speak on this at all? Do you have any comments coming from the institutional?

Ms. Anna B. Puglisi (43:54):

Sure. I think that it’s also important as we look at the AI as a tool of discovery, and in some ways you could say that the study that the classroom did was of discovery, that there are a lot of steps though that need to happen from the time you go from a sequence into something that can really have the large scale damage that is talked about. And so that’s one of the things that we are actually taking a closer look at is having the sequence is one thing, but then what are those follow-on steps? And so that’s something that-

Senator Joe Manchin (44:33):

Mr. Wheeler?

Ms. Anna B. Puglisi (44:33):

… there’s still a lot of … I’m sorry.

Senator Joe Manchin (44:34):

No, I’m sorry.

Ms. Anna B. Puglisi (44:36):

… biology that has to go on in between that.

Senator Joe Manchin (44:42):

I guess in a nutshell, too late to put any guardrails on? Have we missed it?

Andrew Wheeler (44:46):

Not entirely. I think there are many layers to this. I think there’s both a policy aspect to this as well as a research component. As an example on the policy side, at our own company we spent over a year and a half developing what we call our AI ethics principles. And this is all about getting our thousands of engineers and users to go through training around, okay, what does it mean to use AI in our product developments? How are we going to deploy solutions that harness AI? Now, that can’t solve every problem, because as you mentioned, there are bad actors that maybe won’t follow that same line of reasoning, and that’s where I think the research and investment comes into play. There’s a broad field of study around trustworthy AI, which ultimately can provide some of those guardrails you’re asking about, but we’re still really in the early days of deploying some of those solutions, and there’s a lot of work left.

Senator Joe Manchin (45:57):

Thank you. Sorry. Senator Barrasso.

Senator John Barrasso (45:58):

Thanks, Mr. Chairman. Ms. Puglisi, just a couple things. I mean, you are an expert on Chinese science and technology policy. You very well outlined for us in your opening statement the threats that China poses to government-funded research as well as private sector development. So we have more than 4,000 Chinese nationals working in the Department of Energy labs. Are these employees vulnerable to the Chinese Communist Party and their talent recruitment programs? How does that work?

Ms. Anna B. Puglisi (46:27):

Great. Thank you. The talent recruitment programs really do pose quite a challenge, principally because a lot of these individuals, when they sign these contracts, often obfuscate their participation. But I think, as I mentioned in my opening statement, it’s really important as we go forward that we acknowledge the policies and programs that China has put in place and really focus on how our system and their system are different. And that’s why it’s important to talk about the human rights issues and the kinds of pressure that the Chinese government can bring to bear on individuals, especially, as you had mentioned in your opening statement, those whose families are still in China. Now, I think it’s a really delicate balance. And so some of these reporting programs, and also just following up on different affiliations and thinking through, I think the risk matrix is one of those tools that can be very useful, because all research doesn’t have the same amount of risk. And so it’s important to not have a one-size-fits-all approach to this. But it also highlights the importance of really investing in homegrown talent as well. Thank you.

Senator John Barrasso (47:53):

So Mr. Turk, more than a year ago I wrote to the department regarding the persistent threat of Chinese foreign nationals doing research on sensitive technology at our labs. I have a copy of the letter. It brought to attention 162 Chinese nationals who actually stole sensitive research material from the Los Alamos lab. Your department answered the letter but really didn’t answer my question. So let me ask the question to you. Does the benefit of the work of the Chinese foreign nationals within our labs outweigh the documented risks to both our research and our national security?

David Turk (48:32):

Well, let me first thank you for all your focus on this issue, and thanks to Ms. Puglisi and others who’ve focused their careers here. And I thought that Ms. Puglisi’s testimony, as I said in my opening, was incredibly useful, just to go eyes wide open: here’s the threat, here’s what we face, and how do we deal with it and get the balance right. So three things maybe just to point out, and happy to get into any of this in detail. One, we do have specific restrictions. So you can’t work at a DOE lab if you’ve done a talent recruitment program, and we make sure that we’ve got that prohibition and those restrictions in place, trying to really think about not just what’s called a talent recruitment program, but other ways that the Chinese government or others can get around that as well, so that we’ve got eyes wide open on those specific prohibitions.

(49:19)
Secondly, as was mentioned, we’ve got this Science and Technology Risk Matrix. This goes beyond what’s under export control or what’s under classification, making sure we’re looking at technologies and, just as Ms. Puglisi said, doing a ranking of where the most sensitive technologies are. AI is one of the six sensitive technology areas that we have a particular focus on in this risk matrix, and we make sure that for those very sensitive applications, we have extra protections. So it’s a risk-based model along those lines.

(49:49)
Third, we do have a counterintelligence unit at the Department of Energy and all of our field offices cover all of our labs. So we are actively investigating and making sure that we’re following up on any leads so that we can be as thoughtful and proactive as we possibly can. There is a balance here, just as you said, just as Ms. Puglisi said. It’s a great part of our scientific apparatus that we have folks from all over the world who want to come work here, leading scientific minds. You think of Albert Einstein, you think of a number of others who benefited our country immensely and we want to take advantage of that, especially where appropriate with open science, with areas that are fruitful for that kind of focus as well. It’s also useful to note, I’ve got one statistic here, many of the folks who come here to work in the US, including in our labs, end up staying and becoming incredibly important parts of our ecosystem. So over 90% of top AI PhD students from around the world stay here in the US five years after graduating, and that is a huge benefit. But looking forward to working with you further.

Senator John Barrasso (50:51):

I appreciate that, because the good news is 90% come and stay, and then the concern is that there’s potentially the 10% that do return to China, and how do we-

David Turk (51:00):

Or have families there, as you’ve mentioned, and Ms. Puglisi has mentioned, and again, eyes wide open to take those threats head on.

Senator John Barrasso (51:06):

Yes, Dr. Stevens, I don’t know if there’s something you want to add on this, but I’m interested in how are foreign nationals from countries of concern, how they’re vetted before they’re hired in your lab.

Dr. Rick L. Stevens (51:17):

There’s a process that’s actually quite similar across all the laboratories where there’s a background check. There’s the filters that Secretary Turk mentioned in terms of recruitment programs and their history. There’s a famous form, 493 we call it, that foreign nationals have to fill out. It’s a long process to get hired and to get cleared, and not just to be hired, but even to come as a visitor and to participate in use our facilities. So I think labs do a quite good job of screening this, and they make very valuable contributions. One statistic that I think was maybe mentioned, over 60% of the computer science graduate students in the US are foreign born, and the workforce component that we need to build advanced AI systems will not function if we prohibit those students from participating in this ecosystem. So we’re going to need to really accelerate our workforce development, and foreign born participants are an important component of that.

Senator John Barrasso (52:24):

So a follow-up to Mr. Wheeler. So given the global nature of the technology development, how does your organization navigate the challenges of international collaboration while ensuring the security and the integrity of the research?

Andrew Wheeler (52:44):

Yeah, we have a couple of processes internally [inaudible 00:52:44]. There we go. Sorry. So much like the national labs, we have a process for how we onboard talent as well. We also have ongoing training that’s mandatory around global trade. And so it’s very specific. Everyone gets trained on the regulations around how you interact, whether it’s a collaboration opportunity with anyone abroad, honestly. And so we have very strict controls that manage what kind of technology can be transferred and who we work with. So very tight guidelines there. And then above and beyond that, for the projects we’re involved in specifically, and this is obviously closer to the Department of Defense, if it’s a project that requires only cleared personnel, we have that ability. We have the ability to do secure manufacturing. So we’ve got a lot of steps in terms of security, who we work with, and then how the work ultimately gets done.

Senator John Barrasso (53:55):

Thank you, Mr. Chairman.

Senator Joe Manchin (53:57):

Senator Hirono.

Senator Hirono (54:00):

Thank you, Mr. Chairman. So we’ve heard from all of you that the Chinese government has a systematic campaign of stealing American intellectual property to advance their economy, and that our DOE labs are targeted for this kind of effort. But I want to point out, as some of you have pointed out, the sensitivities involved and the balance that is required. So it is important to deter Chinese government wrongdoing and prosecute espionage and theft. But our concern is about the Chinese government’s actions, not Chinese people. And we must avoid misguided prosecutions such as what was undertaken by the Justice Department in the previous administration with their China Initiative. Going after researchers on shoddy evidence will hurt, not help, American innovation by sending the best minds elsewhere. So listening to some of the responses that you’ve provided already on this subject, for Secretary Turk: you say that we are going forward with eyes wide open and that the DOE has taken some proactive steps. So do you consider these steps adequate to protect us from the kind of intellectual property espionage engaged in by entities such as China and perhaps Russia and Iran?

David Turk (55:36):

Well, you’re right to say it’s not just China. There’s others as well, of course, Russia, Iran, North Korea, et cetera. And I think the short answer and the honest answer is we always need to do more. The threat is evolving and we need to evolve our responses accordingly, which is why I mentioned this risk matrix. We are annually updating that risk matrix now so that we make sure that we are updating in terms of what technologies we consider sensitive, what protocols we have in place. We have a standing group now made up of folks from throughout the labs and DOE headquarters to take a look and continually provide ideas to the secretary and myself so that we can continually improve. So we just need to improve on a regular basis, on a continual basis, and as you say, rightly, get that balance right.

Senator Hirono (56:22):

Because with the China Initiative, I would say that we did not get the balance right. And in fact, the message to the Chinese community and the AAPI community was that here’s our government targeting these people, and it created an environment where AANHPI people were targeted for various kinds of abuse, to say the least. For Mr. Stevens: in the wake of the devastating fire on Maui, residents have been subject to disinformation on social media, likely coordinated by foreign governmental entities and generated with AI, to discourage residents from reaching out to FEMA for disaster assistance and to sow distrust in the federal government. Are you aware that this happened in the wake of the Maui disaster?

Dr. Rick L. Stevens (57:11):

Absolutely.

Senator Hirono (57:12):

Yes. So at this point, with more of these kinds of natural disasters occurring with much more devastating results, we can expect that there will probably be these kinds of misinformation campaigns to sow distrust in our own government. So how can we use AI or these tools to rapidly detect and counter such efforts to spread disinformation, especially in emergencies or following disasters?

Dr. Rick L. Stevens (57:40):

So I think we have to take several steps. One is to have advanced systems for detection of synthetic or deepfake information, non-true information that gets disseminated. We should uphold the existing laws that prevent that kind of information from legally being disseminated through social media channels. We need to enforce watermarking, this technique of putting secret information in AI-generated output so we can detect when it’s generated by AI. And we need to make headway on watermarking official sources, that is, official news that comes from governments or from responsible parties, so that it can be detected automatically that it is true and correct information, and distinguished from misinformation that’s generated by AI. So there’s a multi-layered approach to protecting citizens from disinformation, and we have to do all of those things.
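
[As a minimal sketch of the “watermarking official sources” idea, assuming a simple keyed-hash scheme: real deployments would more likely use public-key signatures or content credentials, and the statistical watermarks embedded in AI-generated text work differently, by biasing token choices so a detector can spot machine output. Every name below is hypothetical.]

```python
# Hypothetical provenance tagging for official announcements using an HMAC.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-agency-signing-key"  # held only by the official source

def tag_message(message: str) -> str:
    """Attach an authenticity tag to an official announcement."""
    digest = hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()
    return f"{message}|tag={digest}"

def verify_message(tagged: str) -> bool:
    """Check that a message came from the key holder and was not altered."""
    message, _, tag = tagged.rpartition("|tag=")
    expected = hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

notice = tag_message("Disaster assistance is available at disasterassistance.gov")
assert verify_message(notice)                                         # authentic
assert not verify_message(notice.replace("available", "suspended"))   # tampered
```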

Senator Hirono (58:38):

Do you think, Secretary Turk, that we already have these kinds of systems in place? Because, as I mentioned, these kinds of natural disasters are occurring more frequently with more devastating results. You can’t have all this misinformation out there stopping people from accessing the very kind of help that they need. Do we have these kinds of counter systems already in place?

David Turk (59:02):

So honestly, we have some of them in place, not just at the Department of Energy but elsewhere across the government, but not as much as we need to. Absolutely. And Professor Stevens, I think, is exactly right. We need a layered approach, and we need to continually update and improve it, and frankly, have the capabilities, like we’re talking about in this fast proposal, in the US government, so that we can do the kinds of monitoring and analysis that allow us, not only the Department of Energy but others across the government, to have the information and the tools to do the watermarking and other mitigation efforts.

Senator Hirono (59:36):

Do the other two panel members want to weigh in on this concern that we have that, following disasters, there are entities, such as Russia, spreading misinformation to people who are already in great pain? No? You agree that we need to put in place ways to counter this kind of misinformation.

Ms. Anna B. Puglisi (01:00:01):

Right. I think the challenge of misinformation with these tools is real, as I mentioned in my written testimony, given the recent reports about what was happening with Facebook and other kinds of social media. We see misinformation across a wide range of topic areas, from natural disasters to, as I mentioned in my opening statement, all kinds of activities that the US government is either doing or putting in place. So I think it’s a growing issue.

Senator Hirono (01:00:34):

Yes. I want to note that in the case of Maui, the family of federal agencies was there, over 25 or so agencies with over a thousand personnel, and yet there was all this disinformation out there saying that the response was lacking. And so there was this kind of sowing of mistrust; I would say Russia was a big actor in this instance. Once again for Mr. Stevens: I just want to note that your testimony highlighted some of the ways that AI can provide breathtaking opportunities for technological innovation.

(01:01:10)
But what are Argonne and the other DOE National Labs doing to ensure that the technologies you’re helping to develop are accessible to small businesses to help them innovate?

Dr. Rick L. Stevens (01:01:23):

So we’re very concerned about the ability of small businesses and students and so on to learn about AI and to use AI. This concept is often called democratization of access. The different DOE labs have different programs to make our computing facilities, and the AI models we produce that are open, safe, and secure, available to those communities. And we provide help for those communities, whether small businesses or local governments, to gain access to our systems. And I think it’s an ongoing effort.

(01:02:05)
I think more needs to be done there. And I think DOE working in concert with other agencies, particularly NSF via something like the NAIRR Initiative, could actually make a big impact on that, and it’s something that we should do together.

Senator Hirono (01:02:17):

Thank you, Mr. Chairman.

Speaker 3 (01:02:19):

Thank you, Senator. Now we have Senator Murkowski.

Sen. Murkowski (01:02:23):

Thank you, Mr. Chairman, and thank you for this hearing this morning. Obviously a very, very timely topic, as was just alluded to in Senator Hirono’s questions. There’s a lot of good that we can gain from AI, and as scary as I think it is in so many different areas, I think it’s important not to lose sight that, when utilized correctly, it can truly be transformational. So, a question to you, Secretary Turk. As we’re looking at different applications for good within AI technologies and AI workflows, we talk a lot here in this committee, and we’ve been talking a lot, about things like permitting reform.

(01:03:14)
Something that has proven to be tediously long, involving multiple layers of government processes. Do you see an application for streamlining some internal government processes so that we can reduce the time, for instance, that it might take for an agency to deliver on a permit, or really just to process any paperwork and reduce workloads? How do you see this being utilized for the good?

David Turk (01:03:51):

Well, I think the short answer is absolutely. We’re sitting on a treasure trove of data from previous applicants for different permits out there. If we can harness that data with algorithms, with AI, we can shrink the permitting timelines. We can take advantage of that data in a way that allows us to do what we need to do: build out our electricity infrastructure, our transmission, and the other kinds of infrastructure that we need in our country. So there’s no doubt in my mind there’s an awful lot of good that can happen in the energy space, including accelerating on the permitting side.

(01:04:25)
We’ve got a lot of renewables coming into the grid. We’ve got to balance all of that. AI can be incredibly helpful with the power and the data that it has available.
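
For illustration, the idea Deputy Secretary Turk describes, training on historical permit records to predict review times and flag likely bottlenecks, might be sketched as below. The features, target, thresholds, and data are hypothetical stand-ins, not an actual DOE system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features for past permit applications: project size (MW),
# number of agencies involved, application completeness, prior filings.
X = np.column_stack([
    rng.uniform(1, 500, n),
    rng.integers(1, 8, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 20, n),
])
# Synthetic target: review time in days, driven here by agency count and
# incompleteness, plus noise. A real system would use historical records.
y = 60 + 40 * X[:, 1] + 200 * (1 - X[:, 2]) + rng.normal(0, 30, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Flag incoming applications predicted to blow past a one-year target so
# reviewers can intervene early rather than discover the delay later.
pred = model.predict(X_test)
print(f"R^2 on held-out permits: {model.score(X_test, y_test):.2f}")
print(f"{(pred > 365).mean():.1%} predicted to take over a year")
```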

Sen. Murkowski (01:04:33):

So you’ve had a couple of opportunities to come to Alaska. You have been read in very well on many of our unique aspects. And unfortunately, one of those unique aspects is that sometimes we have a lack of data. We just haven’t done the mapping; we haven’t done the review, the analysis. And we know that with AI, your output is only as good as your input. And if you have these holes in the data, that can be a concern in itself. Again, how can we utilize the benefits of AI in government processes for good?

(01:05:24)
Well, ensuring that perhaps some states, some areas like Alaska where data is just not complete, that they’re not actually disadvantaged. Have you given thought to that?

David Turk (01:05:37):

Absolutely. And let me say, it’s been a pleasure to work with you and your staff, and I have had a chance to not only come up to Alaska and go to Anchorage, but also to get out there to [inaudible 01:05:48] and [inaudible 01:05:49] and other areas and really hear from folks about what we can do from the Department of Energy to try to be helpful in that space. But I think you raise an incredibly important point. AI is only as good as the data that you feed into it. And if you don’t have the data, it can’t be the powerful tool for good that you just highlighted on that front.

(01:06:07)
So I think it’s a continuing effort on our part and eager to work with you in our arctic energy office to make sure that we are doing everything we can from the Department of Energy, working with others in the federal government to make the investment so that we have that data available so that it can be feeding into these AI models. So it’s an ongoing effort. We’re trying to make sure that we’re bringing that in to everything that we do, but it’s an ongoing effort and something that we’ll continue working on.

Sen. Murkowski (01:06:32):

Well, we do need to work together on it. And as we identify those gaps, I’ll note that I spend a fair amount of my time on the appropriations side with the Department of the Interior budget, and recognize that we’re still directing a lot of federal resources to just basic mapping, just basic mapping. So we’ve got a long way to go there. Let me ask you about the Department of the Interior. About a month ago, the IG for Interior, Mark Greenblatt, noted in an op-ed in the Washington Post that there had been an inspection undertaken by his office, and they were able to use a simple tool to crack more than 18,000, or 21%, of the department’s passwords.

(01:07:20)
And this included senior department officials, with hundreds of the passwords belonging to employees with elevated privileges. More than 14,000 of these passwords were cracked within the first 90 minutes of testing. And he noted that his office was able to do this by spending less than $15,000. That should alarm all of us. Probably a general question, and I hope you answer yes, but we understand what happened at Interior. Is the Department of Energy any better prepared to ward off nefarious actors than we saw at DOI?
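
For context, the offline dictionary attack the Inspector General’s office described can be sketched as follows: hash each guess from a wordlist and compare it against a dump of password hashes. Real audits use GPU rigs and cracking tools far faster than this toy loop; the accounts, hashes, and passwords below are invented for the demonstration.

```python
import hashlib

# Toy wordlist; real audits use lists with billions of leaked passwords.
wordlist = ["Password123", "Changeme$12345", "Spring2023!", "letmein"]

# Invented dump of unsalted MD5 password hashes, keyed by account name.
stolen = {
    "user_a": hashlib.md5(b"Changeme$12345").hexdigest(),
    "user_b": hashlib.md5(b"correct horse battery staple").hexdigest(),
}

cracked = {}
for guess in wordlist:
    digest = hashlib.md5(guess.encode()).hexdigest()
    for account, target in stolen.items():
        if digest == target:
            cracked[account] = guess

# Weak, guessable passwords fall almost instantly; long unpredictable
# passphrases survive the wordlist.
print(cracked)  # {'user_a': 'Changeme$12345'}
```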

David Turk (01:08:03):

So we’re trying to continually improve. One of the things that makes me most nervous, and you’re right to point out the benefits of AI, but one of the biggest challenges, and the technology is only improving and improving, is that it makes it easier for less sophisticated actors to carry out more sophisticated kinds of attacks, whether it’s cybersecurity or any number of other things: biohazards, even nuclear proliferation efforts as well. And so we’ve got to take that head on. It’s why we need to make the investments in the US government so we can detect these kinds of things.

(01:08:34)
We can be ahead of the curve as much as we possibly can, but this is something we need to keep working on day in and day out, whether it’s the Interior Department, the Department of Energy, or private sector companies as well. This should be a wake-up call. The Pandora’s box is open. We now need to deal with it, and we need to take these kinds of emerging AI challenges head on, and we’re not where we need to be. We need to make the investments. We need to keep working at this.

(01:09:03)
This is why we wanted to put together our fast proposal with our ideas of what we think we need to do to try to do what we can from the Department of Energy side. And again, have the back and forth with you, with others here in Congress to make sure that we are as prepared as we possibly can be but we’ve got work to do.

Sen. Murkowski (01:09:19):

We have work to do. My fear is that what we saw within the Department of the Interior is just one department of 12, and while the vulnerabilities elsewhere may be a little bit different, the impact can be equally disastrous. Mr. Chairman, thank you.

Speaker 3 (01:09:43):

Thank you. Now we have Senator Cortez Masto.

Sen. Cortez Masto (01:09:44):

Thank you, Mr. Chairman. Thank you to the panelists for this great conversation. Let me start with Deputy Secretary Turk, because in Nevada, and I want to also thank Hewlett-Packard, they were a part of this, we have been having some red-team hacking going on at some of our universities to really assess what is happening here. And let me address what everybody has talked about: the nefarious actors, the concerns that AI systems can be tricked into providing instructions for causing physical harm. We’ve talked about that. We need to address it.

(01:10:22)
I think we need to as well. I think those red-team hacking weekends are just as important. That’s the manual piece of it. Dr. Stevens, I think you talked about that needing to continue. And let me just highlight, because I know this weekend in particular that I’m talking about, in August in Las Vegas, was designed around the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, and it’s a competition that happens regularly. But my question here is, one, yes, that needs to continue. Two, though, it also is building our cyber workforce.

(01:10:55)
Is that right? That’s what’s key to this as well: we need to have more engagement in building that workforce. I am proud that UNLV was the host of this and will continue to host these types of red-team hacking exercises. But it also is part of this idea that we have to create these academic centers of excellence in cyber defense, which UNLV is, as are a number of colleges. And I think many of you are participating in those exercises. So I guess my question for you, Deputy Secretary Turk, is: what else should we be doing to build out that workforce?

(01:11:33)
I know there’s work going on right now. Can you talk a little bit about the national cyber workforce and education strategy? How does it fit into what we are trying to achieve in developing that cyber workforce, and what else do we need to know here in Congress to support it?

David Turk (01:11:48):

So the workforce piece is absolutely indispensable, and I think there are a number of ways we need to come at it. We need a comprehensive and coherent strategy. First of all, if you want top talent to come into the government for all the functions we need to serve, you’ve got to have cutting-edge facilities and capabilities, right? The fact that we have the world’s largest supercomputer is a pretty nice attractor for some of the top talent wanting to do cutting-edge applications along those lines. We’ve got the data; we’ve got the other pieces as well.

(01:12:17)
So we have to have that infrastructure that’s attractive for that top talent. The private sector is going to be able to pay folks an awful lot more than the government, even if we have bonuses and other kinds of attractive options, which we’re trying to do. Having the national lab apparatus candidly gives us greater flexibility than if they were all federal government officials in the civil-servant sense. And so I think, and Professor Stevens can certainly talk about this, we use those partnerships. Argonne National Lab has a partnership with the University of Chicago, a cutting-edge university there.

(01:12:48)
That helps in incredibly important ways to try to channel as much focus as we can into this sector. But I think there’s no way we have a successful AI strategy as a government, as a country, unless we have the workforce and the pipelines for the workforce. Making sure that we’ve got that capability, not just in the private sector, incredibly important, but in the government for all the functions that we need to have here.

Sen. Cortez Masto (01:13:10):

And as we’re building out that workforce… I’m going to ask Hewlett-Packard, if you can, because I know you were part of this, and Hewlett-Packard has a Future of Work Academy for community and technical colleges, and they’re involved with nearly, I want to say, a hundred institutions and over 500 students. So the private sector is engaged, correct?

Mr. Andrew Wheeler (01:13:30):

Absolutely. In fact, I’m glad you even mentioned the centers of excellence, because what we’ve found over the years, honestly decades, is that that really is the best practice. Once you have a center with, say, the compute capability, you bring together those domain experts that are local to that institution, and you bring in the universities that are local there as well, and it really does allow you to develop that local workforce. And as we think about AI and needing more and more of that expertise, it’s a great best practice to, again, help develop that workforce locally and grow and innovate together.

Sen. Cortez Masto (01:14:19):

And this is the opportunity, and maybe, Dr. Stevens, I’m going to ask you to talk about this, because it is so hard for us in Congress to come back in and overlay a framework and then actually try to develop values and principles in that framework. And this is an opportunity, as we are building out that cyber workforce, to grow those values and principles around AI. Is that the goal here when we develop the curriculum?

Dr. Rick L. Stevens (01:14:39):

Absolutely. I mean, as AI becomes more powerful, as has already been mentioned, it does two things. For somebody who knows something, it empowers them to do more, right? So whether that’s somebody who’s defending our systems from a cybersecurity standpoint, it allows them to be more powerful, to affect more systems, to be smarter about how they do defense. But it also empowers the other side to be more aggressive in how they might attack systems. And we need, of course, to win those battles. And we have to create a community and a new way of thinking, an AI-enabled cyber strategy.

(01:15:24)
And I think that’s what we have to start teaching. And of course, it’s very attractive to the students. When you talk about cybersecurity, they’re already interested. But then you bring AI into it, they’re super interested. So I think we have a big opportunity to bring more people into the workforce on this by attaching it to the AI agenda.

Sen. Cortez Masto (01:15:41):

Thank you. Ms. Puglisi?

Ms. Anna B. Puglisi (01:15:44):

And that’s really an essential part, the workforce, in the future competition. And I might add, we’ve spent a lot of focus looking at the PhD or higher education level. It’s really: what does it take to have that technically proficient workforce that doesn’t necessarily need a degree, or an advanced degree? And I would venture to say that it’s really important to start at the K through 12 level and really lay that groundwork, because that is really what it’s going to take to compete.

Sen. Cortez Masto (01:16:16):

And that appears to me to be what is happening with the different competitions I see, whether it’s the federal government, state, or private sector. That’s the focus, correct?

Ms. Anna B. Puglisi (01:16:25):

Yes. And actually, CSET has done a lot of work around those topics and the importance of competitions, as well as looking at the demographics of that workforce, so we can make sure that you have that.

Sen. Cortez Masto (01:16:37):

Thank you, Mr. Chairman.

Speaker 3 (01:16:38):

Thank you. If we could, I’ll tell you, we have two votes at 11:45, so we can keep our questions moving. I know we went to seven; we’ll go back to five minutes if we can. If you need a little bit longer, fine, but if you can stay closer to five, it’d be great. Senator Hawley.

Sen. Hawley (01:16:53):

Thank you, Mr. Chairman. Mr. Turk, since I have you before me: last time we talked, we talked about the radioactive contamination that the federal government had delivered to the St. Louis and St. Charles regions of my state. And in particular, we talked about Jana Elementary School in the greater St. Louis region, which was then, and is now, closed because of nuclear contamination that private tests found in the school. Now, when we visited last, this was in February, you told me that you were having conversations about it. I’m just looking at the transcript here.

(01:17:27)
I asked you about the letter from the Hazelwood School District to the Department of Energy requesting additional testing. You said, “I’ve seen their letter. We’ve talked about it.” You said, “We’re having conversations, including with the Army Corps.” You said, “Again, we’re having conversations with the Army Corps.” When I asked you what you were going to do, you said, “I’ll talk to the team.” So that’s been multiple months ago. Why don’t you give me an update? What is the Department of Energy doing?

David Turk (01:17:50):

Happy to do so. And we have a response letter to your most recent letter.

Sen. Hawley (01:17:54):

I’ve read it.

David Turk (01:17:55):

That should be coming today or tomorrow that I was just reviewing.

Sen. Hawley (01:17:57):

I think you sent one yesterday. Well, good. Maybe there’s more.

David Turk (01:17:59):

There’s another one as well. There are two, and the second one should be coming today or tomorrow; I spent some time with the team reviewing it and making sure we were trying to be as responsive as we possibly could on that front. The secretary, myself, and the head of our legacy management team, Carmelo, are working with the Army Corps and others on this front. And on the testing side in particular, I’ve pushed the team several times. I said, what can we do? Is there nothing we can do from the Department of Energy side? And what we can do is work with the Army Corps.

(01:18:29)
And we’re happy to be very active with the Army Corps and make sure that they’re doing, under their authority… FUSRAP gave the authority for these cleanup sites to the Army Corps. We’re playing more of a supporting role, but we’re happy not only to play that supporting role, but to try to push and work with our interagency partners to be responsive, and certainly to listen to the concerns that you’ve expressed, the concerns of the community. It’s a horrific situation.

(01:18:53)
I’m a parent. I’ve got three kids. If this was happening to my school, I would be certainly nervous. If I was a school a few miles away, I’d be nervous as well. So there’s an awful lot that we need to do, not just on the science, but also on the human element as well. And thank you for all your focus on this very important issue.

Sen. Hawley (01:19:09):

Well, when you say that you’re happy to do it, to do X, Y, Z to work with the Army Corps, are you doing it? Are you pushing them to do the additional testing?

David Turk (01:19:17):

We are having active conversations with the Army Corps.

Sen. Hawley (01:19:20):

Well, that’s what you said in February.

David Turk (01:19:21):

Well, it’s for them, because under FUSRAP it’s for them to make the decisions about where they think it’s appropriate to do additional testing. We’ve had active conversations. We are having conversations about what more they are doing right now, and they are doing more right now. I will let them talk to you about other testing [inaudible 01:19:39]

Sen. Hawley (01:19:39):

I’m aware of what they’re doing. Listen, let me tell you what the situation is. Just a few days ago, the Army Corps reported that they’ve removed 301 truckloads of radioactive dirt from the bank of the creek that’s right near the elementary school. Now, this comes after they said for months that there was no contamination anywhere near the elementary school. That’s what they said to the community. That’s what they said to the parents. That’s what they said to the school district. And they said they wouldn’t do any more testing. It was your responsibility to do the additional testing.

(01:20:11)
You said, “No, it’s not. It’s their responsibility.” So currently, nobody’s doing anything more. Additionally, and this isn’t just a few months, Mr. Turk, this is 70 years, since 1949, there has been radioactive contamination in the water, in the soil, all over the St. Louis region. That’s a heck of a long time. And for 70 years… what we now know, what we’ve discovered even since you and I talked last, because of the efforts of St. Louis residents who got FOIA materials, is that the federal government knew from the fifties and sixties forward that there was significant radioactive contamination, and they did nothing about it.

(01:20:54)
And they systematically misled and lied to the residents of the St. Louis and St. Charles regions and said, “No, actually, it’s okay. Play in the creek. It’s fine. There’s nothing we can do here.” So it’s just the same old story over and over. So I don’t want to hear about conversations. I want to hear about action. I want that school reopened. Now, tell me about the Weldon Spring site, which is another of these nuclear contamination sites. You have total ownership of that. When is it going to be fully remediated?

David Turk (01:21:23):

So the response that we’ve got for you lays out the history, and we’ve looked at…

Sen. Hawley (01:21:27):

I know the history. Tell me when it’s going to be remediated.

David Turk (01:21:29):

…back in our archives. Of course, the Department of Energy was created in the seventies, but we’ve got precursor agencies that were responsible during the time periods you’re talking about. And we lay that history out, just from our records. But I’m happy to go into any level of detail about what the government did or didn’t do 20, 30, 40, 50 years ago, at least based on our records along those lines. We’re very focused on the creek.

(01:21:53)
I’ve got a map here right in front of me looking at all the schools and others in the vicinity of the watershed in the creek area, and have asked our legacy management team, which is the responsible part…

Sen. Hawley (01:22:03):

When is the Weldon Spring site, which is squarely under your jurisdiction, going to be remediated?

David Turk (01:22:10):

So we’ll have to get back to you on that. I don’t have the exact date on that.

Sen. Hawley (01:22:13):

Did you not think I’d ask that today? I mean, I’ve written it to you. I’ve written to you about it multiple times.

David Turk (01:22:18):

We’re happy, Senator, whether in a hearing or frankly in [inaudible 01:22:22]

Sen. Hawley (01:22:22):

This is how this goes. You were before me in February and you said, “I’ll have a bunch of conversations. I’ll get back to you.” It’s September. Now you’re saying, “I’ll have a bunch of conversations. I’ll get back to you.” Are we going to be having this conversation again in six months or nine months?

David Turk (01:22:35):

So I can get you that information today. I just don’t have it right in front of me.

Sen. Hawley (01:22:38):

Good. Today would be good.

David Turk (01:22:40):

Happy to get it back to you today.

Sen. Hawley (01:22:41):

Good. I’ll hold you to that. And I’m glad we’re in an open forum here. So let’s get that done, and let’s get a date fixed for the remediation of Weldon Spring. And for those who are wondering, well, he was talking about an elementary school a second ago, now it’s this other site: that’s because there are multiple sites in the St. Louis and St. Charles regions, affecting thousands of people over 70 years, who have been exposed to this contamination and lied to about it.

(01:23:07)
I’m not happy about it. Last question for you. I recently submitted an amendment to the National Defense Authorization Act that would provide compensation to the victims of this nuclear contamination. I am delighted to say it passed the Senate. And as I look across the dais here, just about every person on that side of the dais voted for it. I thank each of you. Thank you for it. Senator Luján and I worked together on this.

(01:23:37)
President Biden has said that he thinks it is vitally important to get these folks compensated for what has happened to them. Do you agree with that? Do you support our legislation to compensate the victims in St. Louis and elsewhere of nuclear contamination, radioactive waste?

David Turk (01:23:54):

So we certainly support the intent behind the legislation. It’s the Department of Justice that’s the relevant agency here, so I can’t speak for them or for the administration as a whole.

Sen. Hawley (01:24:02):

You won’t agree with the president?

David Turk (01:24:04):

What’s that?

Sen. Hawley (01:24:05):

You won’t agree with the president?

David Turk (01:24:06):

I’ll always agree with the president.

Sen. Hawley (01:24:08):

Well, let me ask you again. Do you support the legislation…

Speaker 3 (01:24:12):

Senator, can we…

Sen. Hawley (01:24:13):

To compensate the victims of this nuclear contamination, radioactive waste?

David Turk (01:24:19):

Again, I’ll leave it to the President to speak about administration policy on this. I’ve not seen what he said.

Speaker 3 (01:24:24):

Senator, if you want to follow up with the second round, we’ll let everybody get their first round and we’ll come back, okay?

Sen. Hawley (01:24:31):

Thank you, Mr. Chairman.

Speaker 3 (01:24:31):

Thank you. Senator Cantwell.

Sen. Cantwell (01:24:33):

Thank you, Mr. Chairman. Thanks for holding this important and timely hearing. Over the recess, I held an AI forum in Seattle. Pacific Northwest National Laboratory showcased its rapid analytics for disaster response, a tool that is a detection system for all hazards, and importantly, it was used to assist in both Ukraine and in some of the Maui aftermath. Others, like the Allen Institute for AI, have demonstrated how they’re using satellite imagery to improve wildfire management, really important for us in the Pacific Northwest, and also using it to detect illegal fishing in our maritime sector, a very important issue to us in the Pacific Northwest, along with enforcement and surveys of our land for conservation purposes.

(01:25:23)
So we need to invest, I believe, in more innovation, and that’s why we obviously are supportive of what happened with CHIPS and Science, and now with AI. For our competitiveness, the United States cannot slow down on AI, as it relates to our standing internationally and for national security reasons. Our national labs have assisted us with supercomputers and reliable, robust data sets. US Department of Energy national labs are essential to our leadership in artificial intelligence. So I wanted to ask our panelists: you spoke about the need for US leadership on this issue, Deputy Secretary Turk.

(01:26:06)
And also, I believe, Mr. Stevens, you mentioned lab supercomputers are positioned to create the tools for risk assessment to evaluate AI systems. So how do we get both NIST and DOE working together on these tool assessments, in determining what our true risks are, so they’re identified? And what do we need to do to help build a workforce, particularly in skilling the workforce for AI? Dr. Stevens, either one of you can start. It doesn’t matter.

David Turk (01:26:49):

Go ahead, professor. You start and I’ll clean up.

Dr. Rick L. Stevens (01:26:51):

So we’re having good conversations with NIST about partnering on how to take the assets of DOE and connect them to the analytical and conceptual framework that NIST has been working on for AI risk management. So I think that is an ongoing conversation. They’re participating in working groups that we’ve established, consortia across the laboratories, that are working on how we will do risk assessment for large AI models. So I believe that part is already moving, and I feel quite positive about where that’s going. In terms of the workforce, I think young people are hungry to work on AI. You don’t have to encourage them.

(01:27:32)
All you have to do is say, here’s an opportunity and they’re there. Any course at any major university that’s on AI is going to be oversubscribed. So I think what we have to do is we have to provide enough resources that any student in the US who wants to make a meaningful contribution to AI and the national interest has an opportunity to be funded to go to school, to go to graduate school, to do internships, and to participate. And that’s going to require multiple agencies cooperating on that. DOE, of course, supports students and supports student internships, but in a very limited number.

(01:28:09)
NSF of course can do it in a much larger number, but other agencies as well. We need a coordinated national strategy to build an AI workforce, and we need some leadership to organize that.

Sen. Cantwell (01:28:21):

Mr. Turk?

David Turk (01:28:23):

Just two things to add. One, boy, what a gem we’ve got, when it comes to AI and everything else, in the Pacific Northwest National Lab, whether it’s AI on a drought study or vaccine development. There’s example after example coming out of that lab, working, of course, with Argonne and others of our national labs as well. I think the interagency partnership here is going to be absolutely key. Professor Stevens outlined what we’re doing with NIST, and we need to do even more with NIST on the risk framework along those lines. But it’s NOAA, it’s agency after agency, that we’ve got good partnerships with.

(01:28:57)
And I think because we have the exascale computing power, because we have the data, because we have these other facilities, which you, not only in your role on this committee but in your role as chair of the Commerce Committee as well, have been working for so many years to make sure we’ve got, we have capabilities that can help work with partners throughout the interagency. And we just need to leverage that. We need to take full advantage of that.

Sen. Cantwell (01:29:16):

And do you agree with Dr. Stevens about the workforce issue?

David Turk (01:29:20):

Completely agree. And it’s right for you to focus on this, and for Senator Hirono to be asking questions about this; we all need to focus on the workforce. And I know, I’ve talked to a number of folks: they want to work on AI, and they also want to work… The private sector’s great, and we need talent in the private sector, but they also want to work in the government and take on some of these public challenges as well. We just need to make it attractive to them in all sorts of ways so that we can compete.

Sen. Cantwell (01:29:45):

Thank you, Mr. Chairman.

Speaker 3 (01:29:47):

Thank you. Senator Hoeven.

Sen. Hoeven (01:29:53):

We get that. Thank you, Mr. Chair. Secretary Turk,

Sen. Hoeven (01:30:02):

You note that China is working very diligently to copy and take a lot of the capabilities and research being developed at our national laboratories. In your opinion, how much have they taken or copied? And are the national labs really able to protect themselves, not only in terms of the information they have, but also as they hire people? Don’t they have to be very careful whom they hire and how they hire, so that they know that information isn’t going from employees to China or other actors that have interests adverse to our country?

David Turk (01:30:44):

Well, thanks for the question, and the answer is we need to be very aware, and we need a layered strategy to deal with these security challenges. So we’ll even put in place specific prohibitions: if you’ve worked in a foreign talent program in China, for instance, and it’s not just China, then you can’t work in a Department of Energy lab. So we’ve got specific restrictions in place where we see particular risk.

(01:31:08)
Secondly, we’ve adopted, and are now annually updating, something called our Science and Technology Risk Matrix, which looks at particularly sensitive technologies. AI is one of the six particularly sensitive technologies for which we do extra screening to make sure we’re taking care of those sensitive technologies in particular.

(01:31:26)
And then third, we’ve also got counterintelligence experts in our field offices, covering all of our national laboratories, who are looking into any allegations and making sure that we’re running down all leads along those lines. But we want to attract top talent to our US national labs. We want that expertise coming, and we benefit from it, in the public and the private sector. Over 90% of AI PhDs who come and get their PhDs here stay more than five years, so we benefit from that. But we’ve got to have eyes wide open and strike a real balance here, so that we get it right and update it over time too.

Sen. Hoeven (01:32:06):

Well, you went right into my next question, which is: what about people who leave, who get recruited away? Because they’ve got all that incredible knowledge. What if they get recruited by either a rogue actor or a country like China, somebody who is trying to get the information that way, by just hiring them away from you?

David Turk (01:32:26):

Well, and it’s not just happening with Chinese nationals; it’s other countries’ nationals who are recruited elsewhere also. So we’ve got to have eyes wide open on the front end. If there’s a particular risk that an individual could take some of the experience they gained in a national lab back to China, or to Russia, or to other countries that pose challenges to us in the world, we’ve got to have restrictions and those kinds of screens in place, just as I mentioned along those lines. And then we’ve got to balance the benefits we get from all this world-class talent coming here against the risk that some folks will decide they want to go work elsewhere and take what they learned with them. So we’ve just got to be very vigilant and have a very layered approach, and we’ve empowered a group of experts across the labs and headquarters to make sure that we are continually improving not only our risk matrix, but how we do things more generally.

Sen. Hoeven (01:33:21):

Yeah, a real challenge, no question about it.

David Turk (01:33:23):

It’s a real challenge. There’s no doubt about it.

Sen. Hoeven (01:33:24):

You need the talent, but you have to screen it on the front end. You’ve got to be careful not to lose it on the back end. Incredibly difficult.

David Turk (01:33:31):

It’s incredibly difficult.

Sen. Hoeven (01:33:33):

In a similar way, but a little bit different, I want to ask both Ms. Puglisi and Dr. Stevens: what about people just flat-out copying? Okay, so you develop some great AI product, whatever. How about somebody just taking it and copying it? Look at what Iran’s doing with drones. They’re obviously, and other countries too, just copying our technologies. It may be inferior, but they are just copying it in many respects. How do you prevent rogue actors from doing that kind of thing? Or can you?

Ms. Anna B. Puglisi (01:34:10):

That’s correct, Senator. That’s a very difficult-

Sen. Hoeven (01:34:12):

China’s made a living off copying and stealing our stuff.

Ms. Anna B. Puglisi (01:34:15):

Right, and I’d like to actually have a couple of comments on your first question. I think it’s important.

Sen. Hoeven (01:34:24):

Well, the Chairman’s gone, so yeah, go crazy. Go wild.

Ms. Anna B. Puglisi (01:34:29):

They wrote the-

Sen. Hoeven (01:34:30):

Exactly. We do whatever we want now.

Speaker 4 (01:34:32):

I’m sitting in for the chairman.

Sen. Hoeven (01:34:33):

Oh, okay. I take it all back. Yeah. Fellow governor, we’ve got to mind with these-

Ms. Anna B. Puglisi (01:34:40):

Copying is very much a challenge, but it’s the technological know-how, I think. A lot of our existing mitigation strategy is focused on things, on something that is tangible. But it’s the technological know-how of how you actually use something: you can copy something, or I can translate something, and still not understand what it means. And that’s why that talent piece is so important. And I would venture to say that our system really isn’t set up for this particular challenge that we have today. We’re set up to fight the Soviets. We look for intelligence officers, we look for a direct military end use, and we have very narrow laws around economic espionage, which we could discuss at length. But what’s being targeted are things that are earlier and earlier in the development cycle, beyond most of our mitigation strategy. And that is going to be an ongoing challenge. And we need to think about how we find ways to still enhance and keep investing in that early-development-cycle work, which is such an essential part of the [inaudible 01:35:54] labs, while at the same time finding ways to protect it. And that gets at the workforce, it gets at technological know-how, it gets at how we then find new ways to face this challenge.

Sen. Hoeven (01:36:12):

I can tell you’ve been thinking about it. It’s good that you’re very thoughtful about that and I appreciate that.

Ms. Anna B. Puglisi (01:36:17):

Thank you.

Sen. Hoeven (01:36:18):

I can tell it’s something you’re working on and that’s good. Thank you very much.

Senator Joe Manchin (01:36:21):

Thank you, Senator. We have Senator Kelly.

Sen. Kelly (01:36:24):

Thank you, Mr. Chairman. Dr. Stevens, we’ve been going through a sort of unprecedented period of drought in the West, the worst drought, as far as we can tell, in 1,200 years. It’s been going on for 20 years. This summer was rather hot. It’s always hot in the desert, but we had an unprecedented number of days in Phoenix over 110 degrees. And we’ve had wildfires, unprecedented wildfires, not only in Arizona but in other parts of the country and in Canada. These fires have had significant impacts, impacts on communities, and it’s been clear to me that we have to leverage every tool at our disposal to mitigate these disasters, but also look for opportunities here to promote forest restoration, which has an impact on these fires.

(01:37:24)
We have a big ponderosa pine forest in Arizona, the biggest in the world in fact, and we are looking to get this forest restored. So obviously the potential of artificial intelligence here can’t be overstated, with its ability to analyze large data sets rather quickly and accurately and to predict things. Cal Fire is conducting a pilot program using AI to help with the early detection of wildfires. So my question to you is twofold. Can you first elaborate on the current initiatives and advancements in using AI for wildland firefighting, if there are any going on? And then secondly, a little bit about, looking ahead, how your laboratory envisions collaborating with other government agencies and the private sector.

Dr. Rick L. Stevens (01:38:23):

Sure. Thank you for that question. So we’re quite familiar with the Cal Fire effort and with our colleagues in San Diego who’ve been very involved in building some of the technology for that. It’s a really challenging problem. As you know, fires often start with smoke, and AI systems trying to detect early signs of fire from cameras on mountaintops and other vantage points often get confused by fog, or by tractors stirring up dust, or something like that. So there’s a need to really improve the AI algorithms that are interpreting images, maybe to upgrade the technology so we get both infrared imaging as well as visible-light imaging, and to realize it’s going to take some time to fully deploy AI and re-engineer how the processes in Cal Fire, and whole teams, will use that AI to be more efficient.

(01:39:19)
Ultimately, AI can put more computer-based eyeballs on the territory than humans could ever manage watching the monitors, and so I think the long-term impact of AI on firefighting and disaster management in general is going to be huge. AI can also synthesize and fuse information from remote sensing, from satellites, from the ground, from reports from people texting or tweeting, from cameras, and from the workers, the firefighters on the ground, into a common database that tells us exactly what’s going on. And I think that’s going to be critically important as we go forward to scale up firefighting efforts. The national labs have been involved in trying to model and simulate fire in the West in particular. Los Alamos has had a very large program for many years trying to build simulations that would predict the likelihood of fires and help us understand the amount of flammable material accumulating through the forest and so forth.

(01:40:20)
I think all the national laboratories are interested in helping with disaster management. And like the earlier comments about the work at PNNL, the work at Argonne, the work at Los Alamos and Livermore, everybody is really interested in this problem. I think what we need to work out is how we partner between the federal government and the state and local entities, which often have the responsibility for this, in a structure that really advances AI but also takes a practical look at what works. We have to try lots of things, and not everything is going to work, and then adjust our strategy to focus on what works.

Sen. Kelly (01:40:56):

Do you know the specifics of the AI algorithm and are they trying to incorporate lightning detection into it? Because obviously a lot of forest fires start with lightning and we know where lightning occurs.

Dr. Rick L. Stevens (01:41:10):

That’s right. So we can detect lightning through the EM spectrum, through electromagnetic sensing, and overlay that on the geographical maps, and then overlay that with imagery. So I’m quite familiar with how that’s being done, but I don’t think it’s fully integrated yet. I think we could actually do a lot better than we’re currently doing.

Sen. Kelly (01:41:29):

Yeah, because then you could just narrow the field-

Dr. Rick L. Stevens (01:41:31):

Absolutely. If there’s a history of lightning there and you’re seeing smoke, yep, exactly. Precisely.
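
For illustration, the fusion Dr. Stevens and Senator Kelly are describing, boosting the priority of a camera smoke detection when recent lightning struck nearby, might be sketched like this. The class names, thresholds, and coordinates are hypothetical, not any deployed system’s logic.

```python
import math
from dataclasses import dataclass

@dataclass
class SmokeDetection:
    lat: float
    lon: float
    confidence: float  # 0..1 from the vision model

@dataclass
class LightningStrike:
    lat: float
    lon: float
    hours_ago: float

def km_between(lat1, lon1, lat2, lon2):
    # Haversine distance in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def priority(det: SmokeDetection, strikes: list) -> float:
    # Raise alert priority if lightning struck within 10 km in the last 48 h.
    nearby = [s for s in strikes
              if s.hours_ago <= 48 and km_between(det.lat, det.lon, s.lat, s.lon) <= 10]
    boost = 1.0 + 0.5 * min(len(nearby), 4)  # cap the boost
    return det.confidence * boost

strikes = [LightningStrike(34.05, -111.00, hours_ago=6.0)]
det = SmokeDetection(34.10, -111.02, confidence=0.4)
print(f"alert priority: {priority(det, strikes):.2f}")  # boosted above the raw 0.4
```

A borderline smoke detection that would otherwise be dismissed as fog or dust gets escalated when the lightning history says a fire is plausible there.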

Sen. Kelly (01:41:36):

Okay. All right. Well, thank you.

Senator Joe Manchin (01:41:39):

Senator, you want to defer to Senator King? Senator King. No, Senator Hickenlooper. You all fight it out.

Sen. Hickenlooper (01:41:47):

Thank you, Mr. Chair. Thank you all for spending time here today. I think it’s a fascinating discussion. I want to start with Mr. Wheeler. We have a company in Colorado called Guild that does training for large companies, say Walmart or Chipotle, whose employees want to take skills-based classes at colleges or universities, wherever. And Guild does polling of entry-level engineers and coders across professions. And one of their most recent polls saw a dramatic, near-universal increase in the level of concern about AI expressed by these beginning-level engineers, across every industry, and every industry of course needs technology. And I think the intense computing resources that are needed to train and run AI models at scale raise a lot of questions you’re already addressing. NREL out in Colorado operates a high-performance computing data center that is designed to be the world’s most energy efficient. I think when we look at some of these things, what other types of efficiency can we look at, Mr. Wheeler, recognizing that there is a level of anxiety that has come up in terms of the workforce?

Mr. Andrew Wheeler (01:43:24):

Yeah, thanks for the question, Senator Hickenlooper, and I’m very familiar with the facility and some of the machines there at NREL; Colorado is my home state. But I think there are multiple ways to look at this. As you say, with all the excitement around AI, there are a lot of people wanting to get into it, maybe as they’re transitioning careers, but understandably concerned about some of the risk around it. Something we say internally a lot is: look, AI is not going to replace that scientist or that engineer, editor, teacher, you name it, the list goes on. But those same professionals who harness AI and use AI will likely replace those who don’t. So from a workforce and transition standpoint, we’re seeing many people wanting to get into this as a career; they recognize the opportunity.

(01:44:27)
And the great thing about Guild is that maybe they’re not taking people who historically came up from a STEM background or education, but guess what: with all of the tools, everything that’s being developed, suddenly you don’t necessarily have to be the deep mathematics expert. You can be very proficient with the tool sets that are out there if you’ve got that willingness to learn. Now, that being said, what are the guardrails that are in place, and how do we think about some of the risks associated with the technology? You asked about what other things can be done. I think NREL is a great example of showcasing a lot of what can be done, whether it’s the energy recapture of those systems or, as others on the panel have stated, just providing access to that type of compute infrastructure, which is a big part of lowering the barrier, easing some of the anxiety, and being part of the solution. Because if you’ve got a passion for this as a field of study, for how to mitigate some of the risk around it, well, having access enables you to develop and propose those types of solutions.

Sen. Hickenlooper (01:45:53):

Right. Thank you for that. Ms. Puglisi, a lot of the innovations fueled by AI come from fundamental and applied research that’s adopted by small businesses. In your testimony, you highlight how investing in research supports our country’s technological leadership. How will the historic investments in translational research from the CHIPS and Science Act grow our technical workforce within our innovation economy?

Ms. Anna B. Puglisi (01:46:32):

Yes, I think those investments are essential, because I think what gets lost a lot of times are those transition points. A lot of funding focuses on basic research or focuses on applied research. And one of the advantages of the focus in the CHIPS and Science Act is that it looks at those transition points: how do we move technology so that it best serves the public and best serves society? Having those kinds of skills, which are very different from the skills for actually doing the research, and having that support across what we call the Valley of Death and other kinds of challenges, are really, I think, important to keep that spark going.

Sen. Hickenlooper (01:47:23):

Great. Thank you on that. I appreciate it. I’ll yield. And Senator King, I’ll wander over and make sure that that seat is covered if you take a couple extra minutes.

Sen. King (01:47:35):

Thank you. Appreciate it. Thank you, Mr. Chairman. I wanted to put a fine point on this. A few minutes ago I put into ChatGPT the following question. Write a poem about Joe Manchin and John Barrasso. In less than three seconds, this is what it wrote.

Senator Joe Manchin (01:47:49):

Oh my God.

Sen. King (01:47:50):

In Senate chambers where voices resound, Joe Manchin and John Barrasso are found. Two different paths they often pursue, yet united by a role they must construe. Manchin, a Democrat from West Virginia’s hills, his principles shaped by his home’s coal-filled mills. He seeks compromise of middle ground’s grace. In a divided Senate, he finds his place. Barrasso, a Republican from Wyoming’s plains, with conservative values that deeply ingrains. His vision for policy distinct and clear. In the halls of Congress, he perseveres. Though their ideologies often diverge, in the Senate, they engage, discuss, and urge. For the common good, they both aspire to serve their constituents, their hearts on fire. Two senators distinct in their view, but bound by a duty to represent you. In democracy’s dance, they play their part, Joe Manchin and John Barrasso, with determined heart. Isn’t that-

Senator Joe Manchin (01:48:51):

Let’s make sure we don’t eliminate that part of the AI.

Sen. King (01:48:56):

But think of that. In less than two seconds, the data that was searched to put that material together, to make it rhyme and have it be so representative of our two senators. I just think we ought to realize the unbelievable power of this. That’s a homely example, but I think we need to understand the radical nature of this technology. Mr. Turk, a homely question. One of the problems in the energy transition, which as you know is one of the major issues of our time, is the ISOs’ timeliness in processing applications for connection to the grid. There’s a huge backlog in virtually all the ISOs in the country. Can AI contribute? Because it seems to me that the decision about interconnection, which involves capacity, reliability, and safety, is an engineering question that AI should be helpful with.

David Turk (01:50:02):

Well, I think the short answer is yes, and I’ve been speaking to the heads of ISOs and really trying to make sure we’re doing everything we can on the interconnection queue, which is a big deal right now. If you can’t get things connected to the grid, then we’re not going to achieve our goals. We’re not going to get all the benefits from these technologies.

Sen. King (01:50:22):

And right now, the queue is one of the major bottlenecks to this transition.

David Turk (01:50:26):

The queue, and the queues with different ISOs, is a major bottleneck. That’s exactly right. And there are a number of efforts being undertaken right now. FERC has put out rules to try to make sure that it’s not just whoever is first in line who gets consideration, it’s whoever is first ready, so that we prioritize the applications of those who are most ready and most impactful along those lines.
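
For illustration, the shift from first-come to first-ready that Deputy Secretary Turk describes amounts to changing the queue’s sort key. A minimal sketch, with hypothetical readiness criteria rather than FERC’s actual rules:

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    position: int            # order of arrival in the queue
    site_control: bool       # land rights secured
    deposit_paid: bool       # commercial-readiness deposit on file
    studies_complete: int    # required studies finished (0-3)

    def readiness(self) -> int:
        return int(self.site_control) + int(self.deposit_paid) + self.studies_complete

queue = [
    Request("Solar A", 1, site_control=False, deposit_paid=False, studies_complete=0),
    Request("Wind B", 2, site_control=True, deposit_paid=True, studies_complete=2),
    Request("Storage C", 3, site_control=True, deposit_paid=True, studies_complete=3),
]

# First-ready, first-served: the most-ready projects are studied first;
# arrival order only breaks ties (contrast with sorting on position alone).
for r in sorted(queue, key=lambda r: (-r.readiness(), r.position)):
    print(r.name, "readiness:", r.readiness())
```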

Sen. King (01:50:48):

I hope you’ll take steps to use AI to radically shorten this process. I think that would be a major contribution.

David Turk (01:50:54):

I think it’d be great. And I want to highlight, there’s an effort, the acronym is I2X, that our energy efficiency and renewable energy colleagues are working on, that’s using technology, among other things, to try to bring all the ISOs together around software fixes, AI fixes. Happy to get you more information on that, but it’s a very exciting effort.

Sen. King (01:51:11):

I appreciate that. The word watermarking was used earlier. I don’t want the government deciding what’s true and not true. That’s just not the direction we want to go, and it’s not consistent with our principles and values. On the other hand, it seems to me people who use information on the internet or otherwise have a right to know its source. Mr. Stevens, you mentioned watermarking. What we’re really talking about, for me, is labeling: “This film or this article was produced with AI.” That would be important information for people to have in assessing the validity of what they’re seeing. How close are we to having that technology?

Dr. Rick L. Stevens (01:51:51):

We know how to do it. It’s a question of getting agreement that AI companies would use some kind of common approach and not some proprietary approach, and then also how we enforce or require it.

Sen. King (01:52:07):

I was going to say, could the Congress require the platforms, if they’re going to post AI material, it’s got to be labeled?

Dr. Rick L. Stevens (01:52:14):

That’s the current approach. I think it’s flawed, in the sense that there will ultimately be many hundreds or thousands of generators of AI, some of which will be the big companies like Google and OpenAI and so forth. But there’ll be many, many open models produced outside the United States, produced elsewhere, that of course wouldn’t be bound by US regulation. And so I think what we’re ultimately going to end up having to do is validate real sources. We can have a law that says, “Watermark AI-generated content,” but a rogue player outside the US, say operating in Russia or China somewhere, wouldn’t be bound by that and could produce a ton of material that wouldn’t actually carry those watermarks, and so it could perhaps pass a test.

(01:53:02)
So I think we’re going to have to be more nuanced, more strategic, in this, in that we’re going to have to authenticate real content down to the source. Whether it’s true or not is a separate issue, but if it’s produced by real humans in a real meeting, that stream would get tagged so you would know it’s real, versus something synthetic.
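
For illustration, authenticating real content down to the source, as Dr. Stevens describes, is essentially digital signing at origin. A minimal sketch using Ed25519 signatures follows; this is the gist of provenance schemes like C2PA, greatly simplified, with key distribution assumed solved and all content invented for the demo.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a keypair once and signs content at origin.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

content = b"Official advisory: assistance centers open 8am-6pm."
signature = publisher_key.sign(content)

# Consumer side: verification succeeds for authentic content...
public_key.verify(signature, content)
print("verified: authentic")

# ...and fails if the content was altered or synthesized by someone else.
try:
    public_key.verify(signature, content + b" (tampered)")
except InvalidSignature:
    print("rejected: not from the claimed source")
```

As Dr. Stevens notes, this tags the stream as coming from a real, identifiable source; whether the content is true is a separate question.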

Sen. King (01:53:21):

I’m out of time and I’m due over to preside, but I would really appreciate it if all of you would give some real thought to this because this is a current issue for us and we’ve got a major election coming up in a little over a year. Disinformation via AI could play a pivotal role. We need your best thinking now. So to the extent you can get back to this Committee on these subjects, it would be very, very helpful to us. Thank you, Mr. Chairman.

Dr. Rick L. Stevens (01:53:51):

Happy to do that.

Senator Joe Manchin (01:53:52):

Well, let me tell you, I don’t need to tell you how informative and how interesting this has been, what we’ve received. I think everyone has told you what we’re concerned about. My good friend here found something very complimentary, and I appreciate that very much, but he could probably have found something very concerning and harmful very quickly also. I think the first line of defense that I’m looking at, that I’m concerned about, is how we protect people’s lives from being altered. And that’s basically their compensation, whether it be at their workplace or, if they’re retired, retirement checks, Social Security, Medicare. How well are we hardened there? How are we preventing AI from figuring out a way to come through another door, a back door, a side door, anything different that could put them at risk, changing and altering their lives? Because that’s when it’s going to be very difficult for us to put the genie back in the bottle. And that’s what I’m concerned about.

(01:54:51)
Getting into investment portfolios: they’re trying this all the time, but it’s very difficult, and it’s bad enough when someone gets their credit card hacked and stolen, what they have to go through to get that corrected. Can you only imagine what this could do? So this is what we’re asking of all of you, with the knowledge and expertise that you have, given the challenges that we’re going to face. I know we think about defense. We’ve been talking in Armed Services about offense versus defense. We’re already using AI in defensive procedures now, but offensively, we still want that human element involved to make a decision: do we launch a strike or not? That’s going to be very, very consequential, with unbelievable, far-reaching results. So I think we’re in uncharted waters to a certain extent, but those of you ahead of the curve right now can help keep us from falling into the deep end, where we can’t be saved.

(01:55:44)
So if we learned anything from the internet, we learned that for all the good it did, there are a lot of people out there waiting to use it for nefarious purposes, and they do so every day. So with that, let me just say that I appreciate you very much. I think you all have done a wonderful job presenting this. You can see the interest that we have and the concerns that we have, but also the support that you have from all of us. Whatever dollars we invest, and we have to invest an awful lot, we’re willing to do that; we just don’t want to reinvent the wheel. We want to basically balance it out and have it run a little smoother, so we’re all here to help. But again, thank you so much, and members will have until the close of business tomorrow to submit additional questions for the record. Thank you, and the meeting is adjourned.
