Hearing on Artificial Intelligence

House Homeland Security Committee hearing on artificial intelligence. Read the transcript here.


Chairman Ogles (02:46):

The Committee on Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection and Subcommittee on Oversight, Investigations, and Accountability will come to order. Without objection, the chair may declare the committee in recess at any point. The purpose of this hearing is to examine how rapid advances in artificial intelligence, quantum computing, and cloud technologies are reshaping the cybersecurity landscape in ways that affect both U.S. defensive capabilities and the operational reach of our adversaries. The hearing will also assess how the adoption and governance of AI, cloud infrastructure, and post-quantum security measures are strengthening, or in some cases exposing, U.S. critical infrastructure, federal systems, and sensitive data, and what steps government and industry must take to stay ahead of the rapidly evolving threats. I now recognize myself for an opening statement.

(03:43)
Good morning and thank you all for being here. I want to begin by thanking Chairman Brecheen and members of the Subcommittee on Oversight, Investigations, and Accountability for partnering with my subcommittee to hold this hearing. The issues before us today affect national security, economic competitiveness and public trust, and they deserve attention that reflects their scale and importance. We are meeting at a time when the technologies shaping our digital environment are also shaping the security and strength of the United States. Artificial intelligence, cloud computing and quantum technologies are now woven into how federal, state and local governments operate, how intelligence is collected and analyzed, how critical infrastructure functions and how American companies compete in a global economy.

(04:31)
These technologies offer extraordinary promise, but they also introduce risks that are advancing faster than many of the frameworks and systems designed to manage them. Artificial intelligence is changing the pace and character of cyber activity. It allows information to be processed at speeds far beyond human capacity and perhaps in some ways even comprehension. It enables automation across complex networks and supports decision making at scale. These capabilities can strengthen cyber defense and improve resilience. However, they can also be exploited to accelerate malicious activities, expand the reach of cyber operations and make hostile actions more difficult to detect, attribute and disrupt.

(05:17)
Cloud computing has amplified both opportunity and risk. Cloud platforms have enabled modernization across government and industry, supporting flexibility, scalability and innovation. Yet, they also consolidate vast amounts of data, access and computing power into shared environments, raising the stakes of security, configuration and oversight decisions. Quantum technologies present a longer term challenge with significant implications. Much of our digital security relies on encryption to protect sensitive communications, verify identities and secure critical systems. Advances in quantum computing raise serious questions about whether today's encryption methods will remain effective in the future.

(06:03)
Our adversaries understand this risk and are already planning, including by collecting encrypted data now with the expectation that it may be accessed later. The threat environment surrounding these developments is intensifying. The People's Republic of China, PRC, and the Russian Federation, the RF, are investing heavily in advanced computing, automation and data exploitation as tools of national power. They view artificial intelligence, cloud infrastructure and emerging technologies as means to gain strategic advantage, conduct sustained cyber and intelligence operations, and operate below the threshold of an open or kinetic conflict. China in particular has pursued a model that tightly integrates government, military, academia and the private sector.

(06:54)
This approach allows innovations developed for commercial purposes to be adapted quickly for state use. In cyberspace, it supports operations built for scale and persistence, including the use of automated tools to scan networks, identify vulnerabilities, manage stolen credentials, and analyze large volumes of data across many targets simultaneously. At the same time, these technologies provide the United States with powerful tools to strengthen security and resilience. Artificial intelligence can improve threat detection and response. Cloud computing can enhance reliability and operational flexibility. Advances in quantum research may ultimately yield new security capabilities, but there is also a downside. The challenge lies in ensuring these benefits are realized without introducing vulnerabilities that adversaries can exploit.

(07:44)
The Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency, or CISA, play an essential role in this effort. Their work on cloud security practices, artificial intelligence risk management and preparation for future changes in encryption helps shape how federal agencies and critical infrastructure operators address emerging threats. Congress also has an important responsibility. Oversight helps ensure that security keeps pace with adoption, that roles and responsibilities are clearly defined and that risks are addressed early rather than after they've created serious harm.

(08:23)
This is not about slowing innovation, it's about making sure innovation strengthens the nation rather than exposing it. The decisions being made now about how artificial intelligence, cloud computing and quantum technologies are secured will shape the country's security and prosperity for years to come, and I would argue also our role as the, quite frankly, sole superpower. I appreciate our witnesses for being here. I look forward to their testimony and the discussion ahead. I now recognize the ranking member for the Subcommittee on Oversight, Investigations, and Accountability, the gentleman from Michigan, Mr. Thanedar, for his opening statement.

Mr. Thanedar (09:04):

Thank you, Chairman Ogles. I appreciate this hearing, and good morning to all of our witnesses. I look forward to hearing your thoughts. For decades, hostile nations have conducted increasingly sophisticated cyber attacks against the United States. These attacks have been used to spy, steal intellectual property, cripple critical infrastructure and demand ransom payments. China, Russia, Iran, and North Korea are aggressively using advanced cyber capabilities to threaten our national security and economic prosperity. China is both the most active and persistent cyber threat and is also the only country with both the desire and the ability to reshape the world order, which is why it is extremely shocking that President Trump recently agreed to allow Nvidia to sell advanced artificial intelligence chips to China. Really shocking.

(10:38)
And let's look at some background on why the president made this decision. The president was quick to sell out America's security after Nvidia's CEO attended a $1 million per plate dinner at Mar-a-Lago, and donated to Trump's White House ballroom. So much, so much for America first. Trump's own Department of Justice has warned that China is seeking to become the AI leader by 2030 and plans to use AI chips to modernize its military, design and test weapons of mass destruction and deploy advanced surveillance tools. We should be disrupting and dismantling threat actors whose actions threaten our national interest, not enabling them. The rapid development of emerging technologies, including advanced AI and quantum computing, enables and enhances security risks.

(12:03)
These advanced technologies not only accelerate the cyber capabilities of countries such as China, but they also make attacks easier for countries that are not well resourced and enable a growing threat from organized criminal groups. Over the past year, cyber attacks have become faster, more widespread and harder to detect. As AI-assisted cyber attacks hit harder and faster, it is critical that Congress extends CISA 2015, the Cybersecurity Information Sharing Act of 2015. CISA 2015 provides privacy and liability protection to companies to encourage them to share data about cyber vulnerabilities and threats. These protections are necessary to fully understand the risk and facilitate collaboration between the federal government and the private sector.

(13:12)
Unfortunately, CISA 2015 expires next month. A 10-year extension is the best reauthorization strategy that will also provide the private sector with assurances while eliminating the risk of this authority lapsing. I look forward to hearing from our witnesses how else we can best defend against cyber attacks that are leveraging powerful emerging technologies. Thank you and I yield back, Mr. Chair.

Chairman Ogles (13:48):

Thank you Ranking Member Thanedar, and I look forward to following up on your insightful comments. I now recognize the chairman for the Subcommittee on Oversight, Investigations, and Accountability, the gentleman from Oklahoma, Mr. Brecheen, for his opening statement.

Mr. Brecheen (14:02):

Thank you, Chairman Ogles. Good morning. Thank you to our witnesses. This is a very complex subject. Many of us, in all vulnerability, feel really unqualified to be in these discussions, and I'm grateful we're going to have some expertise to dive into the massive number of vulnerabilities that AI is presenting on our cyber front. As chair of the Subcommittee on Oversight, Investigations, and Accountability, I'm looking forward to partnering with the Subcommittee on Cybersecurity and Infrastructure Protection to focus on this topic and explore ways that Congress can assist the Department of Homeland Security in countering this new threat. This integration of AI into cyber attacks should concern every American.

(14:40)
The recent cyber attack leveraging Anthropic's AI infrastructure showed that complex attack campaigns can now be conducted with little to no human interaction, at speeds faster than any human could replicate. We've all seen how AI can easily streamline tasks that would otherwise be very labor intensive, both in business and everyday life. Now that an attack like this has successfully taken place, we can expect to see more events like this in the future. The proof of concept is there, and even if U.S.-based AI companies can put safeguards against using their models for such attacks, these actors will find other ways to access this technology. China is our most significant cyber threat actor, and it continues to search for tactics to infiltrate critical U.S. systems and prioritize the development of advanced computing, technology and AI that supports its economic and strategic goals. Cyber espionage has been a key part of China's ongoing campaign of stealing intellectual property. This is decades old. They now have new tools. And this will fuel rapid technological advancement at the expense of American innovators. As this committee has highlighted over the years, cyber actors linked to China pose a threat on an unprecedented scale, targeting U.S. companies, critical infrastructure and the federal government. As technologies like AI continue to advance at such speeds, we have to be vigilant and strategic in protecting intellectual property and our national security. From an oversight perspective, we need to make sure that federal civilian agencies are taking the proactive steps needed to protect their networks against intrusion.

(16:07)
Technology doesn't advance on the government's timeline, and we can't afford to have cybersecurity practices moving at such speeds absent government interdiction. That path leaves us reacting to security failures instead of proactively confronting today's threats. This is an area where the federal government can partner with and learn from the private sector to implement best practices and incorporate needed technology. The federal government needs to be better at sharing information on cyber threats between federal agencies and with private stakeholders in a timelier manner. I hope to learn in today's hearing how Congress can empower the Department of Homeland Security and its sub-agencies to counter this threat and ensure the safety and integrity of U.S.-based infrastructure.

(16:48)
And I want to thank again our panel of witnesses for joining us today to discuss this cyber attack and its implications. Congress and the American people need to consider how we can work with you all and your expertise to safeguard our critical infrastructure. And with that, I want to yield back to Chairman Ogles.

Chairman Ogles (17:07):

Thank you, Chairman Brecheen, and just echo your sentiments. Other members of the committee, you are reminded that you can submit for the record an opening statement. I'm pleased to have a distinguished panel of witnesses before us today on this critical topic. Pursuant to committee rule 8C, I ask that our witnesses please rise and raise their right hands. Do you solemnly swear that the testimony you will give before the Committee on Homeland Security of the United States House of Representatives will be the truth, the whole truth, and nothing but the truth, so help you God? Let the record reflect that the witnesses have answered in the affirmative. Thank you and please be seated. I would like to now formally introduce our witnesses.

(17:55)
Dr. Logan Graham serves as the department head of the Frontier Red Team at Anthropic where he leads efforts to evaluate the behavior and potential misuse of advanced AI systems as model capabilities continue to scale. His work focuses on identifying national security risk posed by frontier AI, including its potential use in cyber espionage and offensive cyber operations, as well as developing safeguards to detect and disrupt malicious activity. Prior to joining Anthropic, Dr. Graham held roles at Google, X and Babylon Health. He also previously served as special advisor to the Prime Minister of the United Kingdom, contributing to national science and technology policy and the development of the UK's AI strategy. Dr. Graham earned his undergraduate degree in economics from the University of British Columbia and completed his PhD in engineering science at the University of Oxford where he was a Rhodes scholar. Thank you, sir.

(18:55)
Mr. Royal Hansen is vice president for privacy, safety, security engineering at Google where he leads the engineering team responsible for securing Google's global technical infrastructure and protecting billions of users worldwide. Prior to joining Google, Mr. Hansen held senior security leadership roles in the financial services sector including at American Express, Goldman Sachs, Morgan Stanley and Fidelity Investments. Mr. Hansen holds a bachelor of arts in computer science from Yale University. Thank you, sir.

(19:27)
Mr. Eddy Zervigon is the chief executive officer of Quantum Xchange. Under his leadership, Quantum Xchange works with government and private sector partners to prepare critical systems for emerging cyber and quantum enabled threats. Mr. Zervigon brings extensive experience in corporate leadership, operations and restructuring, including prior service as a managing director in the principal investments group at Morgan Stanley where he oversaw technology and infrastructure investments across the United States and Latin America. He holds a bachelor's degree in accounting and a master's degree in taxation from Florida International University and a master of business administration from Dartmouth. Thank you, sir.

(20:11)
Mr. Michael Coates is the founding partner of Seven Hill Ventures, an early stage venture firm focused exclusively on cybersecurity investments addressing enterprise, operational and national security challenges. He brings more than two decades of experience securing large scale digital platforms and advising organizations on cyber risk. Mr. Coates previously served as the chief information security officer at Twitter and also led security efforts at Mozilla. Mr. Coates holds a bachelor of science in computer science from the University of Illinois Urbana-Champaign and a master of science in computer information and network security from DePaul University. Thank you, sir.

(20:53)
I thank each of our distinguished witnesses for being here today. This is a topic that a year and a half ago was somewhat of a niche for laypersons, but you experts clearly recognized that this was going to be, quite frankly, the next arms race and threat battlefield as we go forward. And so what you're doing today here before Congress means more than I think we can possibly comprehend as we begin this discussion and quite frankly dive into the emergence of this technology. And with that, I now recognize Dr. Graham for five minutes to summarize his opening statement.

Dr. Logan Graham (21:36):

Chair Ogles and Brecheen, Ranking Member Thanedar, members of the committee, thank you for the opportunity to testify today. Anthropic is a leading frontier AI model developer working to build reliable, interpretable and steerable artificial intelligence. Our flagship AI assistant, Claude, serves millions of Americans and trusted partners worldwide from Fortune 500 companies and U.S. government agencies to small businesses and cutting edge startups and consumers, enhancing productivity on tasks, including software engineering, data analysis and scientific research. At Anthropic, I lead the Frontier Red Team. Our job is to build an early warning system for advanced risks from AI so that we can mitigate them and to help the world prepare as far in advance as possible.

(22:24)
Transparency is a fundamental value for Anthropic, and we believe it should be an industry standard. That is why we published a report about how in mid-September 2025, Anthropic detected suspicious activity that our investigation determined to be a largely autonomous, sophisticated cyber espionage campaign conducted by a group sponsored by the Chinese Communist Party. To be clear, Claude's code was not compromised, nor were Anthropic's labs infiltrated. Instead, this group maliciously misused Claude to automate large portions of cyber attacks against their targets. We estimate their use of the model allowed them to automate approximately 80 to 90% of the work that previously required humans to do. This is a significant increase in the speed and scale of operations compared to traditional methods.

(23:13)
Further, this group invested significant resources and used their sophisticated network infrastructure in order to circumvent our safeguards and detection mechanisms prior to being detected. They then deceived the model into believing the tasks were ethical cybersecurity tasks. The campaign consisted of a few distinct phases. First, a human operator provided targets to Claude directing it to conduct autonomous reconnaissance against them in parallel. Second, acting on the human operator's direction, Claude leveraged third party software tools to search for vulnerabilities in these systems. The third and final step was to task Claude to exploit these vulnerabilities and extract sensitive information from the targets, which was only successful in a handful of cases.

(23:57)
We detected this campaign within two weeks of the attackers' first confirmed offensive activity, triggering a swift response, including account bans, strengthening our safeguards, entity notifications, authority coordination, and indicator sharing with partners. We have reached an inflection point in cybersecurity. It is now clear that sophisticated actors will attempt to use AI models to enable cyber attacks at unprecedented scale. This threat is not unique to Claude and affects all AI models. That is why we've been open and transparent about this incident, and it is one of the reasons why I'm grateful that you are holding this hearing today. Industry and government must collaborate to prevent this misuse and enable cyber defenders to prepare.

(24:43)
To address these risks, there are at least three things that should be done immediately. First, there needs to be rapid testing of models for national security capabilities. Government-led evaluations like those conducted by NIST's Center for AI Standards and Innovation give us visibility into model capabilities and security. Codifying and expanding this process is critical. Second, there must be robust threat intelligence sharing. Frontier AI labs and the U.S. government need stronger channels to share indicators of misuse, as exist in critical infrastructure sectors. Third and finally, industry should invest in empowering our cyber defenders. We must make models useful for defenders and get them into their hands. Anthropic is improving its models for cyber defenders and building tools, for example, that can patch vulnerabilities.

(25:33)
We cannot lose sight of the strategic picture. The United States and its allies must maintain leadership in AI. The Trump administration has taken important steps to advance U.S. AI leadership, including accelerating the build out of AI infrastructure, promoting federal adoption and strengthening security testing and coordination. We strongly support these efforts. Equally critical is maintaining the United States' advantage in computing power, the single most important input into developing powerful AI models. The United States currently has a significant edge over the CCP in access to advanced chips, but if advanced compute flows to the CCP, its national champions could train models that exceed U.S. frontier cyber capabilities.

(26:21)
Attacks from these models will be much more difficult to detect and deter. We are in a race against threat actors who will stop at nothing to misuse AI for cyber attacks. Our response must be urgent, coordinated and focused on securing systems faster than they can be attacked. Thank you again for the opportunity to testify and I look forward to your questions.

Chairman Ogles (26:46):

Thank you, Dr. Graham, and I recognize Mr. Hansen for five minutes to summarize his opening statement.

Royal Hansen (26:53):

Chairmen Garbarino, Ogles, and Brecheen, Ranking Members Thompson, Swalwell, and Thanedar, and members of the committee and subcommittees, thank you for the opportunity to speak with you today. My name is Royal Hansen and I serve as the vice president of privacy, safety, security engineering at Google. And as discussed, we build the foundational technology that keeps billions of people safe online. As this committee knows, we stand at a critical technological inflection point. Rapid advances in AI are unlocking new possibilities for the way we work and accelerating innovation in science, technology and beyond. Some of these same AI capabilities, however, can also be deployed by attackers, leading to understandable anxieties about the potential for AI to be misused for malicious purposes.

(27:40)
Until recently, our analysis showed that government-backed threat actors were using generative AI primarily for common tasks like troubleshooting, research and content generation. Over the past year, Google's threat intelligence team has identified an important shift with adversaries not only leveraging AI for productivity gains, but deploying novel AI-enabled malware in active operations. We have identified malware families that use LLMs to generate malicious scripts, obfuscate their own code to evade detection, and use AI models to create malicious functions on demand rather than hard coding them into the malware. This marks a new operational phase of AI abuse involving tools that dynamically alter behavior mid-execution. While still nascent, this development represents a significant step toward more autonomous and adaptive malware.

(28:35)
We believe not only that these highly sophisticated threats can be countered, but that AI can supercharge our cyber defenses and enhance our collective security. LLMs can unlock new and promising opportunities from sifting through complex telemetry to secure coding, vulnerability discovery and streamlining operations. Google's AI-based efforts like Big Sleep and OSS-Fuzz have demonstrated AI's capability to find new zero day vulnerabilities in well-tested, widely used software. And recently we developed CodeMender, an AI-powered agent that utilizes the advanced reasoning capabilities of our Gemini models to automatically fix critical code vulnerabilities. CodeMender scales security, accelerating time to patch across the open source landscape. It represents a major leap in proactive AI-powered defense and includes features such as root cause analysis and self-validating patching.

(29:36)
We believe the private sector, governments, educational institutions and other stakeholders must work together to maximize AI's benefits while also reducing the risks of abuse. As innovation moves forward, the industry more broadly needs security standards for building and deploying AI responsibly. That's why Google introduced the Secure AI Framework, or SAIF, a conceptual framework to secure AI systems. Our recent expansion to SAIF 2.0 addresses the rapidly emerging risks posed by autonomous AI agents and extends our proven framework with new guidance on agent security risks and controls to mitigate them.

(30:17)
We published a comprehensive toolkit for developers that includes resources and guidance for designing, building and evaluating AI models responsibly. We've also shared best practices for implementing safeguards, evaluating model safety and red teaming to test and secure AI systems. We are committed to developing technology responsibly and in a manner that is built for safety, enables accountability and upholds high standards of scientific excellence. For example, as part of our industry leading security architecture, we do not offer our core products such as Search, Gmail, Maps, and YouTube in mainland China. We also do not conduct AI research, offer domestic cloud services, or have data centers in mainland China.

(31:03)
Our comprehensive approach means we secure all components of the AI ecosystem, including data, infrastructure, applications, and models. As governments and civil society leaders look to counter the growing threat from cyber criminals and state backed attackers, we're committed to leading the way in using AI to tip the balance of cybersecurity in favor of defenders. Finally, this is more than a job for me. My youngest son, now 15, has suffered from a chronic illness for the past five years, during which time he has rarely moved from lying down in a dark, cold room.

(31:39)
One of the few things that gives him hope is that technologies like AI and quantum will continue to yield scientific and medical breakthroughs that will alleviate his suffering and the suffering of millions like him. Security and safety are among the critical foundations that will enable this science at digital speed. I am personally committed

Mr. Hansen (32:00):

… to that mission with the help of both the public and private sector. We look forward to answering your questions.

US Representative Andrew Ogles (32:07):

Thank you, Mr. Hansen. First of all, thank you for sharing and I look forward to hearing more about what you're working on, sir. We do have votes, so we will take a short recess. I would ask all members of the committee after the second vote to come back here as promptly as possible so that we can get to the remaining two witnesses and their opening testimony. I plan on starting as quickly as we can if that's possible. So, thank you all. We will take a short recess.

(32:35)
(silence).

Chairman Ogles (01:18:14):

The Committee on Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection and Subcommittee on Oversight, Investigations, and Accountability will come to order. Again, thank you, Mr. Hansen. And then I would like to recognize Mr. Zervigon for five minutes to summarize his opening statement. And again, to the witnesses, we appreciate your patience.

Mr. Zervigon (01:18:37):

Thank you. Good morning, Chairman Garbarino, Ranking Member Thompson, Thanedar, Chairman Ogles, Chairman Brecheen, and members of the committee. Thank you very much for the opportunity to testify today. My name is Eddy Zervigon and I'm the CEO of Quantum Xchange. We were founded in 2018, two years after NIST was tasked with evaluating the algorithms to take us into the Quantum Age.

(01:19:01)
Quantum Xchange is a cybersecurity company that works with the major network infrastructure vendors to enable encryption that protects data today and into the post-quantum future with hardware and software solutions developed entirely in the United States. While quantum computing and AI promise new breakthrough capabilities, they also introduce significant risks to our national and economic security that must be urgently addressed.

(01:19:28)
AI can enable faster, more dangerous cyber attacks, and quantum computers can break current encryption standards, exposing sensitive data. These capabilities will be weaponized by our adversaries, creating a very dangerous imbalance in our cyber defenses. For more than 50 years, encryption has safeguarded our data from theft and misuse. We've had the luxury of a set it and forget it mindset, trusting its strength by default. That era is now ending with quantum computing.

(01:19:56)
Think about it like this. Imagine all digital communications from government agencies sent over the past 10 years being readable by our adversaries. This is a real threat to the US today. Rogue nation states and state sponsored terrorist groups are collecting encrypted data now to decrypt later with a quantum computer. Further, now imagine our adversaries reading sensitive government data in real time and altering it without anyone knowing. This could be tomorrow's reality.

(01:20:25)
Public and private sector work on quantum-resilient solutions is ongoing. Technologies like post-quantum cryptography, PQC, or quantum-safe encryption algorithms are part of the solution, but not the complete answer. Despite our best efforts, post-quantum cryptography may still be vulnerable to quantum related attacks. All of which raises the fundamental question and challenge. What happens when an algorithm breaks? Because it is a question of when, not if.

(01:20:54)
Every agency CIO, enterprise CISO, security vendor, and network gear manufacturer must be able to answer that question. In our view, what's needed to ensure data security and confidentiality in the quantum age is an architectural approach, not just a new algorithm. This architectural approach enables agencies to focus on securing the network that data travels on to strengthen the existing infrastructure against quantum attacks while minimizing disruption to existing operations.

(01:21:24)
This is how our government agencies need to be protected. When you have valuables in your house, the first step isn't going out and buying a new jewelry box with biometric access controls. It's locking your front and back doors so the house is secure and harder to get in. Once your home is secure, then you can figure out what specific rooms need further locks or security measures to protect your valuables and sensitive documents.

(01:21:47)
Federal agencies handling sensitive data need to act now and follow the lead set by Customs and Border Protection. Our work with CBP to incorporate PQC across their network infrastructure in 2026 has shown that you can begin to secure your networks today with quantum-resistant technologies in a FIPS-validated way, without having to rip and replace your entire infrastructure. I cannot stress enough that the timing here is critical.

(01:22:13)
Agencies that fail to prepare today risk leaving their data vulnerable. Every day that we are not quantum resistant is another day that data is harvested to be decrypted later. It is important to note that we at Quantum Xchange are not the only ones advocating for action today. The Quantum Industry Coalition, of which we are a part, as well as Amazon Web Services, Google, IBM, Microsoft, Accenture, and others believe that agencies handling sensitive government data should be actively working and preparing for the transition and should begin migrating high-risk systems to FIPS/NIST-validated PQC where possible.

(01:22:53)
Having had the opportunity to meet with several of your offices, I was often asked what Congress can do. Through this committee's leadership and building off the work previously done, Congress can accelerate the timelines for PQC compliance, allocate the budget to allow the migration process to begin, and work with leaders within the administration to encourage adoption, as the technology is readily available and deployable today.

(01:23:16)
America's defenses cannot stop at our physical borders. Through your leadership and efforts and in partnership with private sector partners, like us, we can and will secure America's digital borders too. In closing, I want to thank you again for the opportunity to offer some thoughts today, and I look forward to your questions. Thank you.

Chairman Ogles (01:23:35):

Thank you, Mr. Zervigon. I now recognize Mr. Coates for five minutes to summarize his opening statement.

Mr. Coates (01:23:45):

Chairman Ogles, Ranking Member Swalwell, Chairman Brecheen, and Ranking Member Thanedar, thank you for the opportunity to testify. I'm honored to be here to discuss the changing cybersecurity landscape and the impacts of artificial intelligence and quantum computing. My perspective is grounded in over 20 years of experience in cybersecurity, including service as a chief information security officer, leadership in global software security organizations, founding a technology startup, and investing in cybersecurity innovation.

(01:24:13)
Today we sit at the precipice of significant change. While much attention is paid to AI and future breakthroughs like AGI, the most immediate impact on cybersecurity is not the creation of entirely new threats. Instead, AI and quantum technologies are collapsing the time, cost, and skill required to conduct cyber operations. These changes are outpacing existing technical, regulatory, and operational defenses, fundamentally reshaping the threat landscape.

(01:24:41)
Historically, different attackers, nation states, cyber criminal organizations, and lone activists were constrained by skill, resources, and scale. The most sophisticated attacks were largely limited to nation states while criminals focused on repeatable, monetizable techniques. That constraint is rapidly changing. Recent real world examples, such as the report issued by Anthropic, show AI systems being used as a central orchestration layer for complete cyber operations, coordinating reconnaissance, exploitation, and execution with limited human involvement.

(01:25:14)
While the techniques themselves may not be novel, the orchestration and automation represent a meaningful shift in adversary capability. Agentic AI further removes human constraints. Autonomous systems are not limited by time, fatigue, or attention, and research recently released from Stanford, Carnegie Mellon, and Grace One AI already show AI-driven penetration testing performing at or above the level of highly skilled professionals at a fraction of the cost.

(01:25:41)
At the same time, AI is accelerating vulnerability discovery and exploitation. AI-powered software analysis is capable of identifying previously unknown zero-day vulnerabilities faster than ever. Yet for many organizations, the longstanding challenge has not been awareness that a vulnerability exists, but rather the inability to patch and remediate quickly. As attack timelines compress, this operational inertia becomes more dangerous.

(01:26:08)
The practical result is a dramatic reduction in the time available for defenders. Comprehensive attacks are easier to launch. The pool of capable adversaries expands, and smaller organizations such as hospitals, schools, and small businesses are increasingly exposed to the same level of adversarial capability once reserved for critical national infrastructure. This compression of time changes the nature of cyber risk itself. Defenders are often no longer responding to early indicators, but to attacks that are already in progress.

(01:26:37)
Intelligent automation allows attacks to become continuous rather than episodic, eroding assumptions that organizations can recover between incidents or rely on periodic assessments. The widening gap between machine speed attacks and human speed defenses means cybersecurity outcomes are increasingly determined by whether defenses can operate at comparable speeds. These shifts have clear implications for defense, policy, and coordination. First, secure by design principles must become a baseline expectation, particularly as AI increasingly writes and modifies software.

(01:27:11)
Second, regulatory clarity is critical. Fragmented or ambiguous regulations can slow defensive responses in an environment where speed matters. Third, public-private coordination remains essential, ensuring that defensive learning keeps pace with adversarial innovation. Fourth, defensive capabilities must increasingly rely on automation and autonomy, as purely human-driven defenses will struggle to keep up.

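The fourth point, defenses that rely on automation and autonomy, can be illustrated with a deliberately tiny triage sketch. Everything here, the field names, the actions, and the thresholds, is a hypothetical assumption for illustration; it simply shows the shape of a responder that contains at machine speed and reserves human attention for the ambiguous middle:

```python
# Toy sketch of machine-speed triage (all names and thresholds invented):
# high-confidence alerts are contained automatically, ambiguous ones are
# queued for a human, and low scores are only logged.
AUTO_CONTAIN = 0.8   # contain without waiting for a human
NEEDS_HUMAN = 0.5    # route to an analyst

def triage(alert: dict) -> str:
    """Decide the automated action for a single scored alert."""
    score = alert.get("score", 0.0)
    if score >= AUTO_CONTAIN:
        return "isolate_host"       # act first at machine speed
    if score >= NEEDS_HUMAN:
        return "queue_for_analyst"  # human review off the critical path
    return "log_only"

assert triage({"score": 0.95}) == "isolate_host"
assert triage({"score": 0.60}) == "queue_for_analyst"
assert triage({"score": 0.10}) == "log_only"
```

The design choice being illustrated is ordering: containment happens before human review, because at machine-speed attack tempos a human in the critical path is the bottleneck the testimony warns about.
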
(01:27:36)
And fifth, quantum preparedness is necessary. While post-quantum cryptographic standards exist, the challenge lies in the time and coordination required to migrate existing systems before an adversary achieves cryptographically relevant quantum capability. Finally, trust and transparency in AI systems are crucial. AI reflects the data, incentives, and governance under which it is trained. In a security-related context, understanding potential model bias and model origin is as important as performance.

(01:28:10)
Artificial intelligence and quantum computing are accelerating forces that dramatically reshape cybersecurity. Our success will depend on whether our technical, operational, and institutional responses can adapt at a comparable pace. Thank you, and I look forward to your questions.

Chairman Ogles (01:28:25):

Thank you, Mr. Coates. Members will be recognized by order of seniority for their five minutes of questioning. And I'll recognize myself for five minutes. Dr. Graham, Anthropic's investigation into the recent PRC affiliated cyber incident involving Claude suggests we may be approaching a turning point in how cyber operations are conducted, where AI systems once tasked by human operators can execute and refine large portions of a cyber attack at machine speed rather than human speed.

(01:28:58)
And obviously you touched on this in your opening statement, but should this incident be understood as an early warning of a future in which AI systems are autonomously writing and adapting attacks against systems? And quite frankly, from a defensive perspective, what capability gaps do we have? Where do we need to be anticipating? I see a horizon that we can't quite define because of the rapidness and just the evolving nature of the technology.

(01:29:32)
I go back to kind of the arms race. There was a point at which between the US and Russia, there was this détente. There was this mutually assured destruction where it was at some point we all had enough nukes to kill everybody and blow the whole world up. It was all about delivery systems at that point. AI is different. There is no horizon. There is no kind of point at which I think it stops that there's a ceiling. So please, take it away.

Dr. Logan Graham (01:29:58):

You're correct that we are at a change point, and there are a couple of change points here. The first that we see now is, to our understanding, this is the first time where these models will now be sought and used by sophisticated state actors. We've been tracking this trendline for many years. This is the clearest evidence for the first time that this is now happening, but it's also possible this gets more serious and the stakes become much higher.

(01:30:24)
As you say, it's very possible that attacks from here on might scale if we don't properly secure and safeguard the models. And it's also possible that, while in this case we didn't see an instance of novel methods of attack, models could get that good. What's important now is a few things. First, it's really hard to win if we can't see the playing field. And I think the easiest way to start is continuing to evaluate the capabilities of these models. This is something industry should do; this is something government should do.

(01:30:58)
Second, we should be sharing threat intelligence as it happens so that we can mitigate as fast as possible. And third, as you say, we need to make sure defenders have the advantage, particularly the United States, make sure that it defends itself faster than it can be attacked. And we are working very hard, and I think all of industry needs to work hard to make that happen.

Chairman Ogles (01:31:20):

Yeah, to follow up on that point. Clearly, when you look at the investments that China is making in these quantum capabilities, AI, et cetera, there is a requirement between their private sector, if you want to even call it a private sector, because most of it is state-owned, that any innovation is immediately shared with the state. And so as you mentioned, for us to be successful, there's going to have to be this collaboration between private sector and government, quite frankly.

(01:31:50)
But one of the things, and obviously that's easier to accomplish, but I foresee a need where the industry itself is going to have to be sharing information. Of course, the problem you get into there is the proprietary nature of things. Obviously, there's the monetization factor that comes into that, but at the end of the day, we're talking about the homeland. So how do you see that working in practice, understanding the complications that we have essentially in a free market?

(01:32:19)
Then another layer to that is essentially the Five Eyes, the Seven Eyes, our European partners who are aligned with us in our values, who understand the existential threat that China poses. And again, it's important for everyone to understand that China is probing us daily to look for weaknesses and opportunities to take advantage of information that's not properly secured. What's different about this is the leveraging and the scale and the percentage, if you will, that AI was leveraged, sir.

Dr. Logan Graham (01:32:48):

It's very, very important. The industry does share the information that it has between itself. It's very important it shares that with government. It's very important that industry develops solutions now, whether it's by improving the models or building tools and putting them in the hands of the defenders. I think just making the models good enough isn't sufficient. We need to make sure people are using it to proactively defend critical infrastructure.

(01:33:12)
One way that I think government can be extremely helpful here is identifying the critical infrastructure that needs to be defended in this new era of cybersecurity and allowing industry to point its talents and innovation towards that.

Chairman Ogles (01:33:26):

Well, I want to thank again, all of you for being here. And quite frankly, to Anthropic for your report, I think it was one of those inflection points that we all understood the seriousness of this, but your report, I think, really put a light on where we're at in some of our vulnerabilities. And I'll recognize the ranking member of the gentleman from Michigan, Mr. Thanedar, for his five minutes of questions.

Mr. Thanedar (01:33:49):

Thank you again, Chairman Ogles. I appreciate all of our witnesses. I remain deeply worried and concerned about President Trump's decision to allow the export of advanced chips to China. Other than his desire to please a donor, I just don't understand why we would give such advanced technology to an adversary like China, who can then use this technology to attack us, who could use this technology to cyber attack our critical infrastructure.

(01:34:28)
Dr. Graham, would China having access to these advanced chips help advance their AI technology? Would that pose a threat to United States national security?

Dr. Logan Graham (01:34:46):

First, it is extremely important that America retains its AI leadership. The most important input to this is the compute advantage. My concern from watching these models progress in their capabilities, especially as a result of the cyber espionage campaign, is that if Chinese frontier labs have access to similar amounts of compute, they could train models that are equally or more capable in the cyber domain, and that this could unleash new scale and new sophistication that we will have a harder time detecting and defending against.

Mr. Thanedar (01:35:18):

Thank you. I only have limited time. I want to shift my focus to immigration. In President Trump's first term, and now in his second term, there is just so much hate against immigrants. And yet we know, and I hope the panel agrees with me, that the United States technology industry has benefited greatly from immigrants.

(01:35:46)
Just answer yes or no from the witnesses: do your companies have immigrants, skilled immigrants, and do you depend on them? Yes or no?

Dr. Logan Graham (01:35:57):

Anthropic is composed of many of the best talent from around the world.

Mr. Thanedar (01:36:03):

Does anybody on the panel here think we should have fewer skilled immigrants? Should we restrict access of immigrants to our technology companies, immigrants who help us stay on the edge? Well, certainly I'm myself an immigrant. At 24 years old, I came here escaping poverty in India, got a PhD in chemistry, and became a serial entrepreneur with many pharmaceutical companies, developing technology that helped the U.S. stay on top of innovation.

(01:36:39)
While it is important that American jobs be protected and that we create skills here, at the same time our tech industry heavily depends on immigrant skill sets.

(01:36:56)
How have the actions of the Trump administration made it difficult to retain international talent in your companies, with regard to both international workers choosing to leave or being forced to leave due to discrimination, the changes and hardship they face in getting their status adjusted, getting their green cards, the long delays in processing, and the difficulty of getting an H-1B visa?

(01:37:26)
I just wanted to understand what kind of impact this administration's positions are having on your ability to grow your companies and grow new technology for the United States. Anybody? Yeah.

Dr. Logan Graham (01:37:46):

Well, that's not the issue I cover at the company. Speaking for my team, it's really important that I find and hire the best people around the world who are committed to our mission of keeping AI secure and ensuring America's leadership.

Mr. Thanedar (01:38:01):

Yeah. Anybody else? How important is immigration?

Royal Hansen (01:38:08):

I mean, I'd just say, again, we'd have to talk to our HR department so we can come back to you. I'll relay that question to the teams.

Mr. Thanedar (01:38:17):

What percent of your organization has immigrants?

Royal Hansen (01:38:21):

I wouldn't know the exact number, but certainly we do have green cards and immigrants that work at Google.

Mr. Thanedar (01:38:26):

Thank you. Thank you. Anybody else? Again, the need continues, and for us, America, to keep its edge on innovation, whether it's cybersecurity, AI, or quantum, we must have a skilled workforce. And if that means we are to depend on immigrants, so be it. Thank you. I yield back.

Chairman Ogles (01:38:53):

The gentleman yields back. I recognize the chairman of the Subcommittee on Oversight, Investigations, and Accountability, the gentleman from Oklahoma, Mr. Brecheen.

Mr. Brecheen (01:39:02):

Thank you, Mr. Chairman. Mr. Hansen, just before I get started, prayers over your son. May the Lord do what human hands can't. Appreciate your passion. Appreciate your vulnerability in sharing that.

(01:39:14)
Also appreciate what you expressed about limiting services for Mainland China. I think that's great that your company's willing to do that. My hope is that others would watch your concern over proprietary information and the desire to make sure that U.S. citizenry is protected and follow your lead.

(01:39:40)
Mr. Graham, you said you felt that robust intelligence sharing could be enhanced. So what are you seeing that could be improved upon? You're on the front line of the free market, and of course, government learns from it. What can the federal government be doing to a greater level, Homeland Security specific to this committee's assignment, to make sure that robust intelligence sharing is happening, so that in real time we're sending out information so that others can be protected based upon immediate experience?

Dr. Logan Graham (01:40:16):

A fundamental issue here is that as the technology gets better, we are going to start seeing new patterns that are potentially more sophisticated that either in industry or across government, we've not seen before in terms of what these attacks look like.

(01:40:31)
I think the first most important thing is we need good and quick and sensitive channels to share the novelty of this information, possibly within and to government and across industry. We probably need to get ahead of it as well. So we need to be able to share information prior to the attack occurring. We regularly brief and share information about model capabilities as they're advancing.

(01:40:53)
In general, any effort here I think is extremely valuable and I think is going to put all of industry in a better position.

Mr. Brecheen (01:40:59):

And one of the things we can do is this: there are people who work behind the scenes who never get in front of the limelight of government. So without naming names, what division within Homeland Security can we highlight to just send a special thank-you for working with you?

Dr. Logan Graham (01:41:13):

I'm not an issue expert in the specific components of Homeland Security, but would very happily follow up with you to talk more [inaudible 01:41:20].

Mr. Brecheen (01:41:19):

It'd be great. We want to make sure we're congratulating those groups that are taking your experience seriously.

(01:41:25)
I want to talk about the at-scale capability: 80 to 90% of what would be formerly labor-intensive, hands-on work now turned into something generated by computer processing. So, Mr. Hansen, if AI is utilized to provoke, then AI can be utilized to defend. So how can we enhance our scale of utilizing AI to wall off?

Royal Hansen (01:41:55):

It's exactly the right question. And so when you talk about what we can do, I think of the old adage about the cobbler's children who don't have shoes.

(01:42:05)
And so there are far more defenders in the world than there are attackers, but we need to arm them with that same type of automation that you saw in the attack described by Anthropic because it's just in many ways using commodity tools that we already have to both find and fix vulnerabilities. Those can be turned from offensive capabilities to the patching and fixing, but the defenders have to put shoes on. They have to use AI in defense.

(01:42:39)
So while the attackers are experimenting, we need the defenders to be experimenting and becoming great users of AI to find the same vulnerabilities that were described, but instead of exploiting them, to patch them. And that's the kind of thing I mentioned: CodeMender is our project which takes advantage of this vibe coding, if you want to call it that. It's easier and easier to code; we make it easier and easier to patch.

(01:43:06)
And with so much of our problems based on legacy technology, small companies, and others, that's the only way we're going to get ahead. This is the defender's dilemma: the attacker needs to be right once, the defender needs to be right all the time. AI can help the defender be right all the time. That's what we need to do.

Mr. Brecheen (01:43:24):

Mr. Zervigon, if I did a horrible job of pronouncing your name, you have a last name like mine, I apologize. And Mr. Coates, you've taken the time to be here. I've got 30 seconds. If there's anything because this is such an exploratory exercise for so many of us that are not experts, is there anything you want to just highlight? I've got 20 seconds to split between the two of you.

Mr. Zervigon (01:43:46):

I would say innovative results demand innovative timelines, right? You can't be operating on legacy timelines in order to achieve innovative results to protect the homeland.

Mr. Coates (01:43:58):

The piece I would add is that the information sharing is critical and staying abreast of how this is evolving is going to be one of the most important pieces amongst enterprises fighting against the new threats.

Mr. Brecheen (01:44:09):

I look forward to highlighting Homeland Security staff with our committee staff. Thank you, Mr. Chairman.

Chairman Ogles (01:44:13):

The gentleman yields back. I recognize the gentleman from Rhode Island, Mr. Magaziner.

Mr. Magaziner (01:44:19):

Thank you, Chairman. I'm going to get right to the point. The Chinese government just launched the first-ever AI-powered cyber attack against our country that we know of. And at the same time, President Trump is selling the powerful H200 Nvidia chips, the next-generation chips, to China.

(01:44:45)
I will ask any of our four experts, or our colleagues, or anyone: does anybody think this is a good idea? Does anyone want to defend this decision? They are engaging in cyber warfare against us right now. They just did it. They just launched the first AI-powered cyber attack against U.S. organizations. Why in the world, given that they just did this, what, a couple months ago, would we be giving them these next-generation chips? Now at the very least, we ought to be holding them back until we have some way of verifying that these chips are not going to be used to attack us.

(01:45:27)
So I'll ask again, any of our witnesses, Mr. Graham, anyone. Why is it concerning to you that China is about to receive these H200 chips from Nvidia?

(01:45:43)
Mr. Coates, would you like to take a stab at it?

Mr. Coates (01:45:46):

The defenses that we put into our LLMs, that Anthropic, Google, and others are building to provide safety, are things that we can control and can use to prevent future attacks of this type from China using these resources. As China achieves the same capabilities in their technology from these chips, we lose control of the ability to put those safeguards in place and we're on our heels. So I agree with the concern that's being raised.

(01:46:13)
And the other piece that I will mention here is that as China provides greater frontier models like DeepSeek, and it's appealing to U.S. software corporations to integrate those into their stack for performance reasons, we have to remember that that is essentially delegating decision-making and trust to China, even though it might be U.S. software. And we need greater focus on that.

Mr. Magaziner (01:46:36):

Yeah. I mean, look, cybersecurity is a bipartisan issue, and I believe that there are people on both sides who care genuinely about keeping us safe in the cyber domain, but I don't know how anybody can be okay with this chip sale, given what literally just happened two months ago. And that is something that I think we need to find a way as a Congress to deal with because the administration, I fear, has made a grave mistake.

(01:47:03)
I want to talk about the attack more specifically because we need to learn as much as we can from it. Mr. Graham, I'm grateful that Anthropic was able to detect and then report about the nature of the attack, but my understanding is it took about two weeks for Anthropic to realize that the attack was happening, give or take. Is that correct? Can you explain to us, you mentioned it in your written testimony. Can you explain to us generally why it took so long and what lessons you have learned and how you can now detect similar attacks hopefully faster in the future?

Dr. Logan Graham (01:47:39):

Yeah. The first thing to note is we ultimately did detect and disrupt the attack. And when we did, it was clear that this was a highly resourced, sophisticated effort to get around the safeguards in order to conduct the attack. Very specifically, what they did was they used a private obfuscation network to ensure that it was difficult to trace where the operations were coming from. They broke the attack out into small components that individually looked benign but taken together formed a broad pattern of misuse. And then ultimately they deceived the model into believing that it was performing ethical tasks.

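The tactic described, splitting an operation into requests that look benign individually, points toward detection that scores combinations of signals across a session rather than single requests. The sketch below is purely hypothetical (the signal labels and the scoring rule are invented for illustration, not Anthropic's actual detection logic):

```python
# Hypothetical illustration: score a session by how much of a known
# suspicious *combination* of otherwise-benign signals it exhibits.
SUSPICIOUS_COMBO = {"network_scan", "credential_probe", "exfil_prep"}

def session_risk(events: list[str]) -> float:
    """Fraction of the suspicious combination seen across one session."""
    seen = set(events) & SUSPICIOUS_COMBO
    return len(seen) / len(SUSPICIOUS_COMBO)

# A single "help me audit my network" style request scores low...
assert session_risk(["network_scan"]) < 0.5
# ...but the full chain, each step benign on its own, maxes out the score.
assert session_risk(["network_scan", "credential_probe", "exfil_prep"]) == 1.0
```

The point of the toy is the aggregation boundary: any detector that only judges requests one at a time will pass each step of such a campaign, so the session (or account, or network) has to be the unit of analysis.
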
Mr. Magaziner (01:48:14):

I mean, they basically told the model, "Help us figure out how to protect ourselves from a cyber attack," but in so doing the model revealed the vulnerabilities to a cyber attack. Is that in layman's terms, what happened?

Dr. Logan Graham (01:48:28):

That is one of the components, and it's linked to one of the key issues with cybersecurity.

Mr. Magaziner (01:48:33):

Yeah. I mean, I would just say as a lay person that seems like something that ought to be flagged, right? If someone says, "Help me figure out what my vulnerabilities are," there should be an instant flag that someone may actually be looking for vulnerabilities for a nefarious purpose.

(01:48:48)
So for the time I have left, I'll just ask any of our witnesses: what regulation is required to ensure that commercially available AI products have adequate guardrails in place? We appreciate the efforts that companies are already undertaking, but there should be some sort of a baseline of standards that we set as a country, should there not?

Royal Hansen (01:49:08):

We released the Secure AI Framework, SAIF, and then there's a 2.0 version, as well as a Coalition for Secure AI, where we're not just helping set standards but open-sourcing the implementations, so broadly people can take advantage of and use those in their infrastructure.

Mr. Magaziner (01:49:26):

All right. Thank you all. I yield back.

Chairman Ogles (01:49:28):

The gentleman yields back. I now recognize the gentleman from Texas, Mr. Luttrell.

Mr. Luttrell (01:49:32):

Thank you, Mr. Chairman.

Chairman Ogles (01:49:32):

For his five minutes.

Mr. Luttrell (01:49:33):

Mr. Zervigon, is that how I say it? Perfect.

Mr. Zervigon (01:49:36):

Yes, sir.

Mr. Luttrell (01:49:38):

You spoke on architecture and how to secure our proverbial infrastructure and how information flows. And the question was hinted at earlier, and we need to know this on this side. Who is it that you deal with? Department of Homeland Security. Mr. Brecheen brought that up.

(01:49:58)
From my understanding, and this is what I'm trying to get clarity on, there are three entities: the Department of Justice, the Department of Homeland Security, and the Department of Defense all touch our communication capabilities above the ground and below the ground. Can you add clarity for me on who you deal with directly, and is there one more than the other? In the discussions I've had with our departments, they kind of hand the football off, and I really can't find anybody who's running point on this. And I'll start with you, sir, and we can move back and forth.

Mr. Zervigon (01:50:33):

I mean, from our experience, I think Customs and Border Protection is showing a lot of leadership on this issue and understanding that this is an architectural problem that needs to be remedied, and obviously with the cost-benefit analysis of being able to do this over a period of time.

Mr. Luttrell (01:50:50):

So is that brick and mortar facilities that our undersea cabling runs into, that Salt Typhoon's having a heyday with, things like that?

Mr. Zervigon (01:50:59):

All of them, all of the above. It's about any network connection, any network endpoint that needs to be updated for post-quantum cryptography.

Mr. Luttrell (01:51:08):

Mr. Hansen?

Royal Hansen (01:51:09):

Yeah. As an example, in the Chrome browser back in 2023, we changed the implementation of the encryption to begin to be post-quantum resistant, because everyone would use it. It's used broadly in the industry. So our strategy, whether it's undersea cables, whether it's data centers, whether it's the hardware, is to make it secure by default. [inaudible 01:51:34]

Mr. Luttrell (01:51:33):

Is it your company specifically that's providing the security profile for that? Or is that something that Homeland is coming in and assisting with, or the Department of Defense is coming in and assisting with? I cannot tell you, and I hate to say I'm ignorant as to what the answer to that really is.

Royal Hansen (01:51:52):

Yeah. In a world where every one of these departments or the scope of their oversight is digital or increasingly digital, we work across all of those entities you've mentioned and more on these kinds of-

Mr. Luttrell (01:52:06):

I feel like we're not doing enough. Case in point, Mr. Graham, with what happened with Claude. And you guys have Gemini, correct? Am I saying that correctly?

Royal Hansen (01:52:12):

That's right.

Mr. Luttrell (01:52:13):

Where the bad actors and nefarious actors are utilizing AI capabilities to hack into the sweet spot of what we're not looking at. Mr. Graham, was it a human or software that found the attack, or both?

Dr. Logan Graham (01:52:32):

On our side, it was a combination of both. First, there's a series of detection measures that are generally automated and software based, and this triggered a human investigation that allowed us to-

Mr. Luttrell (01:52:41):

So as fast as we're moving on the advancements of artificial intelligence, and I don't think we can stop, because if we slow down, everyone else is going to keep going, and then if we're behind now, we're absolutely going to be in last place. So here we go. If we move to a point where artificial intelligence removes the human element, but you needed the human element to find it, what happens?

Dr. Logan Graham (01:53:16):

I am enormously optimistic about the opportunities here to leverage AI to do this. This is the first time we're seeing some of this. [inaudible 01:53:24]

Mr. Luttrell (01:53:24):

We all are too. This is us being overly cautious. It's not us that's going to be able to regulate it. It's too fast. And by the time you show up in front of us to tell us what happened, whoever took hold of Claude, are they lying in wait? Are they sleeping inside the program now and we've missed it? And are they watching you fix the problem, and do they know how you fixed it, and will they attack someone else that's not as strong and capable as yourself or Google?

Dr. Logan Graham (01:53:54):

Well, in this case, it wasn't Anthropic itself that was infiltrated.

Mr. Luttrell (01:53:58):

I'm sorry. Okay. Thanks for clearing that up. Yes, sir.

Dr. Logan Graham (01:54:00):

It is very clear that sophisticated actors are now doing preparations for the next time, for the next model, for the next capability they can exploit. This is why we have to be detecting them as fast as possible and mitigating at the model layer.

Mr. Luttrell (01:54:13):

Because I believe you use the term super scientist. This is what AI has created. You've titrated hundreds of attackers down to two or three that have the capability to ask the AI the question on exactly how to get in.

Royal Hansen (01:54:30):

Yeah. I think one just-

Mr. Luttrell (01:54:32):

At a speed that's incomprehensible.

Royal Hansen (01:54:35):

To this point, for almost a decade we've been using AI in its earlier forms behind Gmail, behind the Play Store, and behind Chrome to do exactly what you're talking about. So no humans involved. So your question is correct, and it's actually been happening long before the large language models emerged.

Mr. Luttrell (01:54:55):

Okay. Thank you. I'm sorry, Mr. Chairman. Yield back.

Chairman Ogles (01:54:58):

The gentleman yields back. I recognize the gentlewoman from New Jersey, Ms. McIver, for five minutes.

Ms. McIver (01:55:03):

Thank you, Mr. Chair and ranking member, and thank you to our witnesses for joining us today.

(01:55:08)
Every community, state, and country will be impacted by the benefits and risks of AI. In fact, we already see these impacts occurring. While the United States has been a leader with AI technology, our rivals are innovating in this area with great speed, and we have to make sure working people here have what they need to stay safe and successful. Education will be key to maintaining American dominance, security, and economic success.

(01:55:37)
With my colleagues, Representative Cleaver and Senators Blunt Rochester, Hirono, and Schiff, we introduced the Workforce of the Future Act. This legislation would help us better examine the skills necessary for workers to thrive in an AI-dominated economy. It will also provide resources for educators and students to get the skills they need to participate in the workforce of the future and stay protected against adverse consequences of new technology. We need to make sure that all Americans are set up to succeed in a world impacted by AI, not be displaced by it. An AI-competent workforce will lead to a more secure United States and a stronger future for working people.

(01:56:24)
With that, Mr. Coates, I would love to talk with you. President Trump recently signed an executive order that would overturn any state-based AI regulation deemed burdensome. What are some risks of letting AI develop unregulated?

Mr. Coates (01:56:43):

I think the important piece with AI regulation is to set clear guidelines and rules of the road and establish transparency amongst the creators. We want to motivate innovation and ensure that the U.S. stays as a leader in the world on AI.

(01:57:01)
One of the challenges in cybersecurity in particular can be a patchwork of regulations across states to deal with, especially in things like data disclosure, breach responsiveness, et cetera. And so we want to make sure that in the fast-moving field of AI innovation, we set the right objectives clearly so we can operate with rules of the road, but we don't hamstring our technology organizations and prevent innovation. The last thing we want to be is on our heels or second to others in the world with AI technology.

Ms. McIver (01:57:33):

Thank you for that. Just to follow up, you mentioned cybersecurity. Can you expand a little on how important AI knowledge and competency will be in the future of cybersecurity?

Mr. Coates (01:57:44):

I would consider AI to be a critical piece of the future of cybersecurity, both for operators and defenders: understanding the core principles of cybersecurity through education, understanding how the technology works, and then understanding how the different resources can be used as a defender.

(01:58:03)
As I mentioned in my testimony, there's no question that for defense to be effective, it's going to have to move at the speed of computers. So we need the best humans to understand this technology and harness AI in a defensive capability.

Ms. McIver (01:58:17):

Thank you for that. As AI data centers continue to expand, how do you balance innovation with the significant environmental and economic burdens they place on local communities and infrastructure? Mr. Coates, you can start, but anyone else can chime in as well.

Mr. Coates (01:58:34):

Maintaining dominance in AI is multifaceted, from the technology innovation in the models themselves to having sufficient power, technology, and data centers to fund and power this innovation. So I do think it's critical to work across the nation to understand where we can have the right locations for data centers with sufficient power. We don't want to lose control of the pieces that go together to build technology. And to have effective AI, you have to have sufficient power and data center resources.

Ms. McIver (01:59:06):

Thank you. Anyone else? Mr. Hansen?

Royal Hansen (01:59:08):

I was just going to say, I talked a little bit about my son's situation and the science and tech. And you think of AlphaFold, the protein-folding work from Google that won the Nobel Prize last year. Fusion, and clean and safe energy, for me is another problem like the cobbler's children: let's use the AI to help solve that problem. You ask a very good question, and that's why we need to keep going on the science and technology as well.

Ms. McIver (01:59:34):

Got it. Anyone else in 28 seconds? All right. Well, thank you so much. With that, Mr. Chairman, I yield back.

Chairman Ogles (01:59:41):

The gentlewoman yields back, and I appreciate the topics you touched on, because as we move forward, and hopefully we'll have time to come back to it, there's this question of what the regulatory landscape looks like for this quickly evolving subject matter where energy's a factor, right? And this latency period where we're realizing we have these vulnerabilities that we're not quite ready to adapt to or backfill.

(02:00:06)
So this is one of those, again, this hearing is the beginning of a very large conversation, whether it's energy, whether it's homeland security, and quite frankly, the future of our role in the world.

(02:00:19)
I recognize the gentleman for Alabama for his five minutes of questions, Mr. Strong.

Mr. Strong (02:00:24):

Thank you, Mr. Chairman, ranking member. Witnesses, thank you for being here today.

(02:00:29)
Dr. Graham, as my colleagues have mentioned, one concern is that AI allows adversaries to scale operations without scaling personnel. This changes the threat calculus for the United States. When AI tools are misused for malicious cyber activity, what visibility, if any, do DHS and CISA have into these instances?

Dr. Logan Graham (02:00:53):

Well, I'm not familiar with the specific visibility of DHS and CISA here. I do know that what's important is industry should have information sharing mechanisms with government in these areas in order to give that visibility and also in reverse to understand the areas the industry should defend.

Mr. Strong (02:01:10):

Absolutely. Turning to you, Mr. Hansen: cloud platforms now underpin federal networks, critical infrastructure, and, increasingly, AI-enabled government systems. From a national security perspective, does that concentration of sensitive activity in the cloud create new widespread risk for the Homeland?

Royal Hansen (02:01:33):

Actually, I think it is helping us clean up legacy technology issues. When you look at the vulnerabilities we've had over the last decade, it's generally people running on old versions of software that they're not maintaining. And so we need competition in the space, and I think it is competitive in many dimensions, but overall, modernizing is going to make you more secure in the moment. [inaudible 02:02:01]

Mr. Strong (02:02:00):

I agree with you. Competition is where it's going to be also. AI and data centers are the future. I represent a state that is blessed with all forms of energy, coal, hydro, gas, solar, and nuclear power. We're able to meet the demand. What are your thoughts on AI and data centers in the future?

Royal Hansen (02:02:25):

I know this is a big topic as you would imagine at Google, and there may be better people to talk about it. I would just say to the point about using AI, we use AI in the management of our data centers, in the management of the power in a variety of ways. So using the technology to help us do it as efficiently, as effectively as possible is sort of my only perspective, but we could go deeper on that with others in the company.

Mr. Strong (02:02:51):

I also know that companies like Google and Meta, both of which are located in my district, work closely with universities and the public sector on emerging technologies. In my district, we have institutions such as the Alabama School of Cyber Technology and Engineering that focus on building early, hands-on cyber and technology skills.

(02:03:14)
Mr. Hansen, from your view, how can public, private partnerships and collaboration with universities help accelerate practical understanding and to secure adoption of AI and cloud technologies across the government?

Royal Hansen (02:03:27):

It's a really great question and it relates to the workforce question as well. We, in fact, over the last few years have stood up what we call cyber clinics. And these are not just with the big state universities or private universities. They're with community colleges and they represent places across the country. So I think the working together on the curriculum, the technology, the approach for the next generation is critical.

Mr. Strong (02:03:52):

Thank you. Mr. Zervigon, many national security data sets must remain secure for decades. What are the biggest practical challenges to deploying quantum resistant encryption at scale today?

Mr. Zervigon (02:04:10):

The desire to do so. The desire to do so, I think. I think the capabilities are there. There are many innovative technologies and innovative companies that can assist. And with the desire to do so, I think we can start by protecting the transport layer, right? The overriding layer over which this information, this data, travels.

Mr. Strong (02:04:31):

Thank you. How can government and industry work together to reduce risk without disrupting operations or slowing innovation?

Mr. Zervigon (02:04:40):

Looking at it from an architectural standpoint, it's not just about the math. It's not just about creating new algorithms. It's about creating an architecture that allows you to deliver these algorithms, be able to swap them out at scale, and be able to protect ourselves in the case that an algorithm is broken, because it will happen. And doing so allows us to mitigate the effects of a harvest-now, decrypt-later attack.
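The crypto-agile architecture Mr. Zervigon describes, being able to swap algorithms out at scale when one is broken, is commonly implemented by tagging every ciphertext with an identifier for the scheme that produced it and dispatching through a registry. A minimal sketch follows; the scheme names and toy "ciphers" are invented for illustration, and a real system would register vetted implementations such as the NIST post-quantum algorithms.

```python
# Illustrative crypto-agility envelope: each ciphertext carries the ID of the
# scheme that produced it, so schemes can be added, preferred, or retired
# centrally without re-architecting the data path.
# The "ciphers" below are toys for illustration only, not real cryptography.

REGISTRY = {}    # scheme id -> (encrypt_fn, decrypt_fn)
RETIRED = set()  # schemes considered broken: decrypt-only, never encrypt

def register(scheme_id, encrypt, decrypt):
    REGISTRY[scheme_id] = (encrypt, decrypt)

def seal(plaintext: bytes, scheme_id: str) -> bytes:
    """Encrypt under a current scheme; refuse schemes known to be broken."""
    if scheme_id in RETIRED:
        raise ValueError(f"{scheme_id} is retired; use a current scheme")
    enc, _ = REGISTRY[scheme_id]
    return scheme_id.encode() + b"|" + enc(plaintext)

def open_envelope(blob: bytes) -> bytes:
    """Decrypt by dispatching on the embedded scheme ID; old data stays
    readable even after its scheme is retired for new encryptions."""
    scheme_id, ciphertext = blob.split(b"|", 1)
    _, dec = REGISTRY[scheme_id.decode()]
    return dec(ciphertext)

# Two toy schemes standing in for real algorithms (e.g., classical vs. PQC).
register("toy-v1", lambda p: bytes(b ^ 0x55 for b in p),
                   lambda c: bytes(b ^ 0x55 for b in c))
register("toy-v2", lambda p: p[::-1], lambda c: c[::-1])

old = seal(b"report", "toy-v1")
RETIRED.add("toy-v1")   # scheme broken: stop producing, keep reading
new = seal(b"report", "toy-v2")
```

The design choice is that the algorithm is data, not code: retiring a broken scheme is a policy change in one place rather than a rewrite of every system that touches the ciphertext.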

Mr. Strong (02:05:03):

Thank you. To close out, I'd like to ask all the witnesses. If resources are limited, what should DHS and CISA prioritize first to reduce cyber risk most effectively? I'll start on the end.

Dr. Logan Graham (02:05:19):

I think establishing threat intelligence sharing channels, very important. Identifying infrastructure that needs to be secured that we can go secure.

Mr. Strong (02:05:25):

Thank you. Mr. Hansen?

Royal Hansen (02:05:26):

Modernization. This is not something we go backwards on. We got to go forwards.

Mr. Zervigon (02:05:32):

Again, looking at the transport layer, looking at the biggest pipes carrying the most important and pertinent data, and protect those first and then move downward from there.

Mr. Coates (02:05:41):

It would be information sharing on emerging threats and adoption of autonomous defense systems.

Mr. Strong (02:05:47):

Thank you. Mr. Chairman, I yield back.

Chairman Ogles (02:05:48):

The gentleman yields back and I recognize the gentleman from Louisiana, Mr. Carter, for his five minutes.

Mr. Carter (02:05:53):

Thank you, Mr. Chairman.

(02:05:55)
Cybersecurity is no longer a hypothetical risk. It is a real and growing threat to Louisiana and to our nation's energy security. Louisiana sits at the heart of America's energy system with refineries, petrochemical plants, pipelines, LNG export terminals, offshore platforms, and the electric grid all tightly interconnected. A successful cyber attack on any one of these systems could ripple across our entire national economy.

(02:06:26)
In 2021, the Colonial Pipeline cyber attack shut down a major fuel artery, caused shortages across the Southeast, and drove panic buying and price spikes, all without a single physical asset being damaged. That attack showed just how vulnerable our energy systems can be.

(02:06:44)
That's why we must act now by strengthening cybersecurity, modernizing systems, sharing threat intelligence, and using AI defensively to stop attacks before they succeed.

(02:06:57)
Mr. Coates, in your testimony, you state that bias in AI systems, whether intentional or unintentional, can affect how software is generated, how alerts are prioritized, how decisions are made.

(02:07:13)
How can bias enter AI-driven security tools, and what risk does that pose to our cybersecurity?

Mr. Coates (02:07:22):

It's an excellent question. The challenge in front of us is that we are offloading decision making into AI when we use AI in our software systems. And AI itself is trained on pre-training data, post-training data, configuration, et cetera, but that's reflective of the entity and organization that creates it.

(02:07:42)
CrowdStrike recently released a report showing that the DeepSeek LLM has bias. When you ask that model to create software and mention terms related to items like Tibet and other topics not favorable to the CCP, it generates code that is more vulnerable than had you not mentioned them. So this bias is built deeply into it, and maybe that is unintentional and a result of the training data that was used, but nonetheless, we need to be aware that if American corporations are using software that's powered by LLMs built outside the U.S., that bias could come back to put us in a riskier position.

Mr. Carter (02:08:23):

So what should we, should the federal government, should Congress be doing to detect and mitigate these actions going forward?

Mr. Coates (02:08:31):

The most important piece here is transparency: requiring in the bill of materials for software procurement that we clearly state the origin of the pieces of the software. This is something we're doing already, but it needs to be expanded to cover things like LLMs, including where a model was created, training information, et cetera.
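The transparency requirement Mr. Coates describes extends the software bill of materials (SBOM) idea to AI components: record where each model came from so procurement can flag LLMs of unknown origin. A minimal sketch of such a record and check follows; the field names are invented for this example and are not a standard, and a real SBOM would follow a format such as SPDX or CycloneDX.

```python
# Illustrative AI-extended SBOM: alongside ordinary software components,
# model components carry provenance fields (origin, training-data
# disclosure). Field names are invented for this sketch, not a standard.

components = [
    {"name": "webapp", "type": "software", "supplier": "Acme Corp"},
    {"name": "helper-llm", "type": "model", "origin_country": "US",
     "training_data_disclosed": True},
    {"name": "mystery-llm", "type": "model", "origin_country": None,
     "training_data_disclosed": False},
]

def flag_models(sbom: list[dict]) -> list[str]:
    """Return names of model components whose provenance is incomplete,
    i.e., missing an origin country or a training-data disclosure."""
    flagged = []
    for component in sbom:
        if component.get("type") != "model":
            continue
        if (not component.get("origin_country")
                or not component.get("training_data_disclosed")):
            flagged.append(component["name"])
    return flagged

flagged = flag_models(components)
```

A procurement pipeline could run a check like this automatically and refuse to approve any software package whose embedded models fail the provenance test.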

Mr. Carter (02:08:50):

Dr. Graham, you predict these attacks will only grow in effectiveness. What steps should we be taking to get ahead of these evolving threats, particularly those targeting critical infrastructure? What should Congress, what should this committee, do to arm you and others, to make sure that we are not playing catch-up but catching this before it happens?

Dr. Logan Graham (02:09:13):

The very first thing we should do is that industry and government should share threat intelligence so that we can get ahead.

Mr. Carter (02:09:19):

Is that happening at a rate that you're comfortable with?

Dr. Logan Graham (02:09:22):

It should always happen faster and more. The second is that I believe Congress can enable the deployment of these tools defensively. We can identify the infrastructure we should proactively defend, and we can support or remove barriers to pulling these tools in, in order to defend it.

Mr. Carter (02:09:41):

Mr. Hansen, as CISA developed and issued AI guidance, it worked in collaboration with our international allies. Why should the U.S. continue to coordinate with countries in this area?

Royal Hansen (02:09:52):

Yeah. I was thinking about this when I was in Poland just after the Russian invasion of Ukraine, and they explained how they were now getting grain out of Ukraine on the railroad through Poland, but it had to be changed at the border because the Soviet-era railroad track gauge was different from that in the West.

(02:10:15)
I view this the same. We want American technology to be the railroad gauge of the 21st century. And so to me, it's a national security question that people use our technology and not others.

Mr. Carter (02:10:30):

Mr. Zervigon, I've got a lot of good friends in Louisiana with that name. So we'll check boxes and see if Luis or some of those people are related to you, but-

Mr. Zervigon (02:10:39):

They are.

Mr. Carter (02:10:40):

Are they really? Fantastic. Some of my very dear friends. But now that we've had our family reunion, tell me about investments. Are we making the kind of investments to stay ahead of the nefarious actors?

(02:10:53)
As was mentioned earlier, we know that the bad guys sometimes get a lot more information than we do and their technology grows pretty quickly. What can we do to make sure … because we've got listening ears here, and this is a great bipartisan group of individuals who really want to help.

(02:11:07)
And I know my time's expired, so can you give me a quick answer on that?

Mr. Zervigon (02:11:10):

Sure. I mean, as I mentioned in my testimony, I think increasing the budget for the migration. I think we don't have to do as much on the inventorying and the assessing and the understanding. We know the pipes that we need to secure. We know the data that we need to secure. We need to start doing that. And also, what helps is accelerating the timelines and removing these artificial dates out in the distance when we should start doing it now.

Mr. Carter (02:11:33):

Thank you, Mr. Chairman. You're very generous.

Chairman Ogles (02:11:36):

Gentleman yields back. And thank you, sir, for your questions.

(02:11:38)
I'm going to go to the gentleman from Texas-

Mr. Luttrell (02:11:40):

Thank you, Mr. Chairman.

Chairman Ogles (02:11:40):

… Mr. Luttrell for a second round.

Mr. Luttrell (02:11:43):

The amount of data centers that we are building out, they draw a lot of power, and we are steadily increasing the footprint of each one of those facilities. Now, Texas stands alone as far as the national grid goes. There will come a time when the amount of power drawn by everything that we're putting onto the grid will kill it. I'm talking next year, two years maybe, max. Then what?

(02:12:21)
I think, because we're all in the game together, is there a way that you all can decrease the amount of power, through photonic communications or how the data centers themselves communicate, instead of that amount of power being drawn in? Because we'll never catch up. There's no way we can build out enough infrastructure to power the amount of data centers being built, just those alone.

(02:12:54)
So I don't know if this is more of a question than a concern, and I'm sure you're thinking about this. There's going to come a hinge point where it's either going to be an all-stop or we have to make do with what we have right now, because China, they don't have that problem. They're building hand over fist just to keep up with the amount of energy that they're drawing. What do we do?

Royal Hansen (02:13:15):

So we talked a little bit about the fusion or technological investment, so I think we need to get started on doing that. You've seen this from Google with our TPUs, which are a different type of chip. There are more efficient ways to do some of the computational work related to AI, and so I think we need a round of innovation, which we're investing in, to make these chips more efficient and more performant at the same time.

Mr. Luttrell (02:13:43):

Will that happen before the grid fails?

Royal Hansen (02:13:48):

That's the work. That's the work. Yeah, that is the work.

Mr. Luttrell (02:13:53):

Mr. Graham, Mr. Coates, anything on this? I mean, Ms. McIver hit the nail on the head here. This is a very real thing. We're not trying to slow innovation in any way, shape, or form. But the entire globe is moving to the metaverse, and we have to be able to sustain that, and we do not have the infrastructure in place.

(02:14:18)
I think in Texas it's two years. It's going to hit. I'd bet you a dollar on that one.

(02:14:23)
But anyway, thank you, sir. Yield back.

Chairman Ogles (02:14:27):

The gentleman yields back. I'll go to the gentleman, the ranking member of the Subcommittee on Oversight, Investigations, and Accountability, Mr. Thanedar.

Mr. Thanedar (02:14:34):

Thank you, Chairman Ogles. I appreciate it. As cyber attacks evolve, it is critical that the private sector share information about cyber threats with the federal government. This evolution is only accelerating due to AI, making it more important than ever that the federal government has the information necessary to understand the current threat landscape.

(02:15:04)
The Cybersecurity Information Sharing Act of 2015, the law that facilitates this kind of critical information sharing between the private sector and the federal government, is set to expire on January 30th. My question to all of you is: how important is it that Congress pass a long-term reauthorization of CISA 2015, particularly in light of the rapid evolution and deployment of novel technologies?

Mr. Coates (02:15:47):

I think this is critical. In cybersecurity defense, the basic primitives are known across organizations. We understand the plumbing, the core items that we need to do, but the techniques and the methods being used by the adversaries continue to change.

(02:16:03)
It's crucial that organizations can say, "We've discovered this piece," and share it with others. So collectively, we don't need to compete on defense, but look at it as a national imperative that we are secure, and information sharing is a key piece of that.

Mr. Thanedar (02:16:18):

Thank you.

Royal Hansen (02:16:21):

Yeah. We're very supportive. In fact, I'd go further and point to the information sharing and analysis centers, the ISACs, which exist by sector. This isn't just going to be a technical issue. This will be healthcare, energy, and so we need to focus on the sector-specific sharing as well, particularly as AI operates more at the human layer than at the technical layer.

Mr. Thanedar (02:16:45):

And the private sector is usually on top of the developments and certainly would be in a position to help the federal government, right?

Royal Hansen (02:16:57):

Absolutely. One of the reasons I came to Google after working in financial services for many years was the realization that every industry would need the benefits of security being baked into their technology, which includes sharing and making it easier for people to defend themselves.

Mr. Thanedar (02:17:13):

Thank you. I appreciate it. I yield back.

Chairman Ogles (02:17:18):

The gentleman yields back. There's a lot to unpack here. Unless other members come in, we can drop some of the formality and have more of a conversation. Feel free to jump in. I guess I want to start us off with this: we know that we have a lot of infrastructure gaps. I like to say we're the dominant predator currently across landscapes, but in this space in particular, that can change rapidly.

(02:17:47)
So when you're setting the marker down, if you had to predict, and whoever wants to answer, understanding this is just a prediction: when you think of our nearest adversary, how long before they are at quantum computing? I know that's a big question, by the way, but who wants to guess?

Mr. Zervigon (02:18:09):

That would be the $64,000 question.

Chairman Ogles (02:18:11):

Right, right. But are we talking about two years or 12 years?

Mr. Zervigon (02:18:13):

Well, I think the better analysis is, whatever the number is, does the data that you want to keep secret and protected need to stay secret beyond it? So if you think that a cryptographically relevant quantum computer is five years out, then any information that must stay secret beyond those five years, we know, is problematic. So we need to make sure that we're protected.

(02:18:37)
It's not like Y2K, where it's one moment in time that we need to worry about. It's that moment in time, plus all the information predating it, and protecting that information.
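Mr. Zervigon's timeline reasoning is often formalized as Mosca's inequality: if the years data must stay secret, plus the years a migration takes, exceed the years until a cryptographically relevant quantum computer arrives, the data is already exposed to harvest-now, decrypt-later collection. The witness did not cite this formula, and the numbers below are illustrative, but the arithmetic makes his point concrete.

```python
# Mosca's inequality: data is at risk when x + y > z, where
#   x = years the data must remain confidential (secrecy shelf life)
#   y = years needed to migrate systems to quantum-resistant crypto
#   z = years until a cryptographically relevant quantum computer (CRQC)
# The inputs below are illustrative examples, not predictions.

def at_risk(shelf_life_years: float, migration_years: float,
            crqc_years: float) -> bool:
    """True when harvest-now, decrypt-later already threatens the data."""
    return shelf_life_years + migration_years > crqc_years

# Records that must stay secret for 25 years, with a 7-year migration,
# against a hypothetical 5-year CRQC horizon: exposed today.
exposed = at_risk(25, 7, 5)

# Short-lived data (secret for 1 year, 2-year migration) may be fine.
safe = not at_risk(1, 2, 5)
```

This is why he argues the deadline is not the CRQC's arrival date: for long-lived data, the effective deadline passed as soon as adversaries began recording traffic.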

Chairman Ogles (02:18:48):

Well, that's kind of where I wanted to take this. When I think about, just in general, we as individuals, members of Congress, kind of device hygiene, there's the amount of information that's stored that, if compromised, is suddenly unlocked or unleashed. My fear currently, as has been stated, is that there's a harvesting going on of information across sectors, so financial services.

(02:19:12)
What actually piqued my interest in AI was being on the Financial Services Committee, and specifically the subcommittee on national security, and thinking about all of the threats and how they're escalating and continuing to escalate when it comes to personal information, but also breaching of accounts, where suddenly your voice, if it's out there somewhere, can be replicated, where IDs can be falsified, et cetera.

(02:19:36)
So if you want to speak to the amount of information and then what do we do with it? Do we need to take this information offline? Do we silo it? How do we clean up this mess, all these footprints and fingerprints that we have all left across that cyber landscape because it's being harvested, quite frankly, to be weaponized against us? You want to start, Dr. Graham?

Dr. Logan Graham (02:19:59):

I think there are a number of very substantial opportunities that we have here. Again, I'm extremely optimistic about using AI to help do this. Anthropic takes privacy and sensitivity of data extremely seriously. I think we could probably unleash quite a lot of innovation here using AI to secure data, infrastructure, sensitive systems.

(02:20:21)
I think this is going to be one of the important topics as we deploy this technology more and more into the economy: it's critical we get it to defend critical infrastructure without exposing that infrastructure any further.

Royal Hansen (02:20:34):

First of all, the reason we implemented the new encryption in Chrome was to start to get ahead of exactly the kind of question you're talking about. So there are some common utilities where, as we at Google or other companies migrate, you get an architectural benefit for others.

(02:20:51)
But to the point on using AI, we have used, again, even before large language models, AI to help identify unused data, label data per certain sensitivities, and then you can implement policy that protects it. But I think he's correct. We'll have to use AI to get to the scale of the problem that you're describing.

(02:21:13)
That means we'll also have to modernize, though, because we can't do that with the servers that are under desks and in second-class data centers that no one's modernized before. So with that combination of modernization and using the tools, I do think we can scale to that problem.

Mr. Coates (02:21:36):

I see two parts to the question you raised: one, how do we defend organizations against the rising orchestration of attacks, which we've talked about some, through AI, and two, how quantum changes things. The biggest challenge with the ability to decrypt traffic when quantum becomes relevant is that the change we need to make to be defensive here is an administrative and operational change.

(02:22:05)
We understand the systems that we have inside our organizations; we need to essentially upgrade them. Unfortunately, with the number of priorities we have for cybersecurity, it needs to become a top issue for organizations to say, "This needs to happen by this date," because otherwise we're going to be really caught behind the eight ball, where the data will be captured.

(02:22:27)
It will be decrypted, and the time to do the upgrade will be so significant that we'll be in that risky position for a much longer period.

Chairman Ogles (02:22:38):

Thank you. Google's infrastructure, I mean the amount of computing that you're supporting, from government to private to health, just across the board, when you look at these kinds of constant attacks … So we just had a hearing last week in Financial Services, on the Oversight Committee, and we had everyone from Verizon to the credit card companies to, across the board, the social media platforms, the architecture platforms.

(02:23:19)
We were talking about the threats that they're facing and the amount of investment that is being made and, quite frankly, leveraged. So when it comes to credit cards, for example, where you have AI that is constantly watching transactions, looking for those patterns that are otherwise outside the norms, what are the fail points when you look at that ecosystem from a Google perspective?

Royal Hansen (02:23:41):

Yeah. It's a great point, and I'll maybe just extend that a little bit and see if this is what you're asking about. The controls that we care about in finance or healthcare or transportation are going to be different. The risks are different. So it's not just about the plumbing, let's call it, the technology, but, in your credit card example, the limits you set: show me any transaction over $100, and you get that monitoring.

(02:24:11)
You think about the kind of monitoring that occurs in healthcare. I think the key is that this isn't just a technical problem. This is an industry problem, and AI can help because AI understands the language. If you write a policy that says, "This heartbeat level is problematic under these conditions," the AI model's going to be better at monitoring that than a human. So that's where we need to go: use AI.

(02:24:40)
I keep coming back to the cobbler's children. Let's not be shoeless in defending ourselves.
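The monitoring pattern Mr. Hansen describes, whether a $100 transaction limit or a heartbeat policy, amounts to turning domain policies into machine-checkable rules evaluated over telemetry. A deliberately simple rule-engine sketch follows; the thresholds and field names are invented for this example, and a production system would pair explicit rules like these with model-based anomaly detection.

```python
# Illustrative policy monitor: domain policies become named predicates over
# telemetry records, and any record matching a rule raises an alert.
# Thresholds and field names are invented for this sketch.

RULES = [
    ("high_heart_rate_at_rest",
     lambda r: r.get("heart_rate", 0) > 120 and r.get("activity") == "resting"),
    ("large_card_transaction",
     lambda r: r.get("transaction_usd", 0) > 100),
]

def monitor(records: list[dict]) -> list[tuple[str, dict]]:
    """Return (rule_name, record) pairs for every policy violation."""
    alerts = []
    for record in records:
        for name, predicate in RULES:
            if predicate(record):
                alerts.append((name, record))
    return alerts

stream = [
    {"heart_rate": 72, "activity": "resting"},                       # normal
    {"heart_rate": 150, "activity": "resting"},                      # alert
    {"heart_rate": 150, "activity": "running",
     "transaction_usd": 250},                                        # alert
]
alerts = monitor(stream)
```

The same engine serves any sector by swapping the rule set, which echoes his point that the controls differ by industry while the monitoring machinery stays common.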

Chairman Ogles (02:24:50):

Well, again, on the AI, I think about Elon and some of the other companies that are doing the autonomous robots or humanoids, whatever you want to call them, and the ability to have a partner that can now watch a child who is ill, or a spouse, or an elderly parent, where they're wearing a ring or a bracelet and being constantly monitored in real time, where you have a situation where they can dispense medicines, and again, immediately relay back to the doctor.

(02:25:20)
There's a huge upside to this, and it's going to be transformative in a way that, again, I think is hard to fathom. My concern is when we have these nation-states that are constantly seeking to exploit what otherwise could be used for tremendous good. When I think about China and their overt … I mean, at this point they're not even hiding it. I mean, I think they were testing.

(02:25:47)
The question, or the point, was made that I don't think we should ever underestimate our adversaries. This idea that they put it out there, it was detected; you know they're watching to see how you detected it. How can they replicate it, or do it better, the next time? So we know it's coming. It's just a matter of time.

(02:26:10)
The investment, quite frankly, that they're making means, I think, that from our perspective we have to do a better job: put up the guardrails, increase the transparency. But this flow of information is going to be critical, and that's going to include some of our partners overseas.

(02:26:27)
So from an industry perspective, how is that cross collaboration going with some of our European partners or Israel or to the extent that you can disclose?

Dr. Logan Graham (02:26:39):

On topics of national security, Anthropic works with U.S. and democratic allies quite heavily for exactly this reason. One of the areas of collaboration that has helped the most has been in testing of model capabilities so that everybody understands where we're at and what's coming down the pipeline. That is the key first step.

(02:26:58)
Additionally, there are probably international insights into how we do secure our infrastructure and learn from each other. Broadly, we generally support this, and I think it's a testament to America's leadership that it has instigated that degree of international collaboration.

Royal Hansen (02:27:15):

It's a great point. My job's changed dramatically from 20 years ago when I started. I was just thinking, this year I was in Tokyo, Singapore, Abu Dhabi, Tel Aviv, São Paulo, Warsaw, talking exactly about these kinds of issues and how we raise the baseline for those citizens. So it's a big part of the job. We realize that.

Mr. Zervigon (02:27:41):

For us, I think a large part of it is on the architecture. As we develop the architecture that allows different countries, different regions to employ the encryption that they want to employ, we certainly like to show leadership in that, and we are and with the work that NIST has done over the past decade. But at the end of the day, different countries, different regions are going to want to do what they want to do. So focusing on the architecture enables that.

Mr. Coates (02:28:08):

In terms of information sharing, I would point to the innovation pipeline. I was just in Tel Aviv last week at a major cybersecurity conference speaking with startups and other innovators in the space, and Tel Aviv, in particular, in Israel creates amazing technology that bridges to the United States as one of their main customer bases.

(02:28:29)
So as we look at where the next great ideas are coming from, they are being created inside the United States and they're being created with our allies, and working closely, especially with Israel, for cybersecurity is definitely to our advantage.

Chairman Ogles (02:28:42):

Well, on that, when I think about the innovation and that innovation pipeline, as we look at, again, the five, seven, 14 AI groups, I think it's imperative that we're sharing information across countries and nation-states, because certain countries, based on where they're at and the type of threats they're exposed to, get quite good at those types of attacks.

(02:29:10)
So what South Korea is facing may be slightly different, or a different perspective, than what Israel is facing versus Eastern Europe. One of the things that I've done is I've had the opportunity to travel in South and Central America and to Eastern Europe to talk about cybersecurity, and what troubles me in many of these countries, especially when you get into that second tier, is they're wholly unprepared.

(02:29:33)
I think, Mr. Zervigon, you mentioned that what we want to do is create a cyber environment where the world is, quite frankly, reliant on our architecture, our expertise. So the idea of the chips, there's some huge … It's a pause moment to figure out what do we want to share versus where do we want to hold back? That's probably not a conversation that we can have in this setting.

(02:29:59)
That being said is ultimately we want our global partners, whether in South America or Africa or Europe, Central America, to be dependent on us and trust us in this ever-evolving space because, in my humble opinion, the threat to the West and the developing world is China, and it's time we have that honest conversation.

(02:30:21)
Quite frankly, your report really puts a fine point on the fact that this was an intentional attack to undermine the United States of America, to undermine the West, and, quite frankly, to try to achieve a technical advantage that they currently don't have as they seek to leap forward in their own development and their own technology.

(02:30:41)
So with that, and we're probably going to end a little early, what I would love to do is just go down the line, any thoughts that you might have. Sometimes you're in a room and you don't ask the right question, so feel free to point out the right question. Then also, what are next steps? And what keeps you up at night?

(02:31:04)
Dr. Graham, you're at the top of the table. So we'll just start with you, sir.

Dr. Logan Graham (02:31:08):

To me personally, as we watch these threats, and have for the past two-plus years, we have seen the models go from zero to extremely useful and now used in the real world. This only happens because we monitor this threat in the first place. But the most important thing, in our team's view, is to take this moment as the change point: from now on, we will have a degree of scale that I think we've never had before and, very possibly very soon, a degree of sophistication.

(02:31:45)
I fear the day we wake up and models are doing things more complicated and sophisticated than the best humans on Earth are able to understand. The only answer, we think, over the long term is to make sure that we're using models to keep up and outpace the attackers. We need to give the defenders a permanent advantage.

(02:32:05)
We're going to work really hard to make sure our models can do that. We're going to work really hard to make sure that they're deployed. This is a cross-industry challenge. We have to work with government on it. This is, we believe, the fundamental issue.

Chairman Ogles (02:32:19):

Dr. Hansen?

Royal Hansen (02:32:19):

Yeah, maybe just two things. One, I'm reminded that in 2009, Google was compromised by Chinese threat actors. That goes back over 15 years. It was a watershed moment at the company, and we spoke openly about it. They had attacked 25 companies. It's really where the modern architecture for security was born.

(02:32:43)
You hear about Zero Trust. This was the company redoing our infrastructure from the ground up to be up to the kind of attacks we now knew were possible. To the point about AI, I think that's the next phase of this: to put in the hands of defenders the tools that will allow them to be successful in ways that we've frankly been … The numbers game doesn't work for us right now with all this legacy software. So now is the time to put those tools in the hands of defenders.

Mr. Zervigon (02:33:19):

I would say also to accelerate the timelines and the budget, as we talked about. I mean, 15 years ago, two-factor authentication, nobody had ever heard of it. Now it's everywhere. You can't buy concert tickets without two-factor authentication. Same thing is going to be the case with encryption.

(02:33:34)
I think the legislative branch, as well as the executive branch, continuing to lead on this, to push the envelope and set the table for innovative technologies and innovative companies to actually be able to start doing what they do best, rather than waiting for legacy timelines to take hold, is in everyone's best interest.

(02:33:55)
It starts with the government, and then it'll move quickly to critical infrastructure or critical industries, and then it'll move to everything, just like two-factor authentication did.

Mr. Coates (02:34:06):

The country that leads in AI will lead in the world. This is the most important and innovative time in recent history. I believe that it is imperative that we align behind the challenges, be that data centers, be that energy, be that human resources, be that regulation, to create a transparent playing field in the United States where we can spur innovation forward.

(02:34:33)
I think if we are caught up in any of the obstacles in pursuit of that, it will only give foreign adversaries the upper hand and then let them lead other countries to build on top of their technologies, which will be even harder to dig out from. So the future is in front of us, and leading in AI is the most important thing we can do.

Chairman Ogles (02:34:53):

Absolutely. I thank all the witnesses. Mr. Coates, to your point, there are a lot of subjects in Congress that we address that are very heated and at times partisan, but I would like to think this is one that isn't. We have a lot to do, whether it's the sharing of information, better educating our allies overseas, preparing for the energy load that we know is coming, or just sheer innovation. Like has been said, we want to put up the guardrails to protect Americans and our allies. We also understand that our adversaries are not going to use guardrails. I would argue that they, quite frankly, are willing to be reckless in achieving this goal, this endgame, which is AI and quantum, because it changes the world forever, and so I think this is the wake-up call.

(02:35:58)
This is that moment in time that we'll point to in this space. Did we heed the warning? Were we listening? Were we paying attention? You've got our attention. My challenge to you would be to feel free to come to this body, come to me, come to the ranking member, and have those honest conversations of, "We see a deficiency here, and we need your help," or, "This is a space where you're getting it wrong." Because if we don't have that communication and that trust, forget ideologies and politics and who you voted for.

(02:36:37)
This is about national security. This is about your son. It's not putting impediments and guardrails in the way that impedes that cure or whatever discovery is next. I can't imagine what the future looks like, but it's coming whether we prepare for it or not. So I commend all of you for being here. And quite frankly, I would love to have the conversation with each of you about having a working group that is outside that reports back to this body.

(02:37:06)
We can get bipartisan membership to participate in it to guarantee that we truly … It's one thing to give platitudes. It's one thing to say, "Oh, we're going to share information. We're going to work with our allies. We're going to do the right thing for the right reasons." But if we're not having the conversations, it's all platitudes, and I'm not one to shy away or beat around the bush.

(02:37:27)
If we don't get this right, we're screwed. I think you said, Dr. Hansen, the defender has to be right every time. Your adversary only has to be right once. If we mess this up, it changes everything forever.

(02:37:43)
Any final thoughts? Well, again, I thank you all. I'm humbled that you would come before Congress. It is important that we have this conversation. I look forward to getting to know each of you better, and I personally will reach out to each one of you individually so that you know that you have access to Congress every single day of the week, 24/7. I will answer my phone.

(02:38:04)
With that, the committee stands adjourned. And God bless you, sir, and your son.
