Apr 9, 2021

Pentagon Briefing Transcript on Artificial Intelligence

Robert Work, vice chair of the National Security Commission on Artificial Intelligence, and Lt. Gen. Michael Groen, director of the Department of Defense's Joint Artificial Intelligence Center, held a news conference on April 9, 2021, on artificial intelligence. Read the full transcript here.


Robert Work: (00:00)
… two overarching comments. First, for the first time since World War II, the United States’ technical predominance, which undergirds both our economic and our military competitiveness, is under severe threat from the People’s Republic of China. Bill Burns, in his confirmation hearing as director of the CIA, said that in the strategic competition with China, technology competition is the central pillar. And the AI commission agrees totally with that. The second broad thought is that within this technological competition, the single most important technology that the United States must master is artificial intelligence and all of its associated technologies. We view AI much like Thomas Edison viewed electricity. He said it is a field of fields; it holds the secrets which will reorganize the life of the world. Now, that sounds like a little hyperbole, but we actually believe it.

Robert Work: (01:21)
It is a new way of learning, which will change everything. It will help us utilize quantum computing better. It will help us in health. It will help us in finance. It will help us in military competition. It is truly a field of fields. So with that as background, we said, “Look, we are not organized to win this competition. We just are not.” We say we’re in a competition, which is a good thing; the first thing you have to do is admit you have a problem. So Houston, we have a problem, but we have not organized ourselves to win the competition. We do not have a strategy to win the competition. We do not have the resources to implement a strategy, even if we had one. So the first thing is we have got to take this competition seriously, and we need to win it. We need to enter it with one single goal: we will win this technological competition.

Robert Work: (02:25)
Now, we decided the best way to think about this is that we are not organized now, and we need to get organized. We said that by 2025, the department and the federal government should have the foundations in place for widespread integration of AI across the federal government and particularly in DOD. Now, there are three main building blocks to achieve this vision. First, you have to have top-down leadership. You cannot say AI is important and then let all of the agencies and subordinate departments figure out what that means. You have to have someone from the top saying, “This is the vector. You will follow the vector. If you do not follow the vector, you will be penalized. If you do follow the vector, you will gain extra resources.” So you have to have top-down leadership.

Robert Work: (03:19)
Now, when we made one of our first recommendations, the JAIC was underneath the CIO, and it was actually underneath DISA in many ways, administratively. We said, “If you want to make AI your central technological thrust, it needs to be elevated.” And we recommended that the JAIC report either to the secretary or the deputy secretary. That was actually included in the NDAA, and now the JAIC reports to the deputy secretary of defense. And that’s a very good first step. But we think the next step is to establish a steering committee on emerging technology. This would be a tri-chaired organization: the deputy secretary, the vice chairman of the joint chiefs of staff, and the principal deputy director of national intelligence. They would sit and they would look at all of the technologies. They would drive the thrust towards an AI future. And they would coordinate all activities between the intelligence community and DOD, which is a righteous thing. They would be the ones who identify lack of resources, address that problem, and also remove any bureaucratic obstacles.

Robert Work: (04:32)
The steering committee would oversee the development of a technology annex to the national defense strategy. The last time we had a list of technologies, there were 10 on the list. All 10 of those were very, very important. But when you have 10 things as your priorities, you have no priorities. You have to establish some type of prioritization and enforce it. So the technology annex to the national defense strategy would do just that. Also, the department should set AI readiness performance goals by the end of this fiscal year, 2021, with an eye towards 2025, when we need to be “AI ready.” So top-down leadership is the first big pillar. The second is to ensure that we have in place the resources, processes, and organizations to enable AI integration into the force.

Robert Work: (05:28)
Now, the commission said you need to establish a common digital ecosystem. The JAIC has established the joint common foundation. There are a lot of similarities between the two, although the commission’s view is a little bit broader than the joint common foundation at this point. But the point is that everyone sees the necessity for something that provides all users in the department access to software, trained models, data, computing, and a secure development environment for DevSecOps. We recommended that you designate the JAIC as the AI accelerator. We actually assess that China is a little bit ahead of the United States in fielding applications at scale. We can catch up with them, and we believe that the JAIC is the logical place in the department to really be the accelerator for AI applications at scale. The department has to increase its S&T spending on AI and all of R&D. We think it should be a minimum of 3.4% of the budget, and we recommend that the department spend about $8 billion on AI R&D annually. That will allow us, we think, to cover down on all the key research areas.

Robert Work: (06:54)
There are all sorts of specialized acquisition pathways and contracting authorities out there. We still continually need to refine them, because many of them are not perfectly applicable to software-type things. And I know the JAIC is working on this, but we have to have an updated approach to the budget and oversight process for these things. So the second big pillar is to ensure you have the resources and the processes and the organizations. And third, you have to accelerate and scale tech adoption. You really have to push this. So we recommend standing up an AI development team at every single COCOM with forward-deployable elements, and they leverage technological knowledge to develop innovative operational concepts and essentially establish a pull for AI-enabled applications that will help them accomplish their missions.

Robert Work: (07:50)
The department should prioritize adoption of commercial AI solutions, especially for all of the back office stuff. There’s really no reason to do a lot of research on those types of applications. The commercial industry has plenty of them. You just have to prioritize identifying the ones that can be modified for our use and bring them in as quickly as possible. We think the department should establish a dedicated AI fund under the control of the undersecretary of defense for research and engineering. That fund would allow the undersecretary to get small, innovative AI companies across the Valley of Death. And this would be up to the undersecretary of defense for R&E, who is the chief technology officer of the department. Now, the things that cut across all of these are talent, ethics, and international partnerships.

Robert Work: (08:46)
Let me talk about talent first. We think we have to have a DOD digital corps modeled after the medical corps. These are digitally savvy warriors, administrators, and leaders. We just need to know who they are. We need to code them in some way, and we need to make sure they’re in the places that have the highest return on investment. We need to train and educate war fighters to develop core competencies in using and responsibly teaming with machine systems: understanding their limitations, understanding what they should not be asked to do, et cetera. And equally, AI and other emerging technologies need to feature prominently in senior leader education and training, with a key focus on ethics and the ethical use of AI, and I’ll go right into that.

Robert Work: (09:40)
We’re in a competition with authoritarian regimes. Authoritarian regimes will use technology in ways that reflect their own governing principles. We already know how China wants to use AI. They want to use it for population surveillance. They want to use it to suppress minorities. They want to use it to undercut individual privacy and trample on civil liberties. That’s not going to work for a democratic nation like the United States. And so this is as much a values competition as it is a technological competition. The way Eric Schmidt, our chairman, talks about this is that we’re going to employ platforms which bring these technologies.

Robert Work: (10:24)
So let’s just think about how 5G worked. Huawei’s 5G technologies allow a country that uses them to essentially surveil its population. So these values are very, very critical and an important part of the competition. And finally, we’re not going to succeed if we do it alone. This is central thinking in U.S. defense strategy. So we have to promote AI interoperability and the adoption of emerging technologies among our allies and our partners. We are absolutely confident as a commission that we can win this competition. But we will not win it if we do not organize ourselves and have a strategy, and have resources for the strategy, and a means by which to implement the strategy and make sure that everyone is doing their part. Thank you.

Michael Groen: (11:31)
All right. Good morning, everybody, and thank you very much for participating in this important session. First, I want to say thank you to Secretary Work and the National Security Commission on AI team. Just incredible work. I mean, you see it if you’ve read the report; if you haven’t, I encourage you to go to the website and look at the NSCAI final report. What you see is a deep understanding and a deep analysis, down to first principles and bare metal, of what it takes for AI integration and preserving our military effectiveness. What they produced is critically important, critically important for us in the department, but it’s also critically important for our national competitiveness. In the same breath, I’d like to say thank you to Congress and department leadership, both of which clearly understand the importance and the need to innovate and modernize the way we fight and the way we do business.

Michael Groen: (12:24)
And I’m happy to report, as the director of the JAIC, positive momentum toward implementation of AI at scale. We certainly have a long way to go, but you can see the needle trending positive. With bipartisan support from Congress, and with great support from the DOD leadership, the services are beginning to develop AI initiatives and expand operational experimentation, that is, taking those first steps. The defense agencies are reaching out daily to share their best practices with us and with each other. The combatant commands, especially the combatant commanders, have caught a glimpse of what the future might look like through a series of integrative exercises. They like it, and they’re eager to gain these capabilities. The JAIC is now aligned under the deputy secretary, which gives her and the rest of the department leadership access to the tools and processes to reinforce their priorities, underline our ethical foundations, integrate our enterprises, and transform our business processes. And we are eagerly looking forward to that work.

Michael Groen: (13:26)
Like the NSCAI, we see AI as a core tenet of defense modernization. And when I say AI, I want to be clear: I’m not just talking about the JAIC. All AI, the efforts of the services, the departments, and the agencies, rides on the foundations of good networks, good data services, good security, and good partnerships. And an important part of the JAIC’s business model is to build those as part of our AI infrastructure. With lots of budget work ahead, as FY22 is relooked and the POM for 23 to 27 is developed, I think we’ll hear a lot about modern weapons systems and concepts. And it’s important that we understand that the potential of those weapons systems and concepts, their potential to modernize our war fighting, rides on the foundational data, the networks, and the algorithms that we build to integrate and inform them.

Michael Groen: (14:25)
We’ll have to talk about these technical foundations and architectures in the same conversation in which we talk about platforms. Getting AI right, and getting our secure data fabric environment right, will be central to our ability to compete effectively with the Chinese, and the Russians as well, or any modern threat for that matter. And there’s more, actually. In an era of tightening budgets and a focus on squeezing out things that are legacy or not important in the budget, the productivity gains and the efficiency gains that AI can bring to the department, especially through business process transformation, actually become an economic necessity.

Michael Groen: (15:10)
So in a squeeze play between modernizing our warfare that moves at machine speed and tighter budgets, AI is doubly necessary. So, what am I talking about when I talk about AI? As Secretary Work’s comments convey, the integration of AI across the government and the Department of Defense is much more than just a facile layer of technology applied. It’s not about shiny objects. You’ve heard the phrase “amateurs study tactics and professionals study logistics.” Well, in this environment, amateurs talk about applications and professionals talk about architectures and networks. Elevating the AI dialogue in the department, so that we are talking about the foundations of all of our modern capabilities, is a really important task, one that we’re working hard on.

Michael Groen: (16:03)
The core business model, that is, what the Department gives to the American people, what our mission is, doesn’t change. But a modernized, data-driven, software-heavy organization will do things in a different way. It really represents a transformation of our operating model: how do we do the things that we do as a Department of Defense? And that operating model will have to create a common data environment where data is shared, data is authoritative, and data is available. The data feeds and algorithms across the Department will create productivity gains, accelerate processes, and provide management visibility and insights into markets. And if all of that sounds like a modern software-driven company, and you can think of all of our tech giants and smaller innovative companies across the US economy, it’s because it is. It’s the same challenge. It’s the same problem.
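
As a purely illustrative aside, one minimal way to picture “shared, authoritative, available” data is a catalog in which each dataset is registered with an accountable owner and a single authoritative source. The sketch below is a hypothetical convenience, not anything described in this briefing or any real DOD system; the names (DatasetRecord, DataCatalog) and the example URL are invented for illustration.

```python
# Minimal, hypothetical sketch of a shared data catalog: every dataset has an
# accountable owner, and only one record per name may be marked authoritative.
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    name: str
    owner: str            # organization accountable for the data
    authoritative: bool   # is this the single authoritative source?
    access_url: str       # where authorized consumers retrieve it


@dataclass
class DataCatalog:
    records: dict = field(default_factory=dict)

    def register(self, record: DatasetRecord) -> None:
        # Enforce a single authoritative source per dataset name.
        existing = self.records.get(record.name)
        if existing and existing.authoritative and record.authoritative:
            raise ValueError(f"{record.name} already has an authoritative source")
        self.records[record.name] = record

    def find(self, name: str) -> DatasetRecord:
        return self.records[name]


catalog = DataCatalog()
catalog.register(DatasetRecord("aircraft_maintenance_logs", "Service A", True,
                               "https://example.mil/data/maintenance"))
print(catalog.find("aircraft_maintenance_logs").owner)
```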

Michael Groen: (17:01)
And so we have examples. There’s very little magic here. It’s about making our organization, the Department of Defense in this case, as productive and efficient as any of these modern, successful, data-driven enterprises. But there’s so much more, because all of this technology applies equally to our war-fighting capabilities and to our capabilities in the broad range of supporting activities, from all the defense agencies and other places that make up the business of the Department. We’ve created positive momentum for AI, and we continue to build on that now, but now comes the real critical test. As in any transformation, the hardest part is institutional change and change management of the workforce, and the practices and processes that drive a business. This step will not be easy, even within the Department of Defense, but it’s foundational to our competitive success, our accountability, and our affordability.

Michael Groen: (17:56)
As the NSCAI work reveals, we have a generational opportunity here. For AI to be our future, we must act now. We need to start putting these pieces into place now. So I want to quickly describe our position through two different lenses. One is competition and the other is opportunity. First of all, with respect to AI competition, I think it’s illustrative to talk about the economic impacts of artificial intelligence as a first order. Economic forecasts predict a $16 trillion AI economy in the next 10 years. And this could amount to massive GDP increases, as high as 26% for China and as high as 15% for the United States, for participating in this competitive AI marketplace. And if we do that, this core economic competitiveness of the United States then needs to be reflected in a core military competitiveness in this space as well.

Michael Groen: (19:04)
It’s important to note that while we talk about a $16 trillion market in the next decade, this happens to coincide pretty closely with China’s declared and often repeated intent to be globally dominant in AI by 2030. So the transformation of our economy has to be accompanied by close attention to the emerging threats that are declaring their intention to use this as a point of competition between autocracies and democracies. Our forces must operate with tempo, with data-driven decisions, with human-machine teaming. Our forces must have broad situational awareness and multi-domain integration. The PRC has a robust entrepreneurial AI environment. I mean, we’re all familiar with Ant Financial, or Alibaba, Tencent. These are global companies. But we’re also very familiar with the artifacts of population surveillance and minority oppression, the things that Secretary Work talked about, under the Chinese Communist Party’s rule. We read about Beijing’s large-scale tech campuses and their state-owned enterprises that create a pipeline from entrepreneurs and innovators in China, through civil-military fusion, that takes those capabilities directly into the PLA and military capabilities without intervening accountability or transparency.

Michael Groen: (20:33)
Their organizational efficiency under autocratic rule, which they count as an advantage, is being applied directly to their AI development, and they are surging forward in their capability. This has to give us pause to contemplate: what does China’s dominance in AI mean for us if they intend that dominance by 2030? What does that imply for us? But we also can look through the lens of opportunity. Our best opportunities lie in American innovation. Academia and small companies are brimming with good ideas in the AI space. The number of AI companies is proliferating rapidly. We have war-fighters across the Department, especially young ones, that can visualize their use cases in their operating environments and the things that they need to do from a military capability perspective. They’re good at this. They know how to operate in a data-driven and app-based environment, because they grew up that way, and they expect the same from their defense systems. We have the best science and the best AI research available in academia inside the United States and in small companies.

Michael Groen: (21:43)
And we also benefit from the fact that we have a tech inversion in place, where the AI technology that we need to run our Department and change our operating model exists literally right across the street. Among the modern, AI-driven, data-driven companies that have survived in a very competitive market, we have lots of good examples to look at. We also have a rock-solid ethical baseline that drives a principled approach, that drives our test and evaluation, our verification, our validation, our policy, and, at the end of the analysis, our trust in our AI systems. And I welcome your questions about that.

Michael Groen: (22:21)
The good news is we have a thousand flowers blooming inside the Department through the initiative of the services, the agencies, and the activities of the Department. And we’re doing better at integrating our industry technical expertise with war-fighting functional expertise, so that we can actually responsibly implement technology in the places that matter. We have the opportunity to drive the productivity, efficiency, and effectiveness of the Department to new heights. And the performers across the Department, in the JAIC, in the services, and in other places, are very excited and count themselves lucky to be part of this work. And with that, we very much look forward to your questions and appreciate your attention.

Moderator: (23:04)
All right, everybody, we’ve got about 16 or 17 reporters on the line. So if we could ask just one question at a time. And then I promise I will get to you for a second if we have time. So the first question is going to go out to Mr. Aaron Gregg from the Washington Post. Aaron, I know you’re on the line, I believe you’re on the line. Go ahead.

Aaron Gregg: (23:23)
Thank you guys for doing this. How does the enterprise cloud strategy play into all of this? Is this hodgepodge that you’re currently working with working for the Department? And what does the strategy look like under this new administration and the new SecDef?

Michael Groen: (23:40)
So, I’ll take that one first. What we have today, you’re right, is development environments, and pretty mature development environments, in each of the services. Some of the services have multiple development environments. And so one of the things that we have to look at is what degree of resilience we gain from having multiple dev environments, but also what advantages we gain by stitching those development environments together into a fabric. So that is our intent, and that is what we’re mapping out now. What we need is a network of development environments that shares, through containerized processes, authority to operate on networks, access to data sources, algorithms, and even developmental tools and environments. And so this is what we’re trying to construct today, so that we can broaden the base of developmental work.

Michael Groen: (24:40)
But on top of that, we need an operating layer, an operating network. And this is kind of the next step, because if you take those developmental algorithms and you’re going to employ them on a steady-state basis, in a combatant command, in a war-fighting situation, wherever, then you need a network of operating platforms where you can do the same thing. And so this is the next step. As we evolve developmental platforms into a fabric, we move that up to the operational level and integrate service networks into a global network. This will give us the capability to have global situational awareness, and then to achieve the goals of what’s described in JADC2, which is any sensor, any shooter, or any sensor and any decision maker. We’re going to build that network, the data stores, and the processes that make that possible. And we’re going to do that as a team across the Department, but the JAIC hopes to help coordinate the alliance that brings that together.
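
As an illustrative aside, the “any sensor, any shooter, any decision maker” idea can be pictured as detections published to a shared store and matched to whichever responder is available. The sketch below is a deliberate oversimplification under assumed names and a nearest-available-responder rule; it is not a description of JADC2 or of any JAIC design.

```python
# Hypothetical sketch: detections from a shared store are matched to the
# nearest available responder (an effector or a decision maker). All names,
# coordinates, and the matching policy are invented for illustration.
import math
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Detection:
    sensor_id: str
    x: float
    y: float


@dataclass
class Responder:
    responder_id: str
    x: float
    y: float
    available: bool


def assign(detection: Detection, responders: List[Responder]) -> Optional[Responder]:
    """Pick the nearest available responder for a detection, if any."""
    candidates = [r for r in responders if r.available]
    if not candidates:
        return None
    return min(candidates,
               key=lambda r: math.hypot(r.x - detection.x, r.y - detection.y))


# The shared data fabric is simplified here to a plain list of detections.
shared_store = [Detection("radar-01", 10.0, 4.0)]
responders = [
    Responder("unit-alpha", 12.0, 5.0, available=True),
    Responder("unit-bravo", 40.0, 9.0, available=True),
]

for det in shared_store:
    chosen = assign(det, responders)
    print(det.sensor_id, "->", chosen.responder_id if chosen else "no responder available")
```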

Robert Work: (25:38)
I can’t add to that.

Moderator: (25:42)
Okay, we’ll go to the next question, sir. Go ahead.

Luis Martinez: (25:44)
Hi, I’m Luis Martinez with ABC News. Just a question for both of you please. General, Secretary Work talked about how China is way ahead on this. In terms of what you just spoke about, worldwide awareness, China right now is really still more of a regional player trying to become a worldwide player. Does AI make that leap for them or is the AI advantage that they have still strictly only regional? And Mr. Work, if I could ask you about, I think the final report talked about the importance of the human element in AI. Can you talk about that, especially as some people may have concerns about since we’re here at the Pentagon talking about how AI relates to the weaponization of that technology?

Michael Groen: (26:33)
Yeah. So thank you, Luis, for the question. I think it’s important to pay attention to China and their relationship with AI and the technology. For example, the Chinese export autonomous systems to nations around the world, in some places that have some pretty ugly conflicts underway, lots of human suffering, and not a lot of world attention in some cases. So here you have a nation that’s proliferating autonomous systems with no ethical baseline, no sets of controls, and no transparency into those very dangerous small brush-fire wars that are going on in a lot of different places. So that proliferation of technology is something that we need to pay attention to. Similarly, just right now, for example, Chinese ships underway, moving east as a demonstration of capability, show you their willingness to push the boundaries and to be considered something more than a regional power.

Michael Groen: (27:39)
So that ambition, I think, is linked to their technological ambition of AI dominance. And so we have to look at, if these things are coupled today, what that holds for the future in 2025 or 2030, and we have to be prepared for that. And we have to be as agile and as competitive in this space as the Chinese intend to be.

Robert Work: (28:04)
Luis, it’s a great question. I would like to clarify something I said: we do not believe China is ahead right now in AI. The way we went about it as a commission is we said, “Look, AI is not a single technology. It is a bundle of technologies.” And we referred to it as the AI stack. The AI stack has talent, the people that are going to use this; data; the hardware that actually runs the algorithms; algorithms; applications; and integration. And so what we tried to do is look at each of the six and ask, “Where does the US have an advantage? And where does China have an advantage?” We believe the US has an advantage in talent right now. We definitely are the global magnet for the best talent. There are a lot of things changing in that, and unless we’re smart about our immigration policies, et cetera, we could lose that.

Robert Work: (29:02)
But right now, we judge that we have better talent. Second, we know we have an advantage in hardware, the United States and the West more broadly. And we think we have an advantage in our algorithms, although the Chinese are really pushing hard; we think they could catch up with us within five to 10 years. Now, they have an advantage, in our view, in data. They have a lot of data, and they don’t have the restrictions on privacy, et cetera, that we do. They have an advantage in applications. They’re very good at that. And we think they have an advantage in integration, because they have a coherent strategy to get all of the AI stack together to give them a national advantage. Now, because talent, hardware, and algorithms are so central and important to the stack, we judge that the United States actually is ahead of China in AI technologies more broadly.
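
Restated compactly, and only as an illustration of the structure of that assessment (the layer-by-layer judgments are the ones Secretary Work gives above; the data structure itself is an invented convenience):

```python
# The commission's layer-by-layer reading of the "AI stack," as described above.
ai_stack_assessment = {
    "talent":       "US advantage",
    "data":         "China advantage",
    "hardware":     "US advantage",
    "algorithms":   "US advantage (China could catch up within 5-10 years)",
    "applications": "China advantage",
    "integration":  "China advantage",
}

us_leads = [layer for layer, verdict in ai_stack_assessment.items()
            if verdict.startswith("US")]
print(f"US assessed ahead in {len(us_leads)} of {len(ai_stack_assessment)} layers: {us_leads}")
```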

Robert Work: (30:10)
But what we’re saying is that the Chinese are far more organized for a competition, have a strategy to win the competition, and are putting in a lot of resources. So as Lieutenant General Groen said, they want to be the world leader in AI technology by 2030. As soon as they say that, it means to me they recognize that they are not the world AI leader now, and they think it’s going to take them about 10 years, eight years or so, to surpass the United States. That’s why we say, “Look, we better be in this competition full on by 2025. If we’re not, then we run the risk of them surpassing us.” So I just wanted to clarify that. I wasn’t saying that China is ahead of us in AI. On the second part of your question: all you have to do is look at what they did with Huawei to see that the way they think about becoming a global power is not by invading countries.

Robert Work: (31:16)
It is putting out AI technology platforms that allow their values to proliferate around the world. And that’s what happened with Huawei. The other place they’re going really hog wild is global standard setting, which is kind of the US… That’s in our wheelhouse. We’ve been doing that since the end of World War II. And the Chinese are actually coordinating with the Russians to set global standards in AI that prefer their type of technology. So without question, I agree with Lieutenant General Groen: the Chinese have ambitions to be a global power.

Robert Work: (32:03)
They say by 2050, actually it’s 2049, the 100-year anniversary, they want to have the largest economy in the world and they want to be the foremost military power in the world. That’s not a future that the United States should say, “Yeah, let’s just let that happen.” Let’s compete, because we want to be the world’s foremost military power, and we want to be the most dynamic, innovative economy in the world. So the Chinese definitely have global ambitions. They are a regional power now, but they’re really starting to move more broadly on the world stage.

Moderator: (32:41)
Next question goes to Sydney Freedberg from Breaking Defense. Go ahead.

Sydney Freedberg: (32:47)
Hi, thank you for what you’re doing. Sydney Freedberg, Breaking Defense here. Let me ask a question particularly for General Groen. Of the various recommendations in the AI commission final report, which ones is DOD contemplating, which ones are actually concurred with, that you guys are going to try to put forward by yourselves or by asking Congress for legislation, and which ones do you guys actually not concur with?

Sydney Freedberg: (33:16)
The things, the conditions like the steering committee, like setting the various targets, like coming up with a strategy annex and so forth. Can you go through the checklist of things the commission wants you to do that you guys are green light, yellow light, or red light on proceeding with?

Michael Groen: (33:36)
Yeah. Great question, Sydney. Good morning. So, really good question. The NSCAI report, if you look at it in its full breadth, addresses a lot of recommendations at the national level, a place where defense may play a part but defense might not lead. There is a subset of recommendations, on the order of 40, that we have taken a hard look at, that are military specific, and that, really by all rights, defense would lead. So as we look at that list… I’m sorry, it’s closer to a hundred recommendations. As we look at that list, a good number of them, about half, maybe a little bit more, we’re already moving out on to a significant degree. So in those cases, it’s really just a matter for us of taking a look at the NSCAI recommendations in detail, to make sure that we’ve considered the full scope of what might appear in one of those recommendations, and then seeing if what we’re doing today aligns with those. So that’s kind of one large subset, which is the majority.

Michael Groen: (34:40)
Then there’s another set of recommendations that we’ve looked at, but we really don’t have a plan for yet. We recognize that it’s a problem, but we’re not quite ready to move out in that direction, just because of limited bandwidth here. So that’s another subset that we’re looking at. And then there’s a third subset, those that we really have to look hard at. They’re things that we hadn’t thought about before, and we really need to pull the strings on the implications of those. So there’s that third subset.

Michael Groen: (35:08)
When you talk about which ones we agree with or don’t agree with, I can’t think of any that we don’t agree with. The things that are most pressing, that most closely align with what we’re doing today, are the ideas associated with starting to create an enterprise of capabilities. All of the recommendations about the ethical foundations: we are all about fleshing out our ethical foundations and really integrating that into every aspect of our process. The recommendations about organizing with defense priorities, that will be the subject of a department process. So we, as an AI community, can advocate, but it’s the department process that’ll decide what the priorities are, and we’ll adhere to those priorities however they are articulated.

Michael Groen: (35:59)
The recommendation about workforce development, the family of recommendations about workforce development, we could not agree more. So how do we have a full range of training, a training environment or an education environment, that includes just like short-duration, tactical training, for example, for a coder to get on a platform, all the way to building service academies or building ROTC scholarships and that sort of thing.

Michael Groen: (36:25)
So, across the department, for some of these recommendations with large scale and large scope, it starts to supersede what just the AI community in the department does. So we work closely with research and engineering. We work closely with personnel and readiness and with acquisition and sustainment, to start to form the coalitions to get after the problems that are underneath those recommendations, to make sure that we understand them and that we are actually moving toward this new operational model for how we are going to operate as a department. Thank you, Sydney.

Robert Work: (37:07)
Sydney, I guess the way I would answer this, and I can’t really add too much more to what Lieutenant General Groen said, is that a little while ago, then-Secretary of Defense Mark Esper said, “AI is the number one priority for me as the Secretary of Defense.” And he went on to say that the competitor that really wins the AI competition will have a battlefield advantage for decades. Now, if you believe that, and I certainly do, and I believe the commission does, I think that’s a unanimous consensus, if you really believe that, you can’t keep doing what we’re doing now. I mean, the Defense Science Board said in 2014, “The one thing you have got to get right is AI and AI-enabled autonomy.” So here we are, seven years later, and we’re saying, “Okay, if we really believe that AI is going to give a competitor an advantage for a decade, are we satisfied with the progress that has happened since 2014?”

Robert Work: (38:06)
And if the answer is no, then you have to say, “Then we’ve got to change things up.” And of course, people are going to say, “Hey, why would you make the Undersecretary of Defense for R&E the co-chair and the chief science officer of the JROC? The JROC works perfectly.” Well, does every single program have a plug in it for AI, the ability to receive data, and machine learning chips? Does it have the ports to allow them to pass on information? If the answer is no, we’re not doing well enough. I think General Hyten, the Vice Chairman, has said this very clearly: he’s not satisfied with the way the JROC is functioning, and he wants to change it. So it really pushes these broader joint system-of-systems things that Lieutenant General Groen was talking about.

Robert Work: (38:58)
So from the commission’s point of view, look, right now we do not believe we are moving as fast as we should. And if the department agrees with that general assessment, then they need to change things.

Moderator: (39:18)
Okay. Go ahead.

Kristina Anderson: (39:19)
Thank you for taking my question. Kristina Anderson, AWPS News. I wonder if you could speak to getting the data, the secure data fabric, right, and then taking that up a notch to kind of the global structure of AI. How can you think about building this structure so that security is one of its fundamental elements? That’s one of the criticisms of the internet right now: notwithstanding the tremendous benefits that we have, it was not built with security in mind.

Michael Groen: (39:53)
Yeah. Thanks, Kristina. That’s an excellent question. And to me, that’s the operative question. Because I think there’s a good alignment as we talk about the operational effects that we want to achieve. There’s good alignment when we talk about building platforms and how we’re going to integrate data and share data. The very first question we start to ask at that point is, “Okay, how are we going to secure this? How do we secure this environment?” And so we have a full-court press on. Of course, we have native cloud security, additional security that we’ve been able to add. We’ve got lots of cyber security specialists helping us look at this problem set, but more importantly, we’re trying to keep an eye on the entire research and development ecosystem. So not just from a cybersecurity perspective, but how do we deal with adversarial AI, for example? How do we deal with purposeful intent to intervene or to interfere with our algorithms or spoof our algorithms?

Michael Groen: (40:53)
So this is probably, I would say this is certainly the top priority and probably our largest effort right now. From a research and development perspective, how do we make sure that as we build this out, we squeeze out all the vulnerabilities that we can? We will never have a perfect system. We will never have a perfect internet. But we need to protect it like we would protect any weapon system or any other critical node. Thank you.
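
As an illustrative aside on what “spoofing our algorithms” can look like in the simplest case, the sketch below probes a toy linear classifier with a small, bounded, gradient-sign-style perturbation and shows the prediction flipping. The model, data, and epsilon are invented for illustration; this is not a description of any DOD test procedure.

```python
# Minimal, self-contained adversarial-perturbation probe (numpy only).
import numpy as np

# Toy linear classifier: score = w.x + b, predict class 1 if the score is positive.
w = np.array([1.5, -2.0])
b = 0.1


def predict(x: np.ndarray) -> int:
    return int(w @ x + b > 0)


def adversarial_perturb(x: np.ndarray, epsilon: float) -> np.ndarray:
    """Fast-gradient-sign-style probe: step against the model's score.

    For a linear score w.x + b the gradient with respect to x is just w, so a
    bounded step of size epsilon along -sign(w) (when the score is positive)
    pushes the score toward the decision boundary fastest.
    """
    score = w @ x + b
    direction = -np.sign(w) if score > 0 else np.sign(w)
    return x + epsilon * direction


x_clean = np.array([2.0, 0.5])                 # classified as 1
x_adv = adversarial_perturb(x_clean, epsilon=0.8)

print("clean prediction:      ", predict(x_clean))
print("adversarial prediction:", predict(x_adv))
```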

Robert Work: (41:19)
It’s a central question, Kristina. As Lieutenant General Groen said, we’re moving into an era of AI competition, and poisoning data is a way to gain an advantage. We have to be able to guard against that. We need to red-team the heck out of our databases; we need to have people trying to break into the databases and poison data often, so that we can identify vulnerabilities and fix them. We have to have means by which to check the data. And there are all sorts of different things… The commercial sector is doing this also. They’re looking at how you protect the data and how you protect your algorithms to make sure that no biases are inserted. So, look, we don’t have all the answers for this yet, but it’s central to the thinking of the JAIC, as I think you’ve heard.

Robert Work: (42:26)
And our AI has to be better than their AI. All you have to do is envision an AI-enabled cyber attack. If their AI is better on offense than our AI is on defense, that’s going to be a bad day for us. So constant red-teaming, constant development with DevSecOps in mind, constant testing and evaluation, validation, and verification. This is our future now. It’s going to be something we just have to take as a matter of course.
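
As an illustrative aside, one very simple screen a red team might run against a dataset is to flag records that sit far from a robust baseline, on the theory that crude poisoning often shows up as statistical outliers. The sketch below uses a median-and-MAD rule with invented data and thresholds; real poisoning defenses are much broader than this.

```python
# Hypothetical data-poisoning screen: flag readings far from a robust baseline.
import numpy as np


def flag_outliers(values: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """Boolean mask of values far from the median, in robust (MAD-based) units."""
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        return np.zeros_like(values, dtype=bool)
    robust_z = np.abs(values - median) / (1.4826 * mad)  # 1.4826 scales MAD toward sigma
    return robust_z > threshold


# Mostly well-behaved readings, plus two implausible records a red team injected.
rng = np.random.default_rng(1)
readings = np.concatenate([rng.normal(50.0, 2.0, size=200),
                           np.array([500.0, -300.0])])

suspicious = flag_outliers(readings)
print("records flagged for review:", int(suspicious.sum()))
```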

Moderator: (43:04)
Next question goes out to Tony from Bloomberg News. Go ahead, Tony.

Tony Capaccio: (43:09)
Hi, this is Tony Capaccio. I have a question, an operational-application question that I think most of the citizens can relate to. Next month marks the 10th anniversary of the Bin Laden raid by SEAL Team Six. Conceptually, if AI was in widespread use in 2011, how might it have been employed in planning and executing the raid? I’m thinking facial recognition, pinpointing the movements of activity in and around the compound, calculating the height of the walls and their thickness, et cetera. Can you think outside the box and give us a couple examples of how it might’ve been used in that raid?

Michael Groen: (43:47)
Yeah. Hey, so great question, Tony. So when we look at that, remember what I said: amateurs study apps, professionals study architectures. I think for any military operation, and I can’t really speak to that particular event, it’s easy to get fixated on the applications that exist at the tactical edge. But when you walk back a military problem, you start with those tactical warnings on the objective or near the objective, and you back up a step and you need to be broadly situationally aware. And you back up another step and you need to be aware of not just the red capabilities in the red force, but also where the blue forces, your own forces, are and their readiness and their availability.

Michael Groen: (44:39)
You also need to understand the green forces, those partner forces that we might have in the area, or the white forces, the innocent civilian populations who might be in the area. So all of those kinds of situational awareness activities can be worked through AI, right? That can be done much better than a human being can do it, by leveraging AI to work on all that data. And you start backing up even further. And you talk about, well, how do you have effects integration? When do you get onto the objective and how do you coordinate with an adjacent unit? How do you make sure that your fires are safe and are focused on good targets? Again, AI can help with the information flow that informs that decision-making.

Michael Groen: (45:28)
Back up further: weather effects. Do we have global weather in a database that everybody can use and integrate into their applications? Do we have a threat picture that’s integrated into our applications in defense? Do we know threatening behavior? Have we modeled that? Do we use it for understanding the human populations, predictive modeling, and the list goes on and on. And the further you go back into the institution, you’re talking about modeling and simulation, platform maintenance, preventative maintenance for helicopter platforms, for example, integrated logistics, contingency management, fleet maintenance. Think of an electric car company that broadcasts updates to its entire fleet of vehicles. These are the sorts of capabilities that AI brings to the department. And when you start stacking those up, you really see how it focuses.

Michael Groen: (46:23)
You focus that lens on a tactical military problem. It’s not just the AI at the tactical edge, but it’s all of the AI that has contributed all the way to the back office of the Pentagon, where we’re doing financial records, right? Or inventory management, or all of the business of defense, focused through data into that objective. So I hope that helps.
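
As an illustrative aside on the preventative-maintenance example above, the sketch below fits a linear trend to a component’s vibration readings and flags it for maintenance if the trend is projected to cross a limit within a planning horizon. The data, limit, and horizon are invented for illustration, not drawn from any DOD system.

```python
# Hypothetical predictive-maintenance check: project when a trending reading crosses a limit.
import numpy as np


def hours_until_limit(hours: np.ndarray, vibration: np.ndarray, limit: float) -> float:
    """Fit a linear trend and project how many more hours until it crosses the limit."""
    slope, intercept = np.polyfit(hours, vibration, deg=1)
    if slope <= 0:
        return float("inf")          # no upward trend, never projected to cross
    return (limit - intercept) / slope - hours[-1]


flight_hours = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
vibration = np.array([1.10, 1.28, 1.41, 1.66, 1.90])     # trending upward

remaining = hours_until_limit(flight_hours, vibration, limit=2.5)
if remaining < 100:                   # maintenance planning horizon, in flight hours
    print(f"schedule maintenance: limit projected in ~{remaining:.0f} flight hours")
else:
    print(f"healthy for now: ~{remaining:.0f} flight hours of projected margin")
```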

Michael Groen: (46:47)
You know what? I’ll just give you one other point. For almost every military activity, there’s a commercial analog to that activity. You think about a large-scale online shopping network, that has to deal with ordering and buying and recommending and presenting options and selecting options and delivering. Every one of those has a parallel in the military space. The AI that we integrate from commercial industry today, that technology that’s readily available, helps us do those same things with the efficiency and productivity that any large-scale, successful, commercial corporation does today. And from a business perspective, that’s exactly what we need to have. Thanks.

Robert Work: (47:30)
That’s a clever question, Tony. And to me, the biggest change would be our ability to look at enormous amounts of social media data, et cetera, to make predictive analysis and also make judgments. I’m a movie aficionado, so everything I know about the Bin Laden raid I learned in Zero Dark Thirty. And if Zero Dark Thirty is correct, what DCIA, the Director of CIA Panetta was-

Robert Work: (48:03)
… DCIA, the Director of the CIA, Panetta, was constantly asking, “How sure are we that he’s in the compound? Before we execute a raid in another sovereign country, how sure are we?” Well, I just go to the shoot down of the Malaysian airliner over eastern Ukraine. We knew the Russians did it immediately, through national technical means and other things, but we didn’t want to release that because of sources and methods. There was a company called Bellingcat that essentially put together the storyboard for the entire shoot down using social media. They had a picture of a TEL with three surface-to-air missiles on it, a picture of it crossing the border into eastern Ukraine with a serial number on the side. They had another picture of a missile contrail right next to the village where the shoot down occurred. They had another picture of the same TEL with the same serial number going back into Russia with two missiles instead of three.

Robert Work: (49:11)
They put together a storyboard just using social media. It was 100%. Any objective person would say, “Whoa, the Russians really did shoot down that airliner.” And had we had the capability we have now to go through all sorts of data, then I think the analysts would have been able to tell Director Panetta, “We are 100% certain that Bin Laden is in that compound, and here’s all of the data that we can show you.” And then there’s predictive analysis, like Lieutenant General Groen said. The president might’ve asked, “What do we expect to be the reaction of the Muslim community if it becomes aware that we executed a raid and killed Bin Laden?” AI is able to do that type of predictive I&W, indications and warning. We’re doing it right now in Afghanistan, using AI to predict when attacks might occur or predict actions by our adversaries. I don’t think AI would have made that much difference for the raid force itself, unless they had specific applications that they needed to ask, “What is the most up-to-date intelligence? What is happening? Do we need to change our plan,” et cetera.

Robert Work: (50:42)
But to me, that already gives you kind of an answer. AI gives you a tool that we’ve never, ever really had. One of our commissioners, Ken Ford, refers to this as AI giving commanders “eyeglasses for the mind,” and I thought that was such a pithy observation. It helps look through enormous amounts of data that a human would be incapable of interpreting, and the AI is able to find patterns, make inferences, et cetera. So that’s what we mean by human-machine collaboration. You let the machine do all that hard number crunching and stuff like that, and you leave the commander, the human commander, to exercise their creative spirit and their initiative and their understanding of the broader strategic concept. Human-machine collaboration is a big, big deal in the future of AI.

Moderator: (51:41)
The next question goes out to Jasmine from National Defense.

Jasmine: (51:46)
Hi. Thank you so much for doing this. My question has to do with comments that the chairman of the commission, Eric Schmidt, has made before. He said that China is maybe two years behind the United States. Lieutenant General Groen, I was wondering if you agree with that assessment, or do you think that we have a bit more of an advantage?

Michael Groen: (52:08)
Yeah. Thanks, Jasmine. I think I would echo what Secretary Work articulated before. Trying to measure advantage in a space like this is a very difficult undertaking. I think you can look at places where there’s clear superiority on the US side, like our academic environment; I mean, the United States’ academic community is unsurpassed globally. You look at our small innovative companies and the things that they’re working on; almost every company these days is an AI company, and a lot of them have really good vertical, stovepipe capability, so there’s great innovation all across the United States.

Michael Groen: (52:52)
On the Chinese side, I mean, you do have the organizational efficiency of autocracy, and you have all of the moral impacts of that as well. But I think the competition, if you really wanted to simplify it, might be, in a sense, organizational efficiency versus innovation and the innovation of efficiency. And so when you look at that competition through those two lenses, you really have to pay attention to both, right? How do we achieve organizational efficiency in our efforts so that we can keep pace with a bigger machine? But then also, how can we continue to innovate, so that we’re not stuck in yesterday’s technology and we continue to push the envelope? So it’s a really hard thing to measure. I think both countries have demonstrated significant global capabilities, and so we have to be in this fight for sure.

Robert Work: (53:50)
Yeah. I mean, I agree. This is really a tough thing to judge. The way we did it, as I explained earlier, is we broke down the AI stack into its six components. We judge that we’re ahead or slightly ahead in three of the six, and China is ahead or slightly ahead in the other three, so it’s a really, really tight competition. We admitted that the Chinese could probably catch up with us in algorithms within five to 10 years. We also say that we’re 100 miles away from going from two generations ahead in hardware to two generations behind, if, for example, China seized Taiwan and the chip fabrication facilities that are on Taiwan. So Eric Schmidt has been working in this area for a long time, and his judgment is, “Look, I think we’re about two years ahead.” But he will tell anyone who listens that the Chinese are coming on fast. They’re ahead in some, we’re ahead in some. We need to approach this competition like a politician approaches a political race: you have to run like you’re losing. And so it’s important that we really gear up and go.

Moderator: (55:12)
Okay, we have time for one more question. That’ll go out to Jackson from FedScoop. Go ahead, Jackson.

Jackson: (55:18)
Thank you so much. I hope I have my dates right here, but for Lieutenant General Groen, I believe we’re six months out from your announcement of JAIC 2.0 and shifting to be more of an enabling force. I’m hoping you can give us an update on how that change is going. And if I could ask specifically: are you now sending out officials to be liaisons to specific AI offices across the force? How is that going? Is there any tension with the JAIC showing up and offering help? Where has it been successful? Are there any things that you might change in the future? And then, if I could also ask Mr. Work: previously you’ve said that the JAIC should take a naval nuclear reactor, Rickover-type strategy and be kind of an AI coordination office. Do you think that holds any tension with the kind of thousand-flowers-blooming approach that’s being taken? What is your current stance on that? Thank you.

Michael Groen: (56:18)
So I’ll start with … Thanks, Jackson. Great question. As you, I think, accurately describe, with what we wanted to do in JAIC 2.0, we realized our initial business model wasn’t getting us where we needed to go. It was not transformational enough, and so we really started focusing on broad enablement, and I think we’ve been fairly successful in that space. We do have great outreach organizations. We pay keen attention to all of the service developments, and we try to partner with all of them. We pay keen attention to the demand signals from the combatant commands, and we want to work with anybody who is doing AI today. But here’s how we approach that problem set, right?

Michael Groen: (57:02)
One of the things that we do well is measure our success in the success of others, and the second thing that I think we do well is that we don’t go to these organizations, or partner with these organizations, from a position of teacher and student. We come in as archivists of best practice across the department and say, “Hey, show us how you’re doing that. Let us learn from you.” And then we can share: “Hey, there’s another agency in the department that has a problem very similar to yours, and here’s how they’re addressing that.” So we play broker for information and expertise across agencies, across services, across combatant commands. And then, because of our congressional authority now to do our own acquisition, for example, we can actually start providing a much broader array of support services and enabling services that help make all of those customers successful.

Michael Groen: (58:02)
We think we’re a force for good here. We approach the challenge with humility, and we measure our success in the success of others, and so that has gotten us a long way. I will say this: as I look at the challenge that Secretary Work has laid out so effectively, even now I wonder, is JAIC 2.0 enough? Are we moving fast enough? Are we moving fast enough to create enterprises of capability and overcome stovepiped developments? Are we moving fast enough to really change our operating model to data-driven, with data visibility across the department? Are we moving fast enough in integrating innovative technology into the department? And sometimes I lie awake at night and say the answer is no. That challenge, and feeling the hot breath on the back of our necks, is what keeps the JAIC motivated and keeps us working hard every day, because we recognize how big this is, the scale of the Department of Defense, and how necessary this transformation is at scale. Thanks for the question. That was great.

Robert Work: (59:08)
Jackson, every now and then somebody asks me a question like yours, and I go, “God, did I really say that?” But at the time, what I was saying is that we really believe we’re going to build the department around the capabilities of AI and AI-enabled autonomy, and nuclear reactors made a good analogy: you’re going to build a submarine around the reactor, and you’re going to have the people who understand everything about how that reactor works and how it interfaces with all the other systems on the submarine. We’re going to make sure that we pick the people who are in charge. We’re going to set the standards. No one can touch the standards except for us. And so at the time, I was saying there are a lot of advantages to this.

Robert Work: (59:56)
But over the last two years, working with 14 other brilliant commissioners, the recommendations that we put into the commission report, I’m fully behind them. And I personally think that if you use … Well, I’ll just lay my cards on the table. We thought about this as a blueprint. We said, look, you really shouldn’t look at all of our recommendations and say, “Hmm, I kind of like that one. I’ll pull that off the wall.” You have to do them all together to get the effect that the commission feels is important. So right now, I would say I’ve changed from the nuclear reactor model to the National Security Commission on Artificial Intelligence model.

Robert Work: (01:00:41)
And I would just like to say thanks again to all of the people who listened in. The report is voluminous, over 760 pages, but our staff, which is a world-class staff, did everything they could to make it interactive, so you can go into that final report and find the information that you would like. There are so many recommendations; this is why I have so much paper. I mean, I can’t keep track of all of the recommendations in the report. I need to be reminded of them. But I would ask all of you to read the report, because we feel it is so important for our economic competitiveness and our military competitiveness. And I want to thank you for hosting us today.

Michael Groen: (01:01:26)
Yes, sir.

Robert Work: (01:01:26)
And allowing us to kind of pitch our product.

Michael Groen: (01:01:31)
Thank you for the 760-page to-do list, sir.

Moderator: (01:01:36)
Great. We’re out of time, but for those of you that we didn’t get to with questions, please submit your questions to OSD Public Affairs, and we can answer those. So thanks to everyone on the lines and everybody here today for attending, and thank you very much. (silence)

Robert Work: (01:02:05)
Thank you, sir.

Michael Groen: (01:02:15)
Well done, sir. I forgot. I haven’t stood up in front of them like this for a long time. I have some compression socks that I always try to …

Robert Work: (01:02:22)
Oh, yeah. That’s a good idea.

Michael Groen: (01:02:25)
I forgot my compression socks.

Robert Work: (01:02:28)
Yeah. I need to pick up that habit, because I hate standing in one place for a long time. [inaudible 01:02:42]. Are you going to do a [inaudible 01:02:56]?

Moderator: (01:02:56)
[inaudible 01:02:56].

Robert Work: (01:02:57)
Thank you, sir.

Moderator: (01:03:00)
Thank you, gentlemen. [crosstalk 01:03:00].
