Transcript: Digital Transformation of the Military with Michèle Flournoy, Eric Schmidt & Brandon Tseng

MR. IGNATIUS: Welcome to Washington Post Live. I’m David Ignatius, a columnist for The Post.

Today we're going to focus on the digital transformation of the military with several very knowledgeable guests. We are going to begin with Eric Schmidt, the former chief executive of Google, who has just co-authored a book with Henry Kissinger and MIT's Daniel Huttenlocher, called "The Age of AI: And Our Human Future."

Eric will be joined by Michèle Flournoy, the former Under Secretary of Defense for Policy in the Obama administration and currently managing partner of WestExec Advisors, which tries to bring smaller high-tech companies into the sometimes-overwhelming world of Pentagon procurement.

Welcome to both Eric and Michèle.

MS. FLOURNOY: Good to see you, David.

MR. SCHMIDT: Thank you, David, and good to see you, Michèle.

MR. IGNATIUS: Eric, let me begin with your book. You and Dr. Kissinger and your co-author foresee, and I'm quoting here, "a class of technology that augurs a revolution in human affairs," and perhaps, as Kissinger has written previously, the end of the Age of Enlightenment as we think about it. I'd like to ask you to summarize your and your co-authors' biggest concerns about AI in our political and cultural life, and especially in military competition.

MR. SCHMIDT: Well, first, thanks to The Washington Post for doing this with us.

The book basically says that AI is going to be incredible, for all the reasons that everybody knows. It's going to transform biology--we have examples of new drugs that humans could never have designed--new materials, much safer, much stronger, solutions to climate change because of scale and the way AI works. I could just go on and on and on. It is also a wave that's taking over our entire industry. So AI will be something that will be around you, whether you like it or not. Everything will have it embedded in it.

In the book, we also say that we are playing with fire, in the sense that we are changing assumptions that humans have made for a very long time. In the case of military conflict, one of the core assumptions is human decision time, and in the book, we speak about the problem of the compression of time. And in particular, since these AI systems are neither reliable enough nor predictable enough--they have emergent behavior and they are still learning while they are doing things--we have a real problem with understanding what they're going to do, and it can be destabilizing in a military grand strategy sense.

We also say that AI will be used, or misused, by our opponents to, for example, change the misinformation space. In other words, we already saw this with election interference. You could imagine this at a scale that's inconceivably large all around us. That has got to get addressed.

And we also talk about the definition of what does it mean to be human. We're very concerned that we've never had a human-like but not human intelligence to deal with that's similar to our own. We've always been the top dog, if you will, in the intelligence hierarchy. And now the reason we think it's a new age and not just a new technology is because humans will grow up in the presence of these new AI capabilities which will be different from human but also very powerful and very important.

MR. IGNATIUS: It's a very provocative book. I urge people to take a look.

I want to turn to Michèle and ask you about something you wrote in Foreign Affairs back in June. The title of the article was "America's Military Risks Losing Its Edge." Your argument is basically that we are locked in inertia and legacy systems for our weapons. Tell us what you think needs to be done to prevent that scenario of America losing its edge.

MS. FLOURNOY: Well, we are the best military in the world and we've long thought of ourselves as that. But if we simply rest on our laurels, that won't remain the case. We are in a real competition with China, in particular, but also other powers like Russia, who are making major technological investments that will change how we're able to prevent conflict, to deter conflict, and, if necessary, fight in the future.

And so we have to invest in new technologies and operational concepts, new ways of thinking, new ways of doing business, if we are going to keep our edge. And the name of the game here really is preventing conflict, deterring conflict with another nuclear-armed power. And so what I argued is that we really need to be moving much more quickly, making investments in a number of key areas, including artificial intelligence. And here I would just refer everybody to the recommendations of the National Security Commission on AI, which Eric co-chaired with Bob Work, probably the most important commission report since 9/11, which essentially lays out a roadmap for how the nation can keep its edge and how we can leverage AI responsibly in the military domain.

And I just emphasize that word "responsibly." This is going to be a real challenge. We have to be responsible. We have to be ethical. And there are certain applications of AI in the military space that will not be consistent with American values, or our interests, for that matter, and we've got to sort through those.

MR. IGNATIUS: Eric, you write in your book that, and I'm quoting here, "War has always been uncertain, but it has been guided by one logic as well as one set of limitations, that of humans." I want to ask you about an AI-driven war. One of the things that frightens me as I think about it is that linkages in the chain of escalation could be tighter. Dr. Kissinger and other strategists of nuclear war have always feared those tight linkages driving us toward conflict.

I want to ask you about that, and I also want to ask you about whether you think AI-driven warfare will focus on machines killing other machines or machines killing humans.

MR. SCHMIDT: We're just at the beginning of the answer to this question. We started with the question of proliferation. So if you assume that AI will eventually be very powerful, do we have a proliferation problem? And the answer is yes, because we and our opponents are all tracking together in a virtual arms race where we're essentially competing and competing and competing to build this out. Now, today those arms are not focused on each other, but they could easily be, because the technology is dual use. So that's the first problem.

The second problem is you have to have a theory of how war will emerge, AI-enabled war, and we don't really know. If you talk to the military, and Michèle is an expert at this obviously, they'll say that the first war is always in space and in communications. It's cutting off the opponent's communications. We saw this in Georgia and a couple of other conflicts involving the Russians a few years ago. So let's assume we're going to have that happen. Let's assume that we survive that initial attack. Now you have a cyber war, and the decisions in a cyber war have to be made faster than human time. So even before we get to the robots killing people, we have this massive set of infrastructure questions about how quickly things attack, who controls them, and so forth.

When you get to actual automation, the consensus of our National Security Commission was that we wanted AI weapons to be human guided, that improvements in precision are welcome--because a lot of deaths in war are collateral in the sense of unintended or innocent victims and that sort of thing--but that wars in the future are likely going to be significantly deadlier because of AI, because of targeting and that sort of thing.

We took a while looking at this question of command and control, which is what the military would really like. The current command and control systems are so complicated and so poorly built that it's unlikely that in the short term AI can really solve that problem. But the military goal eventually is to have a system that watches everything and gives them alerts, and that makes sense to us.

All of that is straightforward; the issue to worry about is the destabilizing nature of launch on warning. And so what's going to happen, in our view, and we say this in the book, is that eventually you're going to have two sides, both of whom have unknown strengths and unknown weaknesses in their AI systems. One will get jumpy and alert the other that they're about to attack, and the other will actually cause the attack. That's the Strangelove scenario. That is incredibly destabilizing, and it's got to get addressed now, in terms of military strategy.
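To make that dynamic concrete, here is a minimal toy model, not from the book or the commission report: two early-warning AI systems screen ambiguous signals during a crisis, and either one misfiring counts as a "jumpy" alert. Every rate and the time window below are illustrative assumptions.

```python
import random

def p_accidental_escalation(false_alarm_rate, signals_per_day, days, trials=5_000):
    """Monte Carlo estimate of the chance that at least one side's
    early-warning AI raises a false alarm during a crisis window."""
    signals = int(signals_per_day * days)
    hits = 0
    for _ in range(trials):
        # Two sides, each screening `signals` ambiguous events.
        if any(random.random() < false_alarm_rate for _ in range(2 * signals)):
            hits += 1
    return hits / trials

# Illustrative inputs: a 0.01% per-signal false-alarm rate, 100 signals a
# day, a 30-day crisis. Closed form for comparison: 1 - (1 - p)^(2*signals).
print(p_accidental_escalation(1e-4, 100, 30))
print(1 - (1 - 1e-4) ** 6000)  # ~0.45
```

On these made-up numbers, the odds that one side's system gets jumpy at least once approach a coin flip, which is the destabilizing property Schmidt is pointing at.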

MR. IGNATIUS: Eric, let me just follow up with an additional question. You and I have talked in the past about an effort that you led at the Pentagon a couple of years ago to try to develop ethical AI standards. You worked with some of the leading computer scientists at the top companies and universities around the country. How did that go, and did that leave you more or less confident that you could build ethics into this AI future?

MR. SCHMIDT: Well, I was really pleasantly surprised at the military. We issued our report, which, by law, had to be made public, and then the military considered it and they actually endorsed it and adopted it as their AI framework for military action. So I was really impressed with the sincerity with which our military leadership takes its moral and legal duties. There is no question that they care about this.

It's interesting that in the conversations that we had we always assumed--this is just an assumption--that in a real, hard, kinetic war all of the rules would be thrown away, and that basically people would do whatever it took to win. And I'm not talking about the U.S. I'm talking about our opponents.

If that's true then that's an inherently destabilizing assumption, and since we don't know, we have not had these sorts of AI wars, we don't have a precedent. Dr. Kissinger talks a lot about the nuclear aspect, because, of course, he was at the dawn of all of these policies and helped define them. And he points out that the doctrine, where there have been only two uses, Hiroshima and Nagasaki, and no other uses of nuclear weapons, means that we have not proven these doctrines. We just think they're correct. And since we have not engaged in these automatic AI kinds of wars, we don't really know how humans will behave in the middle of them, and that's, unfortunately, going to be something we're going to have to learn the hard way.

MR. IGNATIUS: Michèle, you have been one of the top officials at the Pentagon. You know how our military works intimately. Let me ask you two questions. First, what do you think is the risk, or likelihood, I want to say, of the kind of accidental AI decision-making, the destabilizing effect of AI in warfighting? And second, in your current work with WestExec, are there examples of where you see companies that have good ideas in this AI space that just can't break through the wall of Pentagon procurement to get their ideas examined and taken seriously?

MS. FLOURNOY: Yeah. So I think the biggest risk, David, is that the policy and the strategy and the approach to international discussions about this with other countries don't keep up with the technological evolution and adoption. That's the biggest risk, that we find ourselves with the technology ahead of our own thinking and conceptualization of how we are going to manage this and how we are going to avoid some of the biggest risks that Eric cited.

The good news is I do think we are still many, many years away from what Eric is describing in terms of, you know, robot-on-robot AI-driven warfare. When I look at the Pentagon, they are starting with applications like, you know, can we use AI to sort through the overwhelming amount of intelligence and information, classified, unclassified, that we get from a zillion different sources, and separate the wheat from the chaff? Find the insight, find the thing that's going to help us see that the Russians might be preparing to move against Ukraine, so that we can get out there and use diplomacy to try to prevent that. Using AI to help with cyber connections and communications and command and control; using AI for things like predictive maintenance, making better resource allocations, and using taxpayer dollars to support readiness.

I mean, these are the kinds of applications that I'm seeing, and those are the companies that are getting traction. It's really, I would say, arguably benign uses of AI that are very far away from lethal applications. One of the people you'll hear from today is Brandon Tseng from Shield AI, who is using AI to enable special operations teams to map who is inside a potential enemy compound before the first guy goes through the door, and that will hopefully greatly reduce the risk to our men and women in uniform who are in harm's way.

So those are the kinds of near-term applications that I think are progressing, not fast enough maybe, but I don't think they pose the kinds of risks that Eric is talking about when we look to the future and where this could go, over time.

MR. IGNATIUS: Eric, the history of arms races is that countries often misperceive their adversaries' strengths and overstate them, sometimes the opposite, but in many cases overstating. I want to ask you, honestly, as you look at Chinese and Russian capabilities in this AI space, do you think that we should worry that we're behind, or do we seem to be keeping up in an adequate way?

MR. SCHMIDT: So we looked at this question very carefully. We concluded that the Russian teams are quite good but relatively subscale, in other words, just not enough of them. This is a scale business, in terms of people and deployment. But we were quite alarmed by the build-up on the Chinese side of core technology, money, and programs. And this is not just a military thing. We concluded that we were somewhat ahead--and I will say that I think "somewhat ahead" is sort of a year kind of number, not a ten-year kind of number--but that at the moment China has prioritized this very high. They are producing more papers, more PhDs. The most recent analysis indicates that the papers that they're producing are of a similar quality to the very good papers from the West. And so I think a fair reading of this is they are very close to us and their goal is to beat us or catch up and exceed what we're currently doing.

In our report we offer a long list of ways to address this, with very strong recommendations including increasing funding, a national research network, working with our partners in the West, as Michèle mentioned, working in ways consistent with our own values, et cetera.

One of the things that's paradoxical here is that you want to think of a military response, but, in fact, most of the work is being done in the private sector, and that's true in the commercial and the military spaces. So we call for very, very tight links, as Michèle mentioned, between the military and small companies that can bring this technology to them.

Overall, what I would say is we have time to redouble our investment in this overall space. We made an estimate that this correlated with $50 trillion of expansion of businesses and stock market wealth if we win this battle, and winning is defined as staying ahead. We also said that we need to stay at least two generations of semiconductor cycles ahead of China, which we are today, but they are investing heavily in that space. That's an argument for the government to help the semiconductor industry with more funds to build domestic plants, et cetera. So we've made all those recommendations.

At the moment, we are in a few-year period where the outcome to your question will be determined. Once the gap is really opened it will be very hard for us to catch up unless we resolve it now. I view this as a national emergency.

MR. IGNATIUS: So, Eric, just a brief follow-up. What's your judgment about President Xi Jinping's recent attacks, through regulatory bodies, on China's biggest and most successful AI companies, Alibaba most notably? What effect is that going to have on China's ability to compete? That could have an intimidating effect on the very people that they're going to need.

MR. SCHMIDT: Well, that is the narrative that a lot of people in the West have. So I asked some of my Chinese friends who are very nationalistic and they said, "You guys are wrong. There are a gazillion of these entrepreneurs, and China will regulate the internet and the excesses because the West has not." The way they say it is, "You guys have all sorts of problems. You are not managing your people right. The democracies are failing," is what they say, and furthermore, "You're not managing the internet correctly." The combination of the privacy rules that have been put in place in China, which are effective November 1, and the emergent algorithmic regulation process that China has undertaken will, they claim, be used to make a safe and appropriate internet, with tremendous opportunities for innovation for the next generation of entrepreneurs underneath that. Of course, we all know this is their propaganda, and we don't know if it's true or not.

But what I thought was interesting was we have always treated China as the Wild West of the internet, in terms of commercial products, but they are grappling with the regulation issues that we have here in the West, and they're going to solve it in their own way. Whatever China does with respect to regulating the internet and AI and so forth, they'll do it consistent with their doctrine, which includes making sure that the CCP remains in power.

MR. IGNATIUS: Michèle, I want to turn to another issue that involves China. When strategists think about the danger of war in the future, they often focus not on China in general but specifically on a contest between the United States and China over the future of Taiwan. I want to ask you, you have thought deeply about this for years: What do you think the United States should do and can do to help Taiwan prepare for and deter such a conflict, and how explicit do you think we should be in giving some clarity about what we would do if China attacked?

MS. FLOURNOY: So I do think that we are in a period where the danger of miscalculation is very real. As Eric noted, if you go to Beijing and you turn on the evening news, the narrative of U.S. decline--you know, we are inwardly focused, we are a mess, democracy isn't serving the people, we are polarized, we are down, we are out, we're not getting up--I mean, obviously, I don't believe any of that, but that's the Chinese narrative. And if they really start to believe that, it could cause them to miscalculate in terms of leaning into a more assertive or even aggressive stance.

I don't think that's likely in the near term, because Xi is really focused on consolidating his power and having his next term of leadership validated next fall, in late 2022. But we need to use this time, as the United States, to shore up deterrence. So it means showing up in the region--diplomatically, foremost, but also militarily. It means clarifying our interests and our values and what we are committed to defending. And it means really making sure we have the capabilities in place to create enough doubt in the Chinese mind about their ability to succeed at low cost that it makes them, you know, defer the decision to another day in the future.

So there's a lot of work to be done there. I see some signs that the Pentagon, and the Biden administration more broadly, are leaning forward in this direction. But there are also things I'm looking for that maybe aren't happening as quickly as they should. So there is a window here, and it's all about conveying our resolve and our capabilities to the Chinese.

I also think, I would just note, something that was really important in Eric's book, we need to be talking to China and to Russia about some of the more dangerous, escalatory scenarios, the ways in which we could get into a crisis and have things get out of control. We need to talk about those scenarios directly and try to take some of the bad ideas of how to use cyberattacks or anti-space attacks off the table as much as possible.

MR. IGNATIUS: Eric, let me close, because we're running out of time, with a question for you that follows directly on what Michèle was talking about. We now have strategic stability talks with Russia that are just beginning, that, in theory, will explore new technologies, new weapons. Should discussion of AI warfare be part of those strategic stability talks, and second, how on earth do we begin such a conversation with China?

MR. SCHMIDT: The answer is yes, and I have no idea how to get started with China, but I think it's crucial. And the core thing we have to do is collectively design the equivalent of the rules that were put in place after both the Cuban missile crisis and the original Soviet nuclear test in 1949.

So during that period, they worked out what the language was. I was shocked. Michèle, of course, knew this: whenever you launch a missile, you let all the other governments know you're doing it, because that way they don't think it's a threat, and they also use it to tune their satellite launch-observation systems. So think of that as a bargain between competitors. It's an agreed-upon bargain to stand down and lower, essentially, the launch readiness level.

We're going to have to do the same thing. We don't have the language, let alone the concept, to even describe what that looks like for AI-enabled warfare. In our book we call for these issues to be discussed not just by the technical people and the diplomats but by psychologists, behavioral scientists, and so forth. These issues are too big for any single group to dominate them. We got ourselves into the current predicament in our democracies because we let the tech companies, including the ones I've been associated with, do roughly whatever they thought best, following their own incentives. These AI systems are so powerful that they need to be collectively discussed and collectively managed, on a local as well as a global basis.

MR. IGNATIUS: Absolutely fascinating conversation. We'll have to leave it there. I want to thank Michèle Flournoy and Eric Schmidt for joining us.

I'll be back in a few minutes with a former Navy SEAL who now helps run a company on the cutting edge of AI in national security uses, so please stay with us.

[Video plays]

MS. MESERVE: Hello. I'm Jeanne Meserve. The transformation in warfare is enabled, of course, by technology, and the defense industry which produces that technology is changing as well. Here with me to discuss this is Wes Kremer. He is President of Raytheon Missiles and Defense. Great to have you with us.

MR. KREMER: Well, thanks. It's my pleasure to be here today.

MS. MESERVE: The defense industry, of course, has always been known for innovation. Talk about the shift towards digital design and the difference that is making.

MR. KREMER: Well, you know, digital design goes back a long time. If you look on the software side, we started with Agile and Scrum and we evolved to DevSecOps. I think what's different now is two things. One, we are seeing an emerging threat, and we're seeing capabilities from other countries that are, in some ways, potentially exceeding what we have, and that's driving us to go faster. And at the same time, the technology to enable all of these digital tools, to where you could actually do a design in a completely digital environment, is also rapidly maturing. And I think it's the nexus of these two things that really has the defense industry right now on the leading edge of digital design, or digital engineering.

MS. MESERVE: And so does this transformation reduce costs and does it get technologies to market faster?

MR. KREMER: Yeah, Jeanne, I think that's really the part that's most exciting about this: both of those. You know, we're seeing opportunities for significant reductions in schedule, which obviously translates to cost. I'll give you one simple example. On our recent flight test where we successfully flew a hypersonic scramjet engine, what we saw was that the results of the flight test almost perfectly followed what we had modeled and predicted. And so now it raises the question of whether you can go faster to fielding. Do you have to do as much flight testing as maybe we did in the past? Can you truncate that with a smaller number of flight tests that would cost less and accelerate your schedule to be able to field the capability? And I think that's just one small example of where we're starting to see the benefits of digital engineering.
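The logic Kremer describes reduces to a simple comparison between the digital model's predictions and flight telemetry. Here is a minimal sketch of that comparison; the function names, data, and tolerance are entirely hypothetical, not Raytheon's actual tooling or results:

```python
import math

def rmse(predicted, measured):
    """Root-mean-square error between model predictions and telemetry."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(measured))

def model_validated(predicted, measured, tolerance):
    """True if the digital model tracked the flight within an agreed tolerance."""
    return rmse(predicted, measured) <= tolerance

# Hypothetical acceleration profile (in g) from the model vs. one flight test:
model_g  = [1.0, 2.2, 3.1, 3.8, 4.0]
flight_g = [1.1, 2.1, 3.2, 3.7, 4.1]
print(rmse(model_g, flight_g))                   # ~0.1 g of error
print(model_validated(model_g, flight_g, 0.15))  # True: a candidate to truncate tests
```

The program-level decision, which tests can be cut, rests on where that tolerance is set, which is a contractual and safety question rather than a software one.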

MS. MESERVE: Given the integrated nature of digital transformation, has it changed the relationship between government and industry, or should it?

MR. KREMER: Well, I think it's going to have to, and I would say that, you know, the DoD is certainly a partner in this, and they've been the ones driving for us to do this in industry. So it's definitely a partnership. But we're going to have to make changes on both sides. You know, on the government side we're going to have to contract differently. You can't specify an old set of contract deliverables when you're actually going to operate in a digital environment. So, for example, today, on a critical design review, the artifact is literally thousands of pages of drawings and PowerPoint slides, whereas in a digital environment it's a model. It's a 3D model of the system that's then agreed upon.

On the industry side, we have to update our business processes, because they are built around those old ways of doing things.

And finally, something that we have to jointly work through is how to protect intellectual property. I mean, if you're going to open up a design, and everything that your suppliers do and everything that industry does is all in one model, then we have to figure out how you protect intellectual property.

And so those are some of the things that we're all trying to work through right now.

MS. MESERVE: As we continue to move towards these digital battlefields, do you have any concerns about how the defense industry itself is adapting or changing?

MR. KREMER: Well, you know, in many ways we are a reflection of our customer, and there's certainly bureaucracy in the government, and in any large defense contractor there's bureaucracy as well. So what I was talking about earlier, this need to break down those barriers, to be able to contract in a different way, to be able to protect intellectual property, to be able to refine our business processes such that we can go faster, and, through this, to be able to integrate small businesses and emerging technology companies: those are all challenges that we have to meet.

But I think that digital design and this digital transformation is a way to do that, to not only bring things to market faster but to do it at a reduced cost, and to bring leading technologies to the forefront quicker.

MS. MESERVE: Thank you, Wes Kremer, President of Raytheon Missiles and Defense. Now back to The Washington Post.

[Video plays]

MR. IGNATIUS: Hello. For those of you who are just joining us, I'm David Ignatius, a columnist at The Washington Post. I want to continue our program on the digital transformation of the military with Brandon Tseng, the co-founder and chief executive officer of Shield AI, a defense technology company. Welcome to Washington Post Live, Brandon.

MR. TSENG: Hi, David. Great to see you again. Thank you for having me.

MR. IGNATIUS: Good to see you again. So you and I met last spring and discussed your company. I wrote a column about it last May that people can find online. Just share with people your story. You deployed to Afghanistan as a Navy SEAL. Your unit suffered unnecessary casualties in a deployment in Uruzgan Province, where you couldn't target a building because you didn't know if civilians were inside.

When you got out you knew that AI could solve the kind of problem that you and other American troops had faced in these combat zones. Tell us what you did and how Shield AI is trying to solve the problems you encountered in the field.

MR. TSENG: Yeah, sure. I guess if I could start, Shield AI builds autonomy for everything that flies. We're applying self-driving car technology to aircraft with a core focus on defense and helping the military operate without GPS or communications, which, as Eric Schmidt acknowledged in the previous segment, will be the first to go in any sort of conflict. This is what the military calls a denied environment, and it's arguably the largest challenge the military has as it thinks about great power competition with China.

But denied environments are something that I had a lot of experience with as a Navy SEAL, specifically going inside buildings where GPS didn't work, communications could fail you, and there were extremely high threats. Whether it was barricaded shooters, dynamic shooters, or house-borne IEDs, these are just extremely dangerous environments where autonomous aircraft can play a massive role, whether it's inside a building or over a large geographic area that is covered with surface-to-air missiles.

And so when I was first starting Shield, I had a number of conversations with folks from the special operations community all the way up to the pilots, and this denied environment problem became self-evident the more I talked to folks. Whether it was a Navy SEAL, an Army Special Forces soldier, a Ranger, an F-18 pilot, or an F-22 pilot talking about the challenges that they experienced and that they saw coming, it was how do we operate in denied environments, and this new technology, AI, self-driving car technology, was really a solution that scaled up very nicely for the military's problems.

MR. IGNATIUS: So I should explain for our viewers, when you say "denied area," Brandon, what you mean basically is that there is an EW, electronic warfare, cloud, if you will, that's preventing the system from communicating or taking directions from some central driver. You have a product that you call Hivemind that your people showed me that puts the AI thinking part at the edge, so to speak, with each individual quadcopter or even a larger, Predator-like drone. Explain how that edge AI technology works in practice and why it's important.

MR. TSENG: Sure. So again, very similar to self-driving car technology, and the best way to think of that is there are sensors on board an aircraft, a robot, a drone, and these sensors are taking in information about the world, much the same way that you or I use our sensors--our eyes, our nose, our sense of hearing, sense of smell, sense of taste--to take in information about the world. And then there is a computer on board that processes the information, creates a map of the world, or a perception of the world, and informs the aircraft, the drone, the robot of what to do next. Very similar to you or me: our brain takes in all our sensor information, and then we make decisions about where to go in the world. And that is essentially what Hivemind is doing on board these aircraft.
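That sense-map-decide loop can be sketched in a few lines of code. What follows is an illustrative stand-in, not Shield AI's Hivemind: a simulated drone on a grid senses only its adjacent cells, folds the readings into its own occupancy map, and moves toward the nearest unexplored "frontier" cell, entirely onboard, with no GPS and no outside communications. The grid, the sensor model, and the frontier heuristic are all assumptions for illustration.

```python
from collections import deque

UNKNOWN, FREE, WALL = "?", ".", "#"

def neighbors(r, c, rows, cols):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield r + dr, c + dc

def step(world, known, pos):
    rows, cols = len(world), len(world[0])
    # 1. Sense: onboard sensors reveal the four adjacent cells.
    # 2. Map: fold the readings into the drone's own picture of the world.
    for nr, nc in neighbors(*pos, rows, cols):
        known[nr][nc] = world[nr][nc]
    # 3. Decide: BFS through known-free space to the nearest frontier
    #    (a free cell that still borders unexplored space).
    queue, parent = deque([pos]), {pos: None}
    while queue:
        cur = queue.popleft()
        if cur != pos and any(known[nr][nc] == UNKNOWN
                              for nr, nc in neighbors(*cur, rows, cols)):
            while parent[cur] != pos:   # 4. Act: take the first move
                cur = parent[cur]       #    on the path to the frontier.
            return cur
        for nxt in neighbors(*cur, rows, cols):
            if known[nxt[0]][nxt[1]] == FREE and nxt not in parent:
                parent[nxt] = cur
                queue.append(nxt)
    return pos  # nothing left to explore

world = [list("...#"), list(".#.."), list("....")]  # ground truth, never given to the drone
known = [[UNKNOWN] * 4 for _ in range(3)]           # the drone's onboard map
pos = (0, 0)
known[0][0] = FREE
for _ in range(20):                                 # sense -> map -> decide -> act
    pos = step(world, known, pos)
print("\n".join("".join(row) for row in known))     # fully reconstructed floor plan
```

Running it reconstructs the complete floor plan of a "building" the drone has never seen, which is, in miniature, the demonstration David Ignatius describes next.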

MR. IGNATIUS: I should tell viewers I watched a drone equipped with this technology essentially do surveillance of a building, a building it had never seen, with no guidance from outside. It just went room to room, mapped it, and then gave the hypothetical warfighter outside a clear picture of what was inside.

I want to ask you about other things that Shield, your company, is doing, in part through acquisitions. You just acquired, from a company called Martin UAV, something called V-BAT, which is a vertical takeoff and landing system. Tell us about that, what advantages it has, why you thought it was attractive, and how you hope to use your AI technology to make it smarter and more effective.

MR. TSENG: Sure. The V-BAT is a really incredible aircraft. It is a Goldilocks aircraft. There's really no other aircraft out there like it, of its size and class. It stands vertically, takes off like a SpaceX rocket, and lands like a SpaceX rocket as well. And it has up to 12 hours of endurance while carrying 20 pounds of payload, and that can be traded: if you carry more payload, you fly for fewer hours, and if you carry less payload, you can fly for longer. And it is replacing many of the legacy aircraft that the U.S. military uses today across many of their different programs of record.
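To first order, the payload-endurance trade Tseng mentions is a simple function. A minimal sketch anchored to the two figures he quotes, 12 hours at 20 pounds; the linear slope is purely an illustrative assumption, not a published V-BAT specification:

```python
def endurance_hours(payload_lb, base_hours=12.0, ref_payload_lb=20.0,
                    hours_lost_per_lb=0.2):
    """Estimated endurance as payload deviates from the quoted reference load.
    The 0.2 h/lb slope is a made-up illustrative number."""
    return base_hours - hours_lost_per_lb * (payload_lb - ref_payload_lb)

print(endurance_hours(20))   # 12.0 h at the quoted 20-lb reference
print(endurance_hours(30))   # 10.0 h: heavier payload, shorter flight
print(endurance_hours(10))   # 14.0 h: lighter payload, longer flight
```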

And when we looked at the aircraft, when we looked at the asset, we took a very broad view. We studied all the other competitive aircraft out there, and when we eventually made the decision to pull the trigger, it had to be a great piece of hardware, which it is. But it also had to be very adaptable to our AI and autonomy stack, Hivemind, because our focus over the next two years--and we'll operationalize this by 2023--is to integrate Hivemind on the V-BAT. And what that will do is allow the aircraft to swarm intelligently with other V-BATs. It will enable GPS-denied and communications-denied flight. It will enable nap-of-the-earth flight profiles, and then it will also enable the ability to learn new mission sets, anything from escorting an aircraft to penetrating an integrated air defense system.

And for us it was really the next evolution on our roadmap. You know, proving something out on a quadcopter is what we did initially, proving that Hivemind could run on that, and now we're taking the next step in operationalizing Hivemind on board a 135-pound aircraft. And then, you know, our aim, as Eric Schmidt discussed earlier, is to really have Hivemind running on all aircraft at some point in time. You know, that will take a very long time to get to, but those are the steps that we're taking.

MR. IGNATIUS: I want to ask you specifically, Brandon, about putting your AI system, Hivemind, in aircraft. You have made another acquisition that interested me when I looked online: a company called Heron Systems, which is in effect developing AI pilots for aircraft. And reading about that, I read about DARPA's sponsorship of what sounded like basically AI pilot dogfights, where different AI pilots compete to see which one can shoot down more of the adversary.

Just explain to people who don't know anything about this space what's going on, and are we going to get to the point where we'll have F-35s that will be driven by an AI pilot, not a human pilot?

MR. TSENG: Yes, and yes, I'm happy to give a quick overview. At least that's the aim, and I think that's the direction that we're going. And I do want to state, I believe human pilots will continue to play an incredibly important role.

But we bought Heron largely to pursue our objective of getting Hivemind and AI pilots on all aircraft. And Heron Systems had proven out its AI in a competition sponsored by DARPA, where the task was to design an AI to take on human pilots. They called it AlphaDogfight, and what Heron proved out was that they had built an AI that was dominant. It could defeat extremely experienced pilots. Its win record is in the 99th percentile going up against humans and other AI pilots. And so for that reason we made the acquisition. But that's a longer play: the aim is to get our AI, Hivemind, on board these larger, more exquisite aircraft.

MR. IGNATIUS: So I just want to pause and ask our viewers to reflect on that. We have computers that can now beat humans in chess. We have deep learning systems that can beat humans in a far more complicated game called Go. And if I understand what Brandon just said, we're going to have AI systems, computers, that will be able to outperform human pilots in a dogfight in some future combat scenario. Is that right?

MR. TSENG: That is correct, and Eric spoke to why AI is such a powerful technology. It is because it unlocks super-human performance. And what's really exciting is for the first time in history you can actually take that AI and you can put it on physical systems.

So we've had our quadcopter beat teams of clearance operators in the clearance of a building, based on time on target. We've had Heron Systems beat F-16 pilots in air-to-air combat.

And so AI is an incredibly powerful tool. You can train it to do certain things very, very well, and I think that will unlock transformational capability for the warfighter, for the military, but at the same time it unlocks a whole host of other capabilities as we think about the commercial sector.

MR. IGNATIUS: Brandon, the last quick question. Our previous panel, with Eric Schmidt and Michèle Flournoy, really stressed the dangers of AI. You have served in combat as a Navy SEAL. You know the face and the cost of warfare. Do you think that AI can make future warfare less lethal for human beings?

MR. TSENG: Yes. Yes, I do think it will make warfare less lethal for human beings. First, I think it's nothing like Hollywood, which is where everybody's mind goes to. It's not "Terminator" or "The Matrix." But on a serious note, as a former SEAL, I know how fundamentally human the decision is to take a life in combat. I also understand the role that technology, AI, and autonomy can play and how it can be a literal lifesaver for a warfighter or for a civilian that's stuck in a conflict zone.

And I also understand the care with which our military deploys combat power and powerful technologies, and I want this audience to understand, or help them realize, how much care there actually is, and how much thought and how much process go into wielding these technologies, wielding these capabilities on the battlefield. It is an astounding amount of care and thought and process, and what AI will enable is our warfighters to operate with greater speed, accuracy, persistence, precision, reach, and coordination, and ultimately help protect our warfighters, help protect civilians, and prevent collateral damage.

MR. IGNATIUS: Unfortunately, we are out of time. We'll have to leave it there. Brandon, thank you so much for explaining what your company, Shield AI, does, and we really appreciate you coming.

As always, we'd ask our audience to check out the interviews we have coming up. Please head to WashingtonPostLive.com for more information and to register for the programs that interest you.

I'm David Ignatius. Thank you so much for joining us.

[End recorded session.]
