- Paul Scharre, a former defense official, argues AI dominance will determine the next global power.
- The battle for AI power will revolutionize world militaries and economies.
- His book, “Four Battlegrounds: Power in the Age of Artificial Intelligence,” was released on February 28.
The global battle for AI dominance is underway, according to author Paul Scharre, a former Army Ranger and current VP and director of studies at the Center for a New American Security — a think tank specializing in national security issues.
Scharre previously served as a strategic planner at the Office of the Secretary of Defense, where he worked to establish DOD policies on unmanned and autonomous systems, emerging weapons technologies, and intelligence, surveillance, and reconnaissance programs.
In his latest book, “Four Battlegrounds: Power in the Age of Artificial Intelligence,” Scharre explores how the international battle for the most powerful AI technology is changing global power dynamics. That battle, he says, is a global competition over the best data, computing hardware, human talent, and institutions for adopting AI technology — a competition that will determine the next global superpower.
This conversation with Scharre has been lightly edited for length and clarity.
In your new book, you argue there’s a battle for global power going on in the form of a revolution brought about by artificial intelligence. What are the stakes of that battle?
So we saw during the Industrial Revolution that nations rose and fell on the global stage based on how rapidly they industrialized. Now, technology is this key enabler of political, economic, and military power. And I think that AI technologies are extremely powerful tools for a country’s or society’s ability to shape global progress and the international environment.
I do not want to live in a world where the Chinese Communist Party has that level of influence over global affairs. I find that concerning given their egregious human rights abuses at home, their bullying of their neighbors, their threatening of Taiwan and other countries in the region with military aggression, and their militarization of the South China Sea. And part of the risk is not just about economic, military, and political power, but the spread of China’s model of governance, their techno-authoritarianism, through adoption of their AI systems.
Other countries are increasingly emulating China’s laws and norms for how to use technology for surveillance and repression at home — some of the social software that sits on top of the hardware of this surveillance technology itself. And of course, this fits into a broader trend we’ve seen of democratic backsliding in a number of countries, so I do think that we shouldn’t take democracy for granted at home or abroad. And it’s really important that democracies push back against these authoritarian trends.
And it isn’t a typical battle, as we aren’t directly at war, so where is the action happening?
So the book explores the US/China rivalry in artificial intelligence, and it looks at artificial intelligence as a general-purpose technology, much like electricity or computer networks or the internal combustion engine. And like those other technologies, it has a wide variety of applications across society. The book concludes that there are four key battlegrounds in the global competition over artificial intelligence: data, computing hardware (or compute), human talent, and the institutions necessary to successfully adopt AI systems.
Probably the most foundational application of AI that will help advance national power is its widespread use to enhance economic productivity and scientific research. When AI technology is adopted in a wide range of industries and increases economic productivity, that’s going to have second- and third-order effects on national power by increasing GDP, and then allowing the nation to turn economic productivity into other, sometimes more tangible tools of national power — whether it’s the intelligence services building satellites and processing the intelligence information that’s collected, or building military forces.
What practical applications does AI have for the intelligence community or military operations?
AI tech has a lot of potential in intelligence applications. I think it’s quite likely that the US Defense Department and intelligence community use AI to analyze imagery, whether from satellites or drones, and, while they haven’t acknowledged this publicly, it’s very possible that’s playing a role in the information that the US is sharing with Ukraine. We have seen some other examples of AI technology being used directly by Ukrainian forces on the ground, particularly some of the civilian drone operators. And this really highlights how ubiquitous AI technology is: you don’t need to be the world’s most advanced military to use it, because it’s pretty widely available.
We’re also seeing in the war in Ukraine the importance of logistics, for example, and maintenance operations. The bulk of what militaries do on a day-to-day basis is move people and things from point A to point B — it looks a lot like what Walmart or Amazon does, it’s just what happens at the end that’s different. And so if militaries can improve their logistics, readiness, finance, personnel, and maintenance by 5%, the sum total of those effects in terms of military effectiveness could be quite significant and quite transformative.
And is AI transforming traditional warfare, as well?
As far as military advancements in warfare, there are also some examples. In the DARPA AlphaDogfight competition, the goal was to build an AI agent that could achieve superhuman performance against a human in simulated dogfighting, which is considered sort of the ultimate crucible for fighter pilots. To make a long story short, the AI succeeded: it went head to head against an experienced Air Force pilot and absolutely crushed the human pilot, 15 to zero — the human didn’t get a single shot off against the AI.
In particular, head-to-head gunshots are actually banned in training for human pilots because there’s a high risk of collision when aircraft are racing toward each other at hundreds of miles an hour. It’s extremely difficult to do in any case and requires superhuman levels of precision, but none of that was a problem for the AI agent. It could achieve these split-second shots while also avoiding a collision. And the AI agent learned to do this all on its own — it wasn’t trained to do that. The AI system that won was trained in a simulator, with over 30 years of simulated flying time, and this was one of the things it simply learned on its own from all of those years of simulated dogfights.
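What “learned on its own” means mechanically: agents like this are reportedly trained with reinforcement learning, where the system flies enormous numbers of simulated engagements and adjusts its behavior from a reward signal rather than from hand-written rules. The sketch below is a deliberately tiny, hypothetical illustration of that loop, a tabular Q-learning agent in a toy pursuit game; it bears no resemblance to the actual competition system.

```python
# Toy sketch of learning-from-simulation (hypothetical; not the DARPA system).
# A tabular Q-learning agent learns, from reward alone, to turn toward an
# opponent in a crude "pursuit" game.
import random

ACTIONS = [-1, 0, 1]   # turn left, hold heading, turn right
N_BEARINGS = 8         # discretized relative bearing to the opponent (0 = dead ahead)

def step(bearing, action):
    """Advance the simulation one tick: the opponent drifts randomly, our turn
    shifts the relative bearing, and the agent is rewarded only when pointed
    straight at the opponent (the position a gun shot would need)."""
    drift = random.choice([-1, 0, 1])
    new_bearing = (bearing + drift - action) % N_BEARINGS
    reward = 1.0 if new_bearing == 0 else 0.0
    return new_bearing, reward

# Q-table: estimated long-run reward for each (bearing, action) pair.
Q = [[0.0] * len(ACTIONS) for _ in range(N_BEARINGS)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(5000):                   # thousands of simulated engagements
    bearing = random.randrange(N_BEARINGS)
    for _ in range(50):
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[bearing][i])
        new_bearing, r = step(bearing, ACTIONS[a])
        # Standard Q-learning update from simulated experience.
        Q[bearing][a] += alpha * (r + gamma * max(Q[new_bearing]) - Q[bearing][a])
        bearing = new_bearing

# The greedy policy now turns toward the opponent. No rule ever said
# "point at the target" -- the behavior emerged from the reward signal alone.
print([ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[b][i])] for b in range(N_BEARINGS)])
```

The real systems substitute deep neural networks and a high-fidelity flight simulator for the table and the toy game, but the core idea is the same: behavior is discovered through reward over years of simulated time rather than programmed in.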
And what about use of AI outside of war?
I think there’s a variety of places where AI is likely to have pretty significant effects over time on economic productivity. One is, of course, specific AI applications that might improve finance or medicine or transportation or other industries — self-driving cars, once they reach the point where they’re really effective and viable on roads, are one example of this, and certainly improvements in medicine, like using AI for imaging, could be very beneficial across society. But one of the most exciting things about AI is just its ability to increase productivity across a whole wide range of places.
When I think of AI, I think of things like Clearview AI facial recognition, where law enforcement can run facial recognition scans on civilians. What are the surveillance applications of this kind of tech?
One really critical component of this global competition in artificial intelligence is the emerging struggle over how AI is used within countries for domestic security and surveillance, and the creeping spread globally of the model of techno-authoritarianism that China has pioneered. It’s very dystopian, tech-enhanced repression. China has half of the world’s 1 billion surveillance cameras, and they’re increasingly using AI tools like facial recognition, or gait recognition to identify who people are based on how they walk. And they’re connecting that with other types of data, like license plate data, calls, geolocation data on phones, and people’s purchasing behaviors, to monitor and track Chinese citizens.
And China is exporting a lot of this technology abroad; some 80 countries around the world have purchased Chinese surveillance technology. And perhaps even more troubling, we’re increasingly seeing China export to other countries the norms and laws for how to use this technology: the social software, if you will, that’s coupled with the physical hardware of the surveillance technology. And it’s deeply troubling because it’s not just that the technology is being used for human rights abuses in China and potentially can be used by other countries; one of the concerning things about AI-enabled oppression is that it enhances the state’s ability to monitor citizens and therefore accelerates the system of oppression itself.
What would a competing model look like?
Well, that’s part of the problem. One of the challenges that democracies face is that we don’t have that model yet. For facial recognition, for example, a patchwork system is developing in the United States, with different regulations on law enforcement use of facial recognition depending on where you live, and a number of different cities and states have passed bans on law enforcement use of it. I think one of the challenges democracies face is that the decentralized nature of power inside democracies means that coming up with a solution for tech governance takes longer. It takes longer because sometimes the quick answer is not the right one.
So what kind of guardrails do you think should be placed on this technology?
Over time, regulation of AI technology in some fashion, much of which will probably be sector-specific. The regulations for AI in medical applications will fall within the broader paradigm of how we regulate tools and safety for medical devices, and the way that we regulate AI in finance applications will follow from how we regulate trading and other financial activities. But I do think that AI doesn’t get a pass on regulation, and it’s worth reflecting on the fact that we only have clean air and water, and safe highways, and safe air travel, and safe food to eat in America because of government regulation of industry.
We also need some regulatory framework in place for very large models. I think that’s the conversation we need to start having, in part because these large models are in many cases dual-use. They could be used for generating text, having chats, or writing essays — but they could also be used to find cyber vulnerabilities, or, in principle, if they were more sophisticated, perhaps to help someone plan a terrorist attack, or to figure out how, not today but in the future, to make a biological weapon or some other kind of destructive device. And that’s a conversation we need to start having now, in part because the pace of the technology is much, much faster than the pace of policy discussions in Washington.
What do you see as potential drawbacks to AI, especially as we get to a point where we’re really developing new tech first and asking questions later?
That’s a great point. I think we should have reservations about AI technology because there are a lot of problems with it today: it’s unreliable, and it’s often very brittle, in the sense that AI systems can do well in some contexts and then fail, often quite dramatically, in others.
One of my favorite examples of this brittleness reportedly involved one of the early versions of AlphaGo, the AI agent that achieved superhuman performance at the game Go. If you changed the size of the board even slightly, its performance dropped off dramatically, because it was only trained on boards of one size. That’s a good example of how these systems are often quite brittle and fail to generalize from one situation to others.
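That failure mode, a model bound to the exact input distribution and even input shape it was trained on, is easy to demonstrate in miniature. The following is a toy sketch, not AlphaGo: a simple logistic-regression “board evaluator” fit only to 9×9 inputs scores well there, while a 13×13 board is not merely evaluated badly; the model cannot accept it at all.

```python
# Toy illustration of brittleness (hypothetical; not AlphaGo): a model fit
# to one board size has no graceful behavior on any other size.
import numpy as np

rng = np.random.default_rng(0)

def make_boards(size, n=2000):
    """Random +/-1 'boards' flattened to vectors; the (arbitrary) label is
    whether the first row sums positive."""
    X = rng.choice([-1.0, 1.0], size=(n, size * size))
    y = (X[:, :size].sum(axis=1) > 0).astype(float)
    return X, y

# Fit a logistic-regression evaluator on 9x9 boards only.
X_train, y_train = make_boards(9)
w = np.zeros(9 * 9)
for _ in range(200):                      # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)

preds = (X_train @ w) > 0
print(f"accuracy on 9x9 boards: {(preds == y_train.astype(bool)).mean():.2f}")

# A 13x13 board is not just handled worse -- the model cannot ingest it at all.
X_shift, _ = make_boards(13)
try:
    X_shift @ w
except ValueError as err:
    print("13x13 boards:", err)           # shape mismatch: total failure
```

AlphaGo’s reported failure was subtler than a shape error, but the root cause is the same: nothing in a trained model guarantees sensible behavior outside the distribution it saw during training.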
We’re seeing some of this on display very recently with the integration of these chatbots into search, for example. Both Microsoft and Google recently deployed AI chatbots publicly that weren’t ready. And the problem isn’t that Bing is declaring its love for users and telling someone it’s chatting with to leave their wife to be with it — I mean, that’s kind of odd — but the real problem is that the best AI scientists in the world don’t know how to stop these chatbots from doing that.
And so to me, that is the kind of thing that should give us pause as we’re thinking about how we use these AI systems in more real-world applications. For the time being, the risk that a chatbot hurts someone’s feelings is not world-shattering, but as we see AI integrated into more high-consequence applications, we absolutely want to make sure that these systems are going to do what we want them to do.
And, you know, the behavior of some of the major companies here has not exactly been super responsible. With OpenAI, and Microsoft, and Google, we’ve already seen a rush over the last couple of months to hastily deploy AI chatbots that were not at all ready, with the companies responding to each other in a competitive dynamic that’s really harmful, this sort of race to the bottom on safety.
Do you have a positive outlook on the future of AI at this point? What are you hopeful for?
It’s funny. I would not say that I’m optimistic about the technology. I mean, I’m fairly bullish about where AI is going in terms of capabilities. I just think we’ve seen tremendous progress and I think there’s no sign of it slowing down in the near term. But there are a lot of risks that come with AI. I would just say that I’m optimistic about society’s ability to handle those risks. I think that when we take a step back and look at human progress, this is the best time to be alive.
We’ve seen tremendous progress over the last several hundred years in lifting up people’s standard of living, in reducing poverty, in improving life expectancy, and reducing infant mortality. There are good reasons to be optimistic about the future and our ability to continue to use technology to improve our lives in a very meaningful way.
I do think that there are very significant challenges with artificial intelligence technology. The troubling thing is that these problems seem very difficult. The technical problem of getting AI to do what you want it to do is quite hard, and I think it’s important that we’re clear-eyed about these technical problems. Actually, I’m feeling quite encouraged that some of these relatively public failures have brought concerns about AI more into the mainstream and increased the number of people paying attention to the problem.
“Four Battlegrounds: Power in the Age of Artificial Intelligence” was released on February 28.