Artificial Intelligence & its application in military & defence domain : News & Discussion

THE DAWN OF ARTIFICIAL INTELLIGENCE IN NAVAL WARFARE
CONNOR S. MCLEMORE AND HANS LAUZEN

JUNE 12, 2018

COMMENTARY



The U.S. Navy is investing real money to integrate artificial intelligence (AI) into the force, requesting $62.5 million in the FY19 Defense Department budget for AI and rapid prototyping. As the technology matures, the Navy needs to adapt by displacing human intelligence in roles for which AI is better suited while being aware of the many roles in which human intelligence will still have an edge. The Navy should identify candidates for automation where, relative to human intelligence, AI is likely to be increasingly fast, agile, or low-cost. But leadership should also understand where AI isn’t likely to be applicable and comprehend the implementation difficulties the Navy faces relative to other government and commercial organizations. Discussions around AI need to evolve from “we need it” to “this is how we get it.” It’s time to think about some fundamental concepts Navy decision makers need to know to help get AI right.

The Navy should invest in these capabilities for tasks with rules or patterns that are predictable and difficult to disrupt, and should avoid automating tasks with rules and patterns that change unpredictably. The service should focus on collecting data efficiently, finding effective communications pathways in the absence of reliable internet access, especially at sea, and customizing already available algorithms for naval purposes. More broadly, leadership should foster trust in these new capabilities through transparent and deliberate acquisition processes and by making clear to the rank and file how human and artificial intelligence will work together in future combat.

Which Tasks Is Artificial Intelligence Suited For?

The pursuit of “perfect AI” is a fool’s errand. To paraphrase George E.P. Box, “Essentially all AI is wrong, but some is useful.” Useful AI generates good-enough results at least a little faster, better, or cheaper than those produced by human intelligence. AI isn’t inherently better or worse than human intelligence but, depending on the task, can perform considerably better or worse than human intelligence.

Algorithms can be trained to complete narrow, specific, anticipated tasks with describable and predictable rules. When those conditions are in place, algorithms will improve faster than any sailor. This makes tasks involving repetitive and manually intensive training likely candidates for automation. AI can already outperform humans at many tremendously complicated tasks that meet these criteria. In 2017, for the first time, AI was able to consistently beat professionals at heads-up (one-on-one) poker, a game involving hidden information. AI overcame the “fog of war” inherent in the game by recognizing and taking advantage of winning patterns by self-generating extensive data on poker hands. In poker, pattern recognition is useful—thus, AI provides an advantage over humans. Naval tasks that meet these criteria and could benefit from AI include scheduling combat logistics force replenishments at sea and planning daily aircraft routing for amphibious-ready groups.
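To make the "describable, predictable rules" criterion concrete, a replenishment-scheduling task like the one mentioned above can be reduced to a few fixed rules and automated directly. The following is an illustrative toy sketch only; the ship names, fuel figures, and single-oiler assumption are all invented.

```python
# Toy greedy scheduler for at-sea replenishment: a task with fixed,
# describable rules. All ship data below is hypothetical.

def schedule_replenishment(ships, capacity):
    """Serve ships lowest-fuel-first until one oiler's capacity is spent.
    Returns the list of ships scheduled for replenishment, in order."""
    by_need = sorted(ships.items(), key=lambda kv: kv[1]["fuel_pct"])
    plan, remaining = [], capacity
    for name, state in by_need:
        need = state["required_tons"]
        if need <= remaining:
            plan.append(name)
            remaining -= need
    return plan

fleet = {
    "DDG-51":  {"fuel_pct": 35, "required_tons": 400},
    "CG-62":   {"fuel_pct": 60, "required_tons": 250},
    "DDG-110": {"fuel_pct": 20, "required_tons": 500},
}
print(schedule_replenishment(fleet, capacity=900))  # lowest-fuel ships first
```

Because the rules never change mid-game, an optimizer of this kind can be trusted to improve with better data; the same task with unpredictable rule changes would not qualify.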

Although “narrow” AI can be trained to learn, it can’t learn to think. Thinking AI, or artificial general intelligence, exists only in science fiction. By contrast, narrow AI can find correlations in data but can’t actually comprehend its own actions. Both humans and AI can accumulate experience, knowledge, and skills, but only humans can put them into context. This is why IBM’s Watson, which was successfully trained to win Jeopardy, can’t engage in the sort of unstructured learning essential to earning a college diploma. Because AI can’t learn to think, even as the capability matures, human intelligence will remain more useful for tasks with indescribable or unpredictable rules. Imagine a poker competition where, instead of playing variants of poker with fixed rules, players are allowed to change rules each hand to make up new games. Unpredictable rule changes fool even the best current AI approaches because they can’t put them in context without human help. The future of man-machine teaming involves AI performing the repetitive drudge work teamed with humans to think through unpredictable tasks. Autonomous cars are a good example—their autopilot systems can be trusted in predictable driving conditions, but when the unanticipated occurs, they can’t adapt without human oversight. Autonomous naval ships and aircraft will be no different.

Automated pattern recognition is an important subset of AI that is becoming integral to many widely used AI applications. Pattern recognition, once automated, is useful for rapidly conducting intensive pattern searches in large amounts of data and can often produce faster (and cheaper) results than traditional statistical techniques. Typically, the most labor-intensive part of automating a pattern recognition process is preparing the data that essentially “trains” a machine learning algorithm to recognize patterns. Before algorithms can be trained, they require large amounts of clean (error-free), organized data. The dirty secret of machine learning is that human labor is often the only way to efficiently clean training data. For example, before a machine learning algorithm can be trained to find a warship in an image or video, humans first have to categorize thousands of images of warships. However, once trained, a machine learning algorithm typically only needs new training data when the patterns of interest change.
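The role of human-labeled training data described above can be sketched with a deliberately tiny classifier. This is not how production image recognition works (that uses deep networks over pixels); it is a minimal stand-in showing that the labels, supplied by people, are what the model learns from. The feature values and class names are invented.

```python
# Minimal supervised pattern recognizer: humans supply the labels,
# the algorithm only averages and compares. Features are hypothetical
# (e.g. vessel length, radar-return strength).
import math

def train_centroids(labeled):
    """Average the feature vectors belonging to each human-provided label."""
    sums, counts = {}, {}
    for features, label in labeled:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest learned centroid."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], features))

# The expensive, human part: labeling each example by hand.
training = [([150.0, 9.0], "warship"), ([160.0, 8.5], "warship"),
            ([30.0, 2.0], "fishing"), ([25.0, 1.5], "fishing")]
model = train_centroids(training)
print(classify(model, [145.0, 8.0]))  # → warship
```

Note that nothing in the code produces the labels: if the humans mislabel the training set, or the patterns of interest change, the model silently degrades, which is exactly the retraining burden the paragraph describes.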

However, automated pattern recognition can be highly susceptible to small changes in patterns and can be deceived by opponents with the ability to disrupt those patterns. The pattern recognition software trained to recognize a warship should not be trusted to reliably recognize a vessel whose visual characteristics can be changed in unanticipated ways. The Navy should only use automated pattern recognition in situations where the patterns are less vulnerable to adversary manipulation. On the other hand, for stable, difficult-to-disrupt patterns, automated searches are often useful for generating valuable insights at speed.

What the Navy Needs

Beyond broad conceptual recognition of the opportunities and limits of naval AI, building and integrating these systems will require four things: data sources, communication paths and databases, algorithms, and interfaces. While addressing these four requirements during the acquisition process does not guarantee the resulting capabilities will be useful, ignoring any of them will make it significantly more likely that the resulting AI system will not be useful.

Access to the right data is necessary for any successful naval applications of AI. For the Navy, the relevant data is often stored in databases that are difficult to access. The long-term solution may involve more integration of the Navy’s legacy databases or transfer of databases to a “cloud” or “data lake.” More value can probably be unlocked faster by simply making the Navy’s legacy databases more accessible. But because data requires resources to collect, even organizations that have efficient data environments must be selective in the data they choose to gather.

The challenge is not just about data access, but about the economy of that data—the value versus the price of acquisition. In naval warfare, data is expensive because the data sources and communication paths required to collect and transmit data are expensive. Other services face similar problems: Last year, the Air Force cancelled its Air Operations Center 10.2 contract to convert “raw data into actionable information that is used to direct battlefield activities” after project costs surged from an original $374 million to $745 million. The Navy should explicitly define which data are necessary for generating the desired information for specific AI applications, and determine how much that data will cost. Unnecessary additional features can become unexpected drivers of data requirements. By weighing the costs and benefits of its data early, the Navy can pay only for the data it needs.

Another challenge facing the Navy’s AI efforts is that, at sea, it can’t take advantage of the speed, high bandwidth, and low cost of the internet as its primary communications path. As such, the service will remain dependent on communication paths that are extremely expensive, manually intensive, “stovepiped,” and low-bandwidth, such as radios and data links. To make matters worse, adversaries are expected to contest those paths. The Navy’s continued reliance on relatively tiny streams of expensive data will be a challenge even aside from issues surrounding AI.

The Navy, not contractors, should own all data and analysis generated from it. Relinquishing that ownership will prevent the service from switching contractors without risking data loss. For example, Google recently announced that it would not be renewing its contract on Project Maven, a program designed to automate object recognition in imagery acquired from drone surveillance. The Defense Department owned the data, though, so several other capable AI companies are likely poised to step into Google’s shoes.

The Navy’s costs to acquire algorithms should remain low relative to the costs of sensors, communication paths, and databases. Decision makers should think of algorithms as things to be customized, not built from scratch. A handful of problems will be exceptions and require completely new algorithms, but for many tasks, analogous algorithms capable of quickly providing insights with minor modifications are already freely available from academia and industry. Additionally, the most widely used AI software, including TensorFlow, is free and open-source. Free software and the same widely published algorithms—such as the sort used by UPS to route its delivery truck fleet—can, with minor modifications, be used to create a real-time naval weapon-target assignment plan.
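As a hedged illustration of the customization point above, the classic weapon-target assignment problem can be attacked with a very simple greedy heuristic before any exotic algorithm is needed. The probability-of-kill values and weapon names below are invented, and real planners would use proper optimization solvers; this only shows how little new code the core assignment logic requires.

```python
# Greedy sketch of weapon-target assignment (illustrative only; the
# pk table is hypothetical and real systems use optimization solvers).

def assign_greedy(pk):
    """pk[w][t] = probability weapon w kills target t.
    Repeatedly take the best remaining (weapon, target) pair."""
    pairs = sorted(((p, w, t)
                    for w, row in pk.items()
                    for t, p in row.items()), reverse=True)
    used_w, used_t, plan = set(), set(), {}
    for p, w, t in pairs:
        if w not in used_w and t not in used_t:
            plan[w] = t
            used_w.add(w)
            used_t.add(t)
    return plan

pk = {"SM-2": {"raid-1": 0.9, "raid-2": 0.6},
      "ESSM": {"raid-1": 0.7, "raid-2": 0.8}}
print(assign_greedy(pk))  # {'SM-2': 'raid-1', 'ESSM': 'raid-2'}
```

The expensive parts of fielding this, consistent with the article's argument, are the sensors and data feeds that populate the pk table, not the assignment algorithm itself.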

As AI applications grow in the Navy, it will be critical to monitor and exercise oversight of contractors. Defense contractors will have to make implicit ethical choices in their algorithm designs and selection of training data. The Navy can provide honest governance and oversight of those choices by having its own algorithm experts set rules for contractors to follow because, if contractor-produced algorithms go wrong, it is the service itself that will be held accountable.

Finally, AI systems require good interfaces to efficiently connect actors, human and machine, to timely, understandable results. Without usable interfaces, information generated by algorithms can’t be acted upon. Beware: Sleek interfaces may be used to mask terrible AI!

A Final Element: Building Trust

For the Navy, too much trust in AI is likely to result in more harm than too little trust. Still, operators may have cause for skepticism. Once integrated with naval weapon systems, AI applications will have potential to do enormous damage quickly. Leadership shouldn’t be too quick to conclude that sailors “don’t get it” if they raise questions. Currently, most AI applications involve techniques that are not easily explained. In commercial AI products, explainability may be less important as long as the algorithms work. How does Uber assign drivers and determine routes? How does Google generate search results? How does Tesla’s autopilot work? Many AI consumers don’t know. Sailors shouldn’t be expected to be experts on algorithmic techniques either, but they will need a stronger understanding of AI’s capabilities and limitations. Designing naval AI acquisition with understandable benchmarks that describe a “good enough, fast enough” solution will help build trust in the capabilities as those benchmarks are achieved.

Tasks the Navy should start to automate include dynamic frequency allocation in communications and electronic warfare plans, real-time shipboard weapon pairings to swarming threats, and coordinating swarming systems to efficiently target distributed moving contacts. In these cases, AI could provide solutions comparable in quality to those currently produced through staff work. However, AI solutions would be available orders of magnitude faster than manual solutions and could quickly update as conditions change. Algorithms to perform each task are already freely available from academia, the rules of each task are described in detail in Navy doctrine and tactical publications, and the Navy already collects the necessary data to perform the tasks manually. Automating these tasks would be useful because the Navy’s existing manual command and control decision structure is so slow that it risks being overwhelmed in a real fight against a capable, swarming adversary.

AI is both imperfect and useful. It can be used to unlock great value in some tasks while it may be useless, or even dangerous, in others. Useful naval AI systems will require data sources, communication paths and databases, algorithms, and interfaces. Tasks involving repetitive and manually intensive training will be increasingly automated, and tasks with indescribable or unpredictable rules won’t. The Navy will still need educated, thinking sailors who aren’t easily fooled by changing rules and patterns. By teaming human and artificial intelligence the right way, the Navy can create a more lethal fighting force, fit for the future of naval combat.



Lieutenant Commander Connor McLemore is an E-2C naval flight officer with numerous operational deployments during 18 years of service in the U.S. Navy. He is a graduate of the United States Navy Fighter Weapons School (Topgun) and an operations analyst with Master’s degrees from the Naval Postgraduate School in Monterey, California and the Naval War College in Newport, Rhode Island. In 2014, he returned to the Naval Postgraduate School as a Military Assistant Professor and the Operations Research Program Officer. He is currently with the Office of the Chief of Naval Operations Assessment Division (OPNAV N81) in Washington D.C.

Lieutenant Hans Lauzen is a Navy Information Professional officer currently serving as an assured communication analyst within the Office of the Chief of Naval Operations Assessment Division (OPNAV N81) in Washington D.C., where he interprets scientific studies and wargame results to guide strategic investments. He previously served as a surface warfare officer. He is a candidate for a Master’s Degree in Business Administration at the University of Virginia.

The views expressed here are theirs alone and do not reflect those of the U.S. Navy.

Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America’s most prized technological assets—a Tesla electric car.

The team, from the security lab of the Chinese tech giant Tencent, demonstrated several ways to fool the AI algorithms on Tesla’s car. By subtly altering the data fed to the car’s sensors, the researchers were able to bamboozle and bewilder the artificial intelligence that runs the vehicle.

In one case, a TV screen contained a hidden pattern that tricked the windshield wipers into activating. In another, lane markings on the road were ever-so-slightly modified to confuse the autonomous driving system so that it drove over them and into the lane for oncoming traffic.
Tesla’s algorithms are normally brilliant at spotting drops of rain on a windshield or following the lines on the road, but they work in a way that’s fundamentally different from human perception. That makes such “deep learning” algorithms, which are rapidly sweeping through different industries for applications such as facial recognition and cancer diagnosis, surprisingly easy to fool if you find their weak points.
Leading a Tesla astray might not seem like a strategic threat to the United States. But what if similar techniques were used to fool attack drones, or software that analyzes satellite images, into seeing things that aren’t there—or not seeing things that are?

Artificial intelligence-gathering
Around the world, AI is already seen as the next big military advantage.
Early this year, the US announced a grand strategy for harnessing artificial intelligence in many areas of the military, including intelligence analysis, decision-making, vehicle autonomy, logistics, and weaponry. The Department of Defense’s proposed $718 billion budget for 2020 allocates $927 million for AI and machine learning. Existing projects include the rather mundane (testing whether AI can predict when tanks and trucks need maintenance) as well as things on the leading edge of weapons technology (swarms of drones).
The Pentagon’s AI push is partly driven by fear of the way rivals might use the technology. Last year Jim Mattis, then the secretary of defense, sent a memo to President Donald Trump warning that the US is already falling behind when it comes to AI. His worry is understandable.
In July 2017, China articulated its AI strategy, declaring that “the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security.” And a few months later, Vladimir Putin of Russia ominously declared: “Whoever becomes the leader in [the AI] sphere will become the ruler of the world.”
The ambition to build the smartest, and deadliest, weapons is understandable, but as the Tesla hack shows, an enemy that knows how an AI algorithm works could render it useless or even turn it against its owners. The secret to winning the AI wars might rest not in making the most impressive weapons but in mastering the disquieting treachery of the software.

Battle bots
On a bright and sunny day last summer in Washington, DC, Michael Kanaan was sitting in the Pentagon’s cafeteria, eating a sandwich and marveling over a powerful new set of machine-learning algorithms.
A few weeks earlier, Kanaan had watched a video game in which five AI algorithms worked together to very nearly outmaneuver, outgun, and outwit five humans in a contest that involved controlling forces, encampments, and resources across a complex, sprawling battlefield. The brow beneath Kanaan’s cropped blond hair was furrowed as he described the action, though. It was one of the most impressive demonstrations of AI strategy he’d ever seen, an unexpected development akin to AI advances in chess, Atari, and other games.
The war game had taken place within Dota 2, a popular sci-fi video game that is incredibly challenging for computers. Teams must defend their territory while attacking their opponents’ encampments in an environment that is more complex and deceptive than any board game. Players can see only a small part of the whole picture, and it can take about half an hour to determine if a strategy is a winning one.
The AI combatants were developed not by the military but by OpenAI, a company created by Silicon Valley bigwigs including Elon Musk and Sam Altman to do fundamental AI research. The company’s algorithmic warriors, known as the OpenAI Five, worked out their own winning strategies through relentless practice, and by responding with moves that proved most advantageous.
AI-guided missiles could be blinded by adversarial data, and perhaps even steered back toward friendly targets.
It is exactly the type of software that intrigues Kanaan, one of the people tasked with using artificial intelligence to modernize the US military. To him, it shows what the military stands to gain by enlisting the help of the world’s best AI researchers. But whether they are willing is increasingly in question.

Kanaan was the Air Force lead on Project Maven, a military initiative aimed at using AI to automate the identification of objects in aerial imagery. Google was a contractor on Maven, and when other Google employees found that out, in 2018, the company decided to abandon the project. It subsequently devised an AI code of conduct saying Google would not use its AI to develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
Workers at some other big tech companies followed by demanding that their employers eschew military contracts. Many prominent AI researchers have backed an effort to initiate a global ban on developing fully autonomous weapons.
To Kanaan, however, it would be a big problem if the military couldn’t work with researchers like those who developed the OpenAI Five. Even more disturbing is the prospect of an adversary gaining access to such cutting-edge technology. “The code is just out there for anyone to use,” he said. He added: “War is far more complex than some video game.”

The AI surge
Kanaan is generally very bullish about AI, partly because he knows firsthand how useful it stands to be for troops. Six years ago, as an Air Force intelligence officer in Afghanistan, he was responsible for deploying a new kind of intelligence-gathering tool: a hyperspectral imager. The instrument can spot objects that are normally hidden from view, like tanks draped in camouflage or emissions from an improvised bomb-making factory. Kanaan says the system helped US troops remove many thousands of pounds of explosives from the battlefield. Even so, it was often impractical for analysts to process the vast amounts of data collected by the imager. “We spent too much time looking at the data and not enough time making decisions,” he says. “Sometimes it took so long that you wondered if you could’ve saved more lives.”
A solution could lie in a breakthrough in computer vision by a team led by Geoffrey Hinton at the University of Toronto. It showed that an algorithm inspired by a many-layered neural network could recognize objects in images with unprecedented skill when given enough data and computer power.
Training a neural network involves feeding in data, like the pixels in an image, and continuously altering the connections in the network, using mathematical techniques, so that the output gets closer to a particular outcome, like identifying the object in the image. Over time, these deep-learning networks learn to recognize the patterns of pixels that make up houses or people. Advances in deep learning have sparked the current AI boom; the technology underpins Tesla’s autonomous systems and OpenAI’s algorithms.
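The training loop described above, feeding data in and nudging the connections so the output moves toward the target, can be sketched with a single artificial neuron. This is a deliberately minimal, hypothetical example with made-up data; real deep networks stack many such units into layers and use the same idea at vastly larger scale.

```python
# Minimal sketch of the training loop: forward pass, measure the error,
# nudge the weights in the direction that reduces it. Single linear
# neuron, invented toy data.
def train(samples, epochs=500, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = w[0] * x[0] + w[1] * x[1] + b   # forward pass
            err = out - target                     # how wrong we are
            w[0] -= lr * err * x[0]                # adjust connections
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Learn a simple pattern: output high only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
pred = w[0] + w[1] + b   # model's output for input [1, 1]
print(round(pred))       # rounds to 1
```

Everything a deep network "knows" lives in numeric weights adjusted this way, which is also why, as the next paragraphs explain, carefully crafted inputs can exploit those weights.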
Kanaan immediately recognized the potential of deep learning for processing the various types of images and sensor data that are essential to military operations. He and others in the Air Force soon began lobbying their superiors to invest in the technology. Their efforts have contributed to the Pentagon’s big AI push.
But shortly after deep learning burst onto the scene, researchers found that the very properties that make it so powerful are also an Achilles’ heel.
Just as it’s possible to calculate how to tweak a network’s parameters so that it classifies an object correctly, it is possible to calculate how minimal changes to the input image can cause the network to misclassify it. In such “adversarial examples,” just a few pixels in the image are altered, leaving it looking just the same to a person but very different to an AI algorithm. The problem can arise anywhere deep learning might be used—for example, in guiding autonomous vehicles, planning missions, or detecting network intrusions.
Amid the buildup in military uses of AI, these mysterious vulnerabilities in the software have been getting far less attention.

Moving targets
One remarkable object serves to illustrate the power of adversarial machine learning. It’s a model turtle.
To you or me it looks normal, but to a drone or a robot running a particular deep-learning vision algorithm, it seems to be … a rifle. In fact, at one point the unique pattern of markings on the turtle’s shell could be recrafted so that an AI vision system made available through Google’s cloud would mistake it for just about anything. (Google has since updated the algorithm so that it isn’t fooled.)
The turtle was created not by some nation-state adversary, but by four guys at MIT. One of them is Anish Athalye, a lanky and very polite young man who works on computer security in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In a video on Athalye’s laptop showing one of the turtles being tested (some of the models were stolen at a conference, he says), the turtle is rotated through 360 degrees and flipped upside down. The algorithm detects the same thing over and over: “rifle,” “rifle,” “rifle.”
The earliest adversarial examples were brittle and prone to failure, but Athalye and his friends believed they could design a version robust enough to work on a 3D-printed object. This involved modeling a 3D rendering of objects and developing an algorithm to create the turtle, an adversarial example that would work at different angles and distances. Put more simply, they developed an algorithm to create something that would reliably fool a machine-learning model.
The military applications are obvious. Using adversarial algorithmic camouflage, tanks or planes might hide from AI-equipped satellites and drones. AI-guided missiles could be blinded by adversarial data, and perhaps even steered back toward friendly targets. Information fed into intelligence algorithms might be poisoned to disguise a terrorist threat or set a trap for troops in the real world.
Athalye is surprised by how little concern over adversarial machine learning he has encountered. “I’ve talked to a bunch of people in industry, and I asked them if they are worried about adversarial examples,” he says. “The answer is, almost across the board, no.”
Fortunately, the Pentagon is starting to take notice. This August, the Defense Advanced Research Projects Agency (DARPA) announced several big AI research projects. Among them is GARD, a program focused on adversarial machine learning. Hava Siegelmann, a professor at the University of Massachusetts, Amherst, and the program manager for GARD, says these attacks could be devastating in military situations because people cannot identify them. “It’s like we’re blind,” she says. “That’s what makes it really very dangerous.”
The challenges presented by adversarial machine learning also explain why the Pentagon is so keen to work with companies like Google and Amazon as well as academic institutions like MIT. The technology is evolving fast, and the latest advances are taking hold in labs run by Silicon Valley companies and top universities, not conventional defense contractors.
Crucially, they’re also happening outside the US, particularly in China. “I do think that a different world is coming,” says Kanaan, the Air Force AI expert. “And it’s one we have to combat with AI.”
The backlash against military use of AI is understandable, but it may miss the bigger picture. Even as people worry about intelligent killer robots, perhaps a bigger near-term risk is an algorithmic fog of war—one that even the smartest machines cannot peer through.

Military artificial intelligence can be easily and dangerously fooled
 

@Gautam @nair @Ashwin Please merge the threads with revised thread name as "Artificial Intelligence & its application in military & defence domain : News & Discussion". Thank you.