In the coming years, the Navy will gain access to a rapidly growing profusion of sensors, not just through new fleets of unmanned vehicles combined with existing systems, but through multiservice sensors as well, as part of a joint operating environment. If the Navy is to maintain dominance in the U.S. Indo-Pacific Command Area of Responsibility (INDOPACOM AOR), it must be able to extract maximum insight from those sensor assets.
One of the key challenges in gaining that insight is resolving the inconsistencies that frequently arise when multiple sensors are looking at the same contact. Different sensors often have their own inherent strengths and weaknesses. One sonar sensor might have more precise bearing resolution on a contact, for example, allowing for a better targeting solution. But a different sonar sensor might have better narrowband frequency information, making contact classification more accurate. The more sensors in play, the more valuable data is available, but also the more discrepancies appear in that data and the more noise operators must sort through to make the best identification.
Machine learning and other forms of artificial intelligence will aid this process, but they can also add to the problem. In many cases, multiple algorithms will be looking at the same stream of sensor data, each making its own prediction of classification, location track, and mission intent based on that algorithm's particular strengths and weaknesses. Reconciling their differences may not be easy.
One advantage of machine learning is its ability to present a confidence value, or score, that a commander can use in decision-making. For example, machine learning algorithms drawing on data from multiple surface and undersea sensors might say there is a 99.99% chance the contact is a manmade object, a 95% chance it is a Chinese submarine, and an 85% chance it is a Han-class SSN. But how does a commander know whether that conclusion is reliable when there is so much variability among the sensors, and among the algorithms themselves?
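To make that variability concrete, the toy sketch below shows three hypothetical algorithms scoring the same contact and disagreeing. All algorithm names and numbers are invented for illustration; they are not drawn from any Navy system or real dataset.

```python
# Toy illustration of the variability problem: three hypothetical algorithms
# score the same contact and disagree. All values are invented for illustration.
contact_scores = {
    "algorithm_A": {"manmade": 0.9999, "submarine": 0.95, "Han-class SSN": 0.85},
    "algorithm_B": {"manmade": 0.97,   "submarine": 0.78, "Han-class SSN": 0.52},
    "algorithm_C": {"manmade": 0.999,  "submarine": 0.90, "Han-class SSN": 0.64},
}

# For each classification level, report how far apart the algorithms are.
for label in ("manmade", "submarine", "Han-class SSN"):
    values = [scores[label] for scores in contact_scores.values()]
    spread = max(values) - min(values)
    print(f"{label}: estimates range {min(values):.0%} to {max(values):.0%} "
          f"(spread {spread:.0%})")
```

The wider the spread for a given classification, the harder it is for an operator to know which confidence value, if any, to trust.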
The Navy can address this challenge by using AI in another way. The AI first fuses the outputs of the algorithms that process each sensor's data (algorithm fusion), then fuses that result with the fused results from other sensors using nonlinear models such as deep neural networks (sensor-data fusion). A third layer (context fusion) then refines the result by bringing together and analyzing additional Navy datasets relevant to contact identification.
The result of this multilayer, AI-enabled fusion is a far more accurate score for the commander, one produced by rapidly bringing together large numbers of sensors from manned and unmanned systems and significantly shortening the time to decision-making and action.
The three-step process works in a particular order: first algorithm fusion, then sensor-data fusion, then context fusion. Each step is critical to the final score.
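To illustrate the shape of such a pipeline, here is a minimal, self-contained sketch in Python. It assumes each layer can be approximated by a simple weighted or log-linear combination; a fielded system would use far richer models, including the deep neural networks mentioned above, and every class name, weight, and score below is hypothetical rather than taken from any operational system.

```python
# Illustrative sketch only: a toy three-layer fusion pipeline.
# All class names, weights, and scores are hypothetical examples.
import numpy as np

CLASSES = ["biologic", "surface ship", "submarine"]  # hypothetical contact classes

def algorithm_fusion(algo_scores, weights):
    """Layer 1: combine class-probability outputs from several algorithms that
    processed the same sensor's data, weighting each by a (hypothetical)
    measure of that algorithm's historical reliability."""
    fused = np.average(np.vstack(algo_scores), axis=0, weights=weights)
    return fused / fused.sum()  # renormalize to a probability vector

def sensor_data_fusion(sensor_scores):
    """Layer 2: fuse the per-sensor fused scores. A fielded system might use a
    deep neural network here; this sketch substitutes simple log-linear pooling
    so the example stays self-contained."""
    log_pool = np.sum([np.log(s + 1e-9) for s in sensor_scores], axis=0)
    fused = np.exp(log_pool - log_pool.max())
    return fused / fused.sum()

def context_fusion(scores, context_prior):
    """Layer 3: refine the fused score with contextual data (for example, known
    traffic patterns), represented here as a simple prior over classes."""
    posterior = scores * context_prior
    return posterior / posterior.sum()

# Hypothetical inputs: two sonar sensors, each analyzed by two algorithms.
sensor_a = algorithm_fusion(
    [np.array([0.05, 0.15, 0.80]), np.array([0.10, 0.30, 0.60])],
    weights=[0.7, 0.3])
sensor_b = algorithm_fusion(
    [np.array([0.02, 0.48, 0.50]), np.array([0.05, 0.25, 0.70])],
    weights=[0.5, 0.5])

fused = sensor_data_fusion([sensor_a, sensor_b])
final = context_fusion(fused, context_prior=np.array([0.2, 0.3, 0.5]))

for cls, p in zip(CLASSES, final):
    print(f"{cls}: {p:.2%}")
```

Running the sketch prints a single refined probability for each hypothetical class, the rough analogue of the fused score a commander would see after all three layers have run in order.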