The AI-Powered, Totally Autonomous Future of War Is Here

    A FLEET OF robot ships bobs gently in the warm waters of the Persian Gulf, somewhere between Bahrain and Qatar, maybe 100 miles off the coast of Iran. I am on the nearby deck of a US Coast Guard speedboat, squinting off what I understand to be the port side. On this morning in early December 2022, the horizon is dotted with oil tankers and cargo ships and tiny fishing dhows, all shimmering in the heat. As the speedboat zips around the robot fleet, I long for a parasol, or even a cloud.


    The robots do not share my pathetic human need for shade, nor do they require any other biological amenities. This is evident in their design. A few resemble typical patrol boats like the one I’m on, but most are smaller, leaner, lower to the water. One looks like a solar-powered kayak. Another looks like a surfboard with a metal sail. Yet another reminds me of a Google Street View car on pontoons.


    These machines have mustered here for an exercise run by Task Force 59, a group within the US Navy’s Fifth Fleet. Its focus is robotics and artificial intelligence, two rapidly evolving technologies shaping the future of war. Task Force 59’s mission is to swiftly integrate them into naval operations, which it does by acquiring the latest off-the-shelf tech from private contractors and putting the pieces together into a coherent whole.


    The exercise in the Gulf has brought together more than a dozen uncrewed platforms—surface vessels, submersibles, aerial drones. They are to be Task Force 59’s distributed eyes and ears: They will watch the ocean’s surface with cameras and radar, listen beneath the water with hydrophones, and run the data they collect through pattern-matching algorithms that sort the oil tankers from the smugglers.
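
    The fusion step at the heart of that pipeline can be pictured in miniature. Below is a minimal, hypothetical Python sketch (all names invented, and real naval data fusion is far more sophisticated) in which contacts reported by different sensors merge into a single track whenever they fall close enough together.

        import math

        def haversine_km(a, b):
            """Great-circle distance in kilometers between two (lat, lon) points."""
            lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
            h = (math.sin((lat2 - lat1) / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371 * math.asin(math.sqrt(h))

        def fuse(contacts, radius_km=1.0):
            """Greedily merge (sensor, lat, lon) contacts into shared tracks.
            A contact joins the first existing track within radius_km."""
            tracks = []  # each track: {"pos": (lat, lon), "sensors": set of names}
            for sensor, lat, lon in contacts:
                for track in tracks:
                    if haversine_km(track["pos"], (lat, lon)) < radius_km:
                        track["sensors"].add(sensor)
                        break
                else:
                    tracks.append({"pos": (lat, lon), "sensors": {sensor}})
            return tracks

        # A camera contact and a hydrophone contact roughly 150 meters apart
        # collapse into one track seen by both sensors.
        print(fuse([("camera", 26.100, 51.500), ("hydrophone", 26.101, 51.501)]))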


    A fellow human on the speedboat draws my attention to one of the surfboard-style vessels. It abruptly folds its sail down, like a switchblade, and slips beneath the swell. Called a Triton, it can be programmed to do this when its systems sense danger. It seems to me that this disappearing act could prove handy in the real world: A couple of months before this exercise, an Iranian warship seized two autonomous vessels, called Saildrones, which can’t submerge. The Navy had to intervene to get them back.


    The Triton could stay down for as long as five days, resurfacing when the coast is clear to charge its batteries and phone home. Fortunately, my speedboat won’t be hanging around that long. It fires up its engine and roars back to the docking bay of a 150-foot-long Coast Guard cutter. I head straight for the upper deck, where I know there’s a stack of bottled water beneath an awning. I size up the heavy machine guns and mortars pointed out to sea as I pass.


    The deck cools in the wind as the cutter heads back to base in Manama, Bahrain. During the journey, I fall into conversation with the crew. I’m eager to talk with them about the war in Ukraine and the heavy use of drones there, from hobbyist quadcopters equipped with hand grenades to full-on military systems. I want to ask them about a recent attack on the Russian-occupied naval base in Sevastopol, which involved a number of Ukrainian-built drone boats bearing explosives—and a public crowdfunding campaign to build more. But these conversations will not be possible, says my chaperone, a reservist from the social media company Snap. Because the Fifth Fleet operates in a different region, those on Task Force 59 don’t have much information about what’s going on in Ukraine, she says. Instead, we talk about AI image generators and whether they’ll put artists out of a job, about how civilian society seems to be reaching its own inflection point with artificial intelligence. In truth, we don’t know the half of it yet. It has been just a day since OpenAI launched ChatGPT, the conversational interface that would break the internet.


    Back at base, I head for the Robotics Operations Center, where a group of humans oversees the distributed sensors out on the water. The ROC is a windowless room with several rows of tables and computer monitors—pretty characterless but for the walls, which are adorned with inspirational quotes from figures like Winston Churchill and Steve Jobs. Here I meet Captain Michael Brasseur, the head of Task Force 59, a tanned man with a shaved head, a ready smile, and a sailor’s squint. (Brasseur has since retired from the Navy.) He strides between tables as he cheerfully explains how the ROC operates. “This is where all the data that’s coming off the unmanned systems is fused, and where we leverage AI and machine learning to get some really exciting insights,” Brasseur says, rubbing his hands together and grinning as he talks.


    The monitors flicker with activity. Task Force 59’s AI highlights suspicious vessels in the area. It has already flagged a number of ships today that did not match their identification signal, prompting the fleet to take a closer look. Brasseur shows me a new interface in development that will allow his team to perform many of these tasks on one screen, from viewing a drone ship’s camera feed to directing it closer to the action.
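
    The Navy has not published how this flagging logic works, but the basic idea is simple to sketch: compare what a ship claims over its identification signal with what the sensors actually observe. In the hypothetical Python below, every name, threshold, and data structure is invented for illustration.

        from dataclasses import dataclass

        # Rough speed ceilings (knots) by declared vessel type; values invented.
        TYPE_MAX_SPEED = {"tanker": 18.0, "cargo": 25.0, "fishing": 12.0}

        @dataclass
        class Track:
            mmsi: int             # identity the ship broadcasts over AIS
            declared_type: str    # vessel type claimed in the AIS message
            classified_type: str  # type inferred from camera/radar imagery
            speed_knots: float    # speed measured by radar

        def flag_suspicious(track: Track) -> list[str]:
            """Return human-readable reasons to take a closer look, if any."""
            reasons = []
            if track.classified_type != track.declared_type:
                reasons.append(f"looks like a {track.classified_type}, "
                               f"broadcasts as a {track.declared_type}")
            ceiling = TYPE_MAX_SPEED.get(track.declared_type)
            if ceiling is not None and track.speed_knots > ceiling:
                reasons.append(f"making {track.speed_knots} knots, implausible "
                               f"for a {track.declared_type}")
            return reasons

        # A "fishing" vessel doing 20 knots that the image classifier calls
        # a patrol craft gets queued for human review.
        print(flag_suspicious(Track(366123456, "fishing", "patrol", 20.0)))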




    Brasseur and others at the base stress that the autonomous systems they’re testing are for sensing and detection only, not for armed intervention.


    “The current focus of Task Force 59 is enhancing visibility,” Brasseur says. “Everything we do here supports the crewed vessels.” But some of the robot ships involved in the exercise illustrate how short the distance between unarmed and armed can be—a matter of swapping payloads and tweaking software. One autonomous speedboat, the Seagull, is designed to hunt mines and submarines by dragging a sonar array in its wake. Amir Alon, a senior director at Elbit Systems, the Israeli defense firm that created the Seagull, tells me that it can also be equipped with a remotely operated machine gun and torpedoes that launch from the deck. “It can engage autonomously, but we don’t recommend it,” he says with a smile. “We don’t want to start World War III.”


    No, we don’t. But Alon’s quip touches on an important truth: Autonomous systems with the capacity to kill already exist around the globe. In any major conflict, even one well short of World War III, each side will soon face the temptation not only to arm these systems but, in some situations, to remove human oversight, freeing the machines to fight at machine speed. In this war of AI against AI, only humans will die. So it is reasonable to wonder: How do these machines, and the people who build them, think?


    GLIMMERINGS OF AUTONOMOUS technology have existed in the US military for decades, from the autopilot software in planes and drones to the automated deck guns that protect warships from incoming missiles. But these are limited systems, designed to perform specified functions in particular environments and situations. Autonomous, perhaps, but not intelligent. It wasn’t until 2014 that top brass at the Pentagon began contemplating more capable autonomous technology as the solution to a much grander problem.


    Bob Work, a deputy secretary of defense at the time, was concerned that the nation’s geopolitical rivals were “approaching parity” with the US military. He wanted to know how to “regain overmatch,” he says—how to ensure that even if the US couldn’t field as many soldiers, planes, and ships as, say, China, it could emerge victorious from any potential conflict. So Work asked a group of scientists and technologists where the Department of Defense should focus its efforts. “They came back and said AI-enabled autonomy,” he recalls. He began working on a national defense strategy that would cultivate innovations coming out of the technology sector, including the newly emerging capabilities offered by machine learning.


    This was easier said than done. The DOD got certain projects built—including Sea Hunter, a $20 million experimental warship, and Ghost Fleet Overlord, a flotilla of conventional vessels retrofitted to perform autonomously—but by 2019 the department’s attempts to tap into Big Tech were stuttering. The effort to create a single cloud infrastructure to support AI in military operations became a political hot potato and was dropped. A Google project that involved using AI to analyze aerial images was met with a storm of public criticism and employee protest. When the Navy released its 2020 shipbuilding plan, an outline of how US fleets will evolve over the next three decades, it highlighted the importance of uncrewed systems, especially large surface ships and submersibles—but allocated relatively little money to developing them.


    In a tiny office deep in the Pentagon, a former Navy pilot named Michael Stewart was well aware of this problem. Charged with overseeing the development of new combat systems for the US fleet, Stewart had begun to feel that the Navy was like Blockbuster sleepwalking into the Netflix era. Years earlier, at Harvard Business School, he had attended classes given by Clay Christensen, an academic who studied why large, successful enterprises get disrupted by smaller market entrants—often because a focus on current business causes them to miss new technology trends. The question for the Navy, as Stewart saw it, was how to hasten the adoption of robotics and AI without getting mired in institutional bureaucracy.


    Others at the time were thinking along similar lines. That December, for instance, researchers at RAND, the government-funded defense think tank, published a report that suggested an alternate path: Rather than funding a handful of extravagantly priced autonomous systems, why not buy up cheaper ones by the swarm? Drawing on several war games of a Chinese invasion of Taiwan, the RAND report stated that deploying huge numbers of low-cost aerial drones could significantly improve the odds of US victory. By providing a picture of every vessel in the Taiwan Strait, the hypothetical drones—which RAND dubbed “kittens”—might allow the US to quickly destroy an enemy’s fleet. (A Chinese military journal took note of this prediction at the time, discussing the potential of xiao mao, the Chinese phrase for “kitten,” in the Taiwan Strait.)


    In early 2021, Stewart and a group of colleagues drew up a 40-page document called the Unmanned Campaign Framework. It outlined a scrappy, unconventional plan for the Navy’s use of autonomous systems, forgoing conventional procurement in favor of experimentation with cheap robotic platforms. The effort would involve a small, diverse team—specialists in AI and robotics, experts in naval strategy—that could work together to quickly implement ideas. “This is not just about unmanned systems,” Stewart says. “It is as much—if not more—an organizational story.”


    Stewart’s plan drew the attention of Vice Admiral Brad Cooper of the Fifth Fleet, whose territory spans 2.5 million square miles of water, from the Suez Canal around the Arabian Peninsula to the Persian Gulf. The area is filled with shipping lanes that are both vital to global trade and rife with illegal fishing and smuggling. Since the end of the Gulf War, when some of the Pentagon’s attention and resources shifted toward Asia, Cooper had been looking for ways to do more with less, Stewart says. Iran had intensified its attacks on commercial vessels, swarming them in armed speedboats and even striking with drones and remotely operated boats.


    Cooper asked Stewart to join him and Brasseur in Bahrain, and together the three began setting up Task Force 59. They looked at the autonomous systems already in use in other places around the world—for gathering climate data, say, or monitoring offshore oil platforms—and concluded that leasing and modifying this hardware would cost a fraction of what the Navy normally spent on new ships. Task Force 59 would then use AI-driven software to put the pieces together. “If new unmanned systems can operate in these complex waters,” Cooper told me, “we believe they can be scaled to the other US Navy fleets.”


    As they were setting up the new task force, those waters kept getting more complex. In the early hours of July 29, 2021, an oil tanker called Mercer Street was headed north along the coast of Oman, en route from Tanzania to the United Arab Emirates, when two black, V-shaped drones appeared on the horizon, sweeping through the clear sky before exploding in the sea. A day later, after the crew had collected some debris from the water and reported the incident, a third drone dive-bombed the roof of the ship’s control room, this time detonating an explosive that ripped through the structure, killing two members of its crew. Investigators concluded that three “suicide drones” made in Iran were to blame.


    The main threat on Stewart’s mind was China. “My goal is to come in with cheap or less expensive stuff very quickly—inside of five years—to send a deterrent message,” he says. But China is, naturally, making substantial investments in military autonomy too. A report out of Georgetown University in 2021 found that the People’s Liberation Army spends more than $1.6 billion on the technology each year—roughly on par with the US. The report also notes that autonomous vessels similar to those being used by Task Force 59 are a major focus of the Chinese navy. It has already developed a clone of the Sea Hunter, along with what is reportedly a large drone mothership.


    Stewart hadn’t noticed much interest in his work, however, until Russia invaded Ukraine. “People are calling me up and saying, ‘You know that autonomous stuff you were talking about? OK, tell me more,’” he says. Like the sailors and officials I met in Bahrain, he wouldn’t comment specifically on the situation—not about the Sevastopol drone-boat attack; not about the $800 million aid package the US sent Ukraine last spring, which included an unspecified number of “unmanned coastal defense vessels”; not about Ukraine’s work to develop fully autonomous killer drones. All Stewart would say is this: “The timeline is definitely shifting.”




    I AM IN San Diego, California, a main port of the US Pacific Fleet, where defense startups grow like barnacles. Just in front of me, in a tall glass building surrounded by palm trees, is the headquarters of Shield AI. Stewart encouraged me to visit the company, which makes the V-BAT, an aerial drone that Task Force 59 is experimenting with in the Persian Gulf. Although strange in appearance—shaped like an upside-down T, with wings and a single propeller at the bottom—it’s an impressive piece of hardware, small and light enough for a two-person team to launch from virtually anywhere. But it’s the software inside the V-BAT, an AI pilot called Hivemind, that I have come to see.


    I walk through the company’s bright-white offices, past engineers fiddling with bits of drone and lines of code, to a small conference room. There, on a large screen, I watch as three V-BATs embark on a simulated mission in the California desert. A wildfire is raging somewhere nearby, and their task is to find it. The aircraft launch vertically from the ground, then tilt forward and swoop off in different directions. After a few minutes, one of the drones pinpoints the blaze, then relays the information to its cohorts. They adjust their flight paths, moving closer to the fire to map its full extent.


    The simulated V-BATs are not following direct human commands. Nor are they following commands encoded by humans in conventional software—the rigid “if this, then that.” Instead, the drones are autonomously sensing and navigating their environment, planning how to accomplish their mission, and working together in a swarm. Shield AI’s engineers have trained Hivemind in part with reinforcement learning, deploying it on thousands of simulated missions, gradually encouraging it to zero in on the most efficient means of completing its task. “These are systems that can think and make decisions,” says Brandon Tseng, a former Navy SEAL who cofounded the company.
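
    Reinforcement learning itself is easiest to grasp in toy form. The sketch below is generic tabular Q-learning, not Shield AI's Hivemind, whose internals are not public: an agent earns a reward for finding a simulated fire quickly, and over thousands of runs its trial-and-error estimates converge on an efficient search policy.

        import random

        SIZE = 5                                      # 5x5 grid stands in for the desert
        FIRE = (4, 3)                                 # cell the agent must locate
        ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # moves: east, west, north, south
        q = {}                                        # Q[(state, action)] -> value estimate

        def step(state, action):
            """Apply a move, clamp to the grid, return (next_state, reward, done)."""
            nxt = (min(max(state[0] + action[0], 0), SIZE - 1),
                   min(max(state[1] + action[1], 0), SIZE - 1))
            # Big reward for finding the fire, small penalty per step for dawdling.
            return nxt, (10.0 if nxt == FIRE else -0.1), nxt == FIRE

        def train(episodes=5000, alpha=0.5, gamma=0.95, eps=0.1):
            for _ in range(episodes):
                state, done = (0, 0), False
                while not done:
                    # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
                    if random.random() < eps:
                        action = random.choice(ACTIONS)
                    else:
                        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
                    nxt, reward, done = step(state, action)
                    best_next = 0.0 if done else max(q.get((nxt, a), 0.0) for a in ACTIONS)
                    old = q.get((state, action), 0.0)
                    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                    state = nxt

        train()
        # The greedy policy now walks roughly straight from (0, 0) to the fire.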


    This version of Hivemind includes a fairly simple sub-algorithm that can identify simulated wildfires. Of course, a different set of sub-algorithms could help a drone swarm identify any number of other targets—vehicles, vessels, human combatants. Nor is the system confined to the V-BAT.
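
    That swappable sub-algorithm design can be pictured as a plug-in interface: the autonomy loop stays fixed while mission-specific detectors are slotted in. The Python sketch below uses invented names and stand-in models; it shows the shape of the idea, not Shield AI's code.

        from typing import Protocol

        class Detector(Protocol):
            def score(self, frame: bytes) -> float:
                """Confidence (0 to 1) that the target appears in this frame."""
                ...

        class WildfireDetector:
            def score(self, frame: bytes) -> float:
                return 0.0  # stand-in for a trained thermal/vision model

        class VesselDetector:
            def score(self, frame: bytes) -> float:
                return 0.0  # stand-in for a different trained model

        def patrol(frames: list[bytes], detector: Detector, threshold: float = 0.8):
            """One autonomy loop, many missions: report the frames that trip
            whichever detector was plugged in for this sortie."""
            return [i for i, frame in enumerate(frames) if detector.score(frame) > threshold]

        # Same patrol code, different payload of software:
        print(patrol([b"frame0", b"frame1"], WildfireDetector()))  # -> []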


    Hivemind is also designed to fly the F-16 fighter jet, and it can beat most human pilots who take it on in the simulator. (The company envisions this AI becoming a “copilot” in more recent generations of warplanes.) Hivemind also operates a quadcopter called Nova 2, which is small enough to fit inside a backpack and can explore and map the interiors of buildings and underground complexes.


    For Task Force 59—or any military organization looking to pivot to AI and robotics relatively cheaply—the appeal of these technologies is clear. They offer not only “enhanced visibility” on the battlefield, as Brasseur put it, but the ability to project power (and, potentially, use force) with fewer actual people on the job. Rather than assigning dozens of human drone operators to a search-and-rescue effort or a reconnaissance mission, you could send in a team of V-BATs or Nova 2s. Instead of risking the lives of your very expensively trained pilots in an aerial assault, you could dispatch a swarm of cheap drones, each one piloted by the same ace AI, each one an extension of the same hive mind.


    Still, as astonishing as machine-learning algorithms may be, they can be inherently inscrutable and unpredictable. During my visit to Shield AI, I have a brief encounter with one of the company’s Nova 2 drones. It rises from the office floor and hovers about a foot from my face. “It’s checking you out,” an engineer says. A moment later, the drone buzzes upward and zips through a mocked-up window on one side of the room. The experience is unsettling. In an instant, this little airborne intelligence made a determination about me. But how? Although the answer may be accessible to Shield AI’s engineers, who can replay and analyze elements of the robot’s decisionmaking, the company is still working to make this information available to “non-expert users.”


    One need only look to the civilian world to see how this technology can go awry—face-recognition systems that display racial and gender biases, self-driving cars that slam into objects they were never trained to see. Even with careful engineering, a military system that incorporates AI could make similar mistakes. An algorithm trained to recognize enemy trucks might be confused by a civilian vehicle. A missile defense system designed to react to incoming threats may not be able to fully “explain” why it misfired.


    These risks raise new ethical questions, akin to those introduced by accidents involving self-driving cars. If an autonomous military system makes a deadly mistake, who is responsible? Is it the commander in charge of the operation, the officer overseeing the system, the computer engineer who built the algorithms and networked the hive mind, the broker who supplied the training data?


    One thing is for sure: The technology is advancing quickly. When I met Tseng, he said Shield AI’s goal was to have “an operational team of three V-BATs in 2023, six V-BATs in 2024, and 12 V-BATs in 2025.” Eight months after we met, Shield AI launched a team of three V-BATs from an Air Force base to fly the simulated wildfire mission. The company also now boasts that Hivemind can be trained to undertake a range of missions—hunting for missile bases, engaging with enemy aircraft—and it will soon be able to operate even when communications are limited or cut off.


    Before I leave San Diego, I take a tour of the USS Midway, an aircraft carrier that was originally commissioned at the end of World War II and is now permanently docked in the bay. For decades, the ship carried some of the world’s most advanced military technology, serving as a floating runway for hundreds of aircraft flying reconnaissance and bombing missions in conflicts from Vietnam to Iraq. At the center of the carrier, like a cavernous metal stomach, is the hangar deck. Doorways on one side lead into a rabbit warren of corridors and rooms, including cramped sailors’ quarters, comfy officers’ bedrooms, kitchens, sick bays, even a barbershop and a laundry—a reminder that 4,000 sailors and officers at a time used to call this ship home.


    Standing here, I can sense how profound the shift to autonomy will be. It may be a long time before vessels without crews outnumber those with humans aboard, even longer than that before drone motherships rule the seas. But Task Force 59’s robot armada, fledgling as it is, marks a step into another world. Maybe it will be a safer world, one in which networks of autonomous drones, deployed around the globe, help humans keep conflict in check. Or maybe the skies will darken with attack swarms. Whichever future lies on the horizon, the robots are sailing that way.

