Silicon Valley is debating whether AI weapons should be allowed to decide to kill


In late September, Shield AI co-founder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous — meaning an AI algorithm would make the final decision to kill someone. “Congress doesn’t want that,” the defense tech founder told TechCrunch. “No one wants that.” 

But Tseng spoke too soon. Five days later, Anduril co-founder Palmer Luckey expressed an openness to autonomous weapons — or at least a heavy skepticism of arguments against them. The U.S.’s adversaries “use phrases that sound really good in a sound bite: Well, can’t you agree that a robot should never be able to decide who lives and dies?” Luckey said during a talk earlier this month at Pepperdine University. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?” 

When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn’t mean that robots should be programmed to kill people on their own, just that he was concerned about “bad people using bad AI.”

In the past, Silicon Valley has erred on the side of caution. Take it from Luckey’s co-founder, Trae Stephens. “I think the technologies that we’re building are making it possible for humans to make the right decisions about these things,” he told Kara Swisher last year. “So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.” 

The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean a human should always make the call, just that someone should be accountable.

To be fair, the stance of the U.S. government itself is similarly ambiguous. The U.S. military currently does not purchase fully autonomous weapons. Though some argue weapons like mines and missiles can operate autonomously, this is a qualitatively different form of autonomy than that of, say, a turret that identifies, acquires, and fires on targets without human intervention.

The U.S. does not ban companies from making fully autonomous lethal weapons, nor does it explicitly ban them from selling such weapons to foreign countries. Last year, the U.S. released updated guidelines for AI safety in the military that have been endorsed by many U.S. allies and that require top military officials to approve any new autonomous weapon; yet the guidelines are voluntary (Anduril said it is committed to following them), and U.S. officials have continuously said it's "not the right time" to consider any binding ban on autonomous weapons.

Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no question at all. He instead presented a hypothetical in which China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons.

“You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I’m a staffer who’s never played this game before,” he said. “I could destroy us in the battle.” 

When TechCrunch asked Lonsdale for further comment, he emphasized that defense tech companies shouldn’t be the ones setting the agenda on lethal AI. “The key context to what I was saying is that our companies don’t make the policy, and don’t want to make the policy: it’s the job of elected officials to make the policy,” he said. “But they do need to educate themselves on the nuance to do a good job.” 

He also reiterated a willingness to consider more autonomy in weapons. “It’s not a binary as you suggest — ‘fully autonomous or not’ isn’t the correct policy question. There’s a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do,” he said. “Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what’s necessary to win with American lives on the line.”

Activists and human rights groups have long tried and failed to establish international bans on autonomous lethal weapons — bans that the U.S. has resisted signing. But the war in Ukraine may have turned the tide against activists, providing both a trove of combat data and a battlefield for defense tech founders to test on. Currently, companies integrate AI into weapons systems, although they still require a human to make the final decision to kill. 

Meanwhile, Ukrainian officials have pushed for more automation in weapons, hoping it’ll give them a leg-up over Russia. “We need maximum automation,” said Mykhailo Fedorov, Ukraine’s minister of digital transformation, in an interview with The New York Times. “These technologies are fundamental to our victory.”

For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.’s hand. At a UN debate on AI arms last year, a Russian diplomat was notably coy. “We understand that for many delegations the priority is human control,” he said. “For the Russian Federation, the priorities are somewhat different.”

At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to “teach the Navy, teach the DoD, teach Congress” about the potential of AI to “hopefully get us ahead of China.” 

Lonsdale’s and Luckey’s affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets. 

Editor’s note: This story was updated with additional language describing autonomous weapons.


