Should a robot be allowed to kill you?

How the next wave of technology is upending the global economy and its power structures
By MATT BERG 

With help from Derek Robertson

A Ukrainian soldier walks over rubble in a destroyed building after testing a drone on Jan. 23, 2023, in Kupiansk, Ukraine. | Spencer Platt/Getty Images
In the 1866 novel Crime and Punishment, Russian writer Fyodor Dostoevsky drills straight into a dark and perplexing question: Is it ever acceptable for a human to take another human’s life?
More than a century and a half later, a fitting reinterpretation would cast Raskolnikov, the homicidal main character, as a robot. That’s because military analysts and human rights advocates have been battling over a newer moral frontier: Is it ever okay for a fully autonomous machine to kill a human?
The answer, right now, seems to be no: not in any official sense, but by informal global consensus. That's despite experts' belief that fully autonomous weapons have already been deployed on battlefields in recent years.
But that question may be pushed to the official forefront very quickly in Europe: Ukrainian officials are developing so-called “killer robots,” possibly to be used in the country’s fight against Russia. Military experts warn that the longer the war goes on (the one-year mark arrives in February), the more likely we’ll see drones that can select, engage and kill targets without an actual human finger on the trigger.
Fully autonomous killer drones are “a logical and inevitable next step” in weapons development, Mykhailo Fedorov, Ukraine’s digital transformation minister, told the Associated Press earlier this month. He said Ukraine has been doing “a lot” of research and development on the topic, and believes “the potential for this is great in the next six months.”
You might think someone would be frantically trying to prevent this, and you’d be right: the Campaign to Stop Killer Robots, an international coalition of non-governmental organizations, has for a decade been pressuring governments and United Nations members to call for a preemptive ban on the weapons. And right now, it is very worried about Ukraine.
Deploying fully autonomous weapons “changes the relationship between people and technology by handing over life and death decision-making to machines,” Catherine Connolly, the group’s automated decision research manager, told Digital Future Daily.
The United Nations has been discussing the issue for years without coming to any kind of consensus. Groups like Stop Killer Robots, Human Rights Watch and the International Committee of the Red Cross have called for an international, legally binding treaty on autonomous weapons systems. That requires agreement among U.N. members, which has so far been impossible to achieve.
But there seems to be momentum in the anti-killer robot camp.
In October, 70 states delivered a joint statement on autonomous weapons systems at the U.N. General Assembly. In it, they called for “adopting appropriate rules and measures” for the weapons. It’s the largest ever cross-regional statement made to the U.N. on the issue, with signers including the United States, Germany, the United Kingdom and other highly militarized nations.
Not everyone’s in agreement, though. Some nations at the U.N. believe a preemptive ban could hinder their militaries’ ability to use AI tech in the future. And in the academic world, there’s some skepticism that the moral distinction is as clear as advocates assume. One provocative study even argues the weapons could be “good news,” going so far as to say concerns surrounding killer robots are totally unfounded.
“The reality is war is horrifying, horrible, and there’s always going to be [soldiers] shooting a bullet through someone’s head and splattering their guts all over the wall. Like, that’s not particularly pleasant, right? And it doesn’t matter too much if it’s a human doing it,” Zak Kallenborn, a George Mason University weapons innovation analyst, told Digital Future Daily.
For now, the pace of technology is saving us from having to decide. Many countries already have the fully autonomous technology developed, but it’s been hard to work out the kinks, Kallenborn said. Deploying killing machines that might accidentally mistake a school bus full of children for an enemy tank, for instance, would be a bad idea.
“Some of the issues that you’ve run into are that they’re not trustworthy or reliable, and it’s often tough to explain why they made a decision,” Kallenborn said. “It’s really tough to align the system and use it if you don’t really know” how it makes a decision.
One key question, as weapons stumble forward without clear regulations, is who would be held accountable for actions undertaken by a robot with a mind of its own.
Neither criminal law nor civil law guarantees that people directly involved in the use of killer robots would be held accountable, per a report from Human Rights Watch. If a civilian is mistakenly killed, it’s unclear who should face the consequences when there was no human input.
“When people say it doesn’t matter if it’s a machine that’s used … [humans] still have accountability and responsibility. It is humans who have the moral responsibility to make targeting decisions, and we can’t delegate that to machines,” Connolly said.
For now, the decade-long arguments rage on. The U.N. will meet again in March and May to discuss provisions for the technology, but if members can’t come to a consensus, the issue will be punted for another year.
“At this point, the time for talking is kind of done,” Connolly said. “It’s time for action.”
We all understand the worries about crypto and other unregulated blockchain products “contaminating” traditional finance, adding new risks and potential for FTX-style market failures.
But what about the art world?
Vanity Fair art columnist Nate Freeman reported last week on what happened when the musical chairs stopped last year in the world of dizzyingly high-priced NFTs, where legacy art collectors were making multi-million-dollar bids for some of the hottest tokens. As it turns out, a more familiar crypto-world presence might have been behind one of the biggest sales, when 107 Bored Ape tokens were sold at Sotheby’s for $24.4 million in 2021: Sam Bankman-Fried’s FTX, which some crypto sleuths on Twitter tied to the digital transaction chain behind the sale.
Which poses a bit of a legal problem, as this would represent, as Freeman puts it, “a major Yuga Labs investor inflating the value of Yuga Labs’ most valuable asset by bidding it up at auction.” A slew of lawsuits, paranoid recriminations, and convoluted efforts at creating tax write-offs have, of course, followed; Freeman’s report is well worth reading for the details. —Derek Robertson

A man using Google’s Daydream View VR headset. | Spencer Platt/Getty Images
What’s taking VR so long to get here — that is, into the average American’s daily life?
Spectacularly realistic 3D and AR technology already exists, as do immersive full-VR headsets like Meta’s Quest series. But the arrival of a “metaverse” in any meaningful sense is still very much in the future tense.
Metaverse evangelist Matthew Ball took on this question recently, and he blames — in part — our attachment to devices. A venture capitalist and author of “The Metaverse: And How It Will Revolutionize Everything,” Ball wrote on his website Monday about “Why VR/AR Gets So Far Away As It Comes Into Focus.” As he puts it, there are simply quite a number of electronic devices that we’re already attached to — not to mention more traditional offline hobbies (remember those?) — that will be hard for a clunky, expensive headset to elbow out.
“To drive adoption, VR games need to be better than the alternatives, such as TV, reading, board games, Dungeons & Dragons, video games, and whatever else,” Ball writes. “But for the most part, VR loses the leisure war. Yes, it offers greater immersion, more intuitive inputs, and more precise (or at least complex) controls. But the downsides are many… the average VR user can only play with a subsection of their friends — a significant drawback given the nature of VR’s applications.”
Ball ends on a somewhat optimistic note for the metaverse, however, noting that many of the augmented reality applications that it might require are already in play on our smartphones — citing Neal Stephenson’s remarks last year that “a lot of Metaverse content will be built for screens (where the market is) while keeping options open for the future growth of affordable headsets.” —Derek Robertson

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

