So You Want to Prevent a Dystopia?

When it comes to artificial intelligence there's good news and bad news. On the plus side, AI could save millions of lives a year by putting robots behind the wheels of cars or helping scientists discover new medicines. On the other hand, it could put you under surveillance, because a computer thinks your recent behavior patterns suggest you might be about to commit a crime.

So, how to reap the benefits and avoid the dystopia? It's a question of how AI systems are built, what companies and governments do with them, and how they handle basic problems of privacy, fairness, and accountability. Here's a quick rundown of how different countries (or groups of countries) are approaching the challenge of putting ethical guardrails around AI.

The European Union is trying to do the same thing in AI that it's already done on digital privacy: Putting citizens' rights first – but without scaring off the tech companies that can also deliver AI's benefits. A new set of ethical guidelines published this week gives AI engineers checklists they can use to make sure they are on the right track on issues like privacy and data quality, though it stopped short of blacklisting certain applications. Toothy regulation this is not, but just getting these ethical questions mapped out on official EU letterhead is a start. Although the guidelines are voluntary, one of the architects behind the bloc's data privacy policies has argued that legal heft will eventually be required to keep AI safe for people and to uphold democracy.

The US, meanwhile, is taking its usual hands-off approach. The Trump administration has asked bureaucrats to develop better technical standards for "trustworthy" AI, but the directive doesn't directly broach the subject of ethics. There has been progress in the private sector, though: the IEEE, an international standards organization, recently dropped a 300-page bomb of "Ethically Aligned Design" thinking, which lists eight general principles that designers should follow, including respect for human rights, giving people control over their data, and guarding against potential abuse. Still, it's a thorny challenge. Google's AI ethics board was recently scuttled after employees objected to a conservative board member's views on transgender rights and immigration.

Then there's China, where bureaucrats are wrestling with ethical issues like data privacy and transparency in AI algorithms, too. Like the EU, China wants to get out front on global regulation – partly because it thinks its internet companies will grow faster if it can set standards for AI, and partly because Beijing doesn't want a rerun of the situation from 30 years ago, when other countries set the rules of the road for the internet first. But while China may share European views on policing bias in algorithms, there is likely to be a sharper difference on issues like privacy, how "moral" or "ethical" AI is defined, and how ethics norms should be enforced.

The bottom line: Defining and enforcing acceptable boundaries of AI is a long-term challenge, but the guardrails that governments and industry put in place early on may determine whether we're heading for a new era of human progress or a mash-up of Blade Runner and Minority Report.
