A Saudi tech institute chooses the US over China
Sir Edward Byrne, the newly appointed head of King Abdullah University of Science and Technology in Saudi Arabia, or KAUST, has signaled that the institution will prioritize US technology and cut ties with China if those ties jeopardize its access to US-made chips.
Byrne, an Australian neuroscientist who served as president of King’s College London from 2014 to 2021, was named to the KAUST post last month. KAUST’s researchers depend on high-end chips from US companies such as Nvidia and AMD to train and run powerful artificial intelligence models and applications.
The US and China have both vied for influence on the Arabian Peninsula, but the US holds a clear advantage: It is home to most of the world’s top chip and AI companies, and it enforces strict export controls on US companies shipping to China or to intermediaries.
Byrne is following the lead of others in the kingdom. In May, the chief executive officer of Alat, a fund backed by the Saudi Public Investment Fund, said that if asked to choose between the US and China, the fund would divest from China.
At a global AI summit in Riyadh last month, the Saudi Data and Artificial Intelligence Authority announced a deal to buy 5,000 Nvidia graphics chips to help develop an Arabic large language model, pending US government approval. The future of Saudi tech depends on the US, and the government and its most important institutions seem to be signaling that while they don’t want to choose sides, it’s clear which side they would pick if forced.
When to worry about AI and the election
There are 21 days until Election Day in the United States — and voters in numerous states have already begun early voting. So far, artificial intelligence applications have had minimal effects on the election, though the technology has reared its head a few times.
During this US election cycle, generative AI has been used in an RNC ad, a fraudulent Joe Biden robocall targeting New Hampshire voters, and deepfake photos of Taylor Swift endorsing Donald Trump.
Microsoft and OpenAI say they’ve disrupted foreign influence campaigns from China, Iran, and Russia seeking to sow discord in the US, including around hot-button political issues such as Israel’s war in Gaza.
While malicious actors haven’t yet used AI tools in very novel ways, the technology has made it easier, quicker, and cheaper to generate online propaganda and disseminate it over social media. In Indonesia, for example, notorious Defense Minister Prabowo Subianto used a chubby-cheeked, friendly AI-generated avatar to appeal to voters in the presidential election. In Pakistan, Imran Khan used AI voice cloning to spread his political message and support his party’s candidates from prison.
Now, with the US election looming, there’s a very real possibility of a more malicious and effective AI campaign targeting Americans. So GZERO AI asked experts what they’re most concerned about in the run-up to Nov. 5. Their overriding concern was misinformation – how AI is used to create and distribute it, and how it could affect whether and how people vote.
Valerie Wirtschafter, a fellow in the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, for example, said she was concerned by the onslaught of generative AI images circulating on social media in the aftermath of Hurricane Helene — including ones alleging that the Biden administration wasn’t doing enough to support residents affected by the storm.
“These images were clearly AI, and when pointed out as such, the response was a simple shrug – that the images resonated because they ‘felt accurate’ anyways,” she said.
There hasn’t been any new federal legislation in the US regarding AI use around elections, and the Federal Election Commission recently chose to forgo new rulemaking on the matter ahead of November. Meanwhile, OpenAI, Anthropic, and most major AI companies have self-regulated, instituting rules that bar users from generating election-related materials, such as images of presidential candidates, with their tools. Many will refuse to provide voting information as well. That said, many of these rules are porous.
Wirtschafter said she’s most concerned about AI-generated media — particularly audio — being used to affect not how people vote but whether they vote. AI-generated audio, she said, could be used to “try to prevent a targeted but vital subset of the population from voting” or “sow confusion about where and how to vote.”
“While swing states have prepared for this possibility, it is still such a difficult task, and AI-generated content is most impactful at the local and highly targeted level.”
Scott Bade, a senior analyst in Eurasia Group’s geo-technology practice, said he’s concerned not only by the use of generative AI in the lead-up to the election but also by how politicians might invoke the technology to help cast doubt on things that are, in fact, true.
Like Wirtschafter, Bade said he’s most worried about anything that “muddies the waters and creates fear and confusion that can suppress votes on election day.”
But the threat won’t end after Americans go to the polls. The 2020 election and aftermath showed how conspiracy theories abound even without generative AI.
Politicians, especially those aligned with Trump, falsely claimed there was widespread voter fraud. Bade warned that AI might be used to affect how voters feel about the “sanctity of the ballot.”
So, what should we do about it? Ahead of the election, keep an eye on the source of the materials you’re viewing, check government websites for reliable voting information, and take everything you hear or see in this age of AI with a grain of salt – even if it confirms your prior assumptions.
“This type of content can be obviously AI-generated but still ‘feel’ correct,” Wirtschafter said.
Hard Numbers: Viruses everywhere, TikTok content moderation cuts, Nevada’s “at-risk” student saga, TSMC on the rise
70,500: Researchers at the University of Toronto used artificial intelligence to identify 70,500 new viruses through metagenomics, in which scientists sequence the genetic material in entire environmental samples. The team relied on a machine learning tool developed by Meta to find the new viruses and predict their protein structures.
700: TikTok reportedly cut 700 jobs, including many in Malaysia, and will transition much of its content moderation work to artificial intelligence. The cuts affect only about 0.6% of the social media company’s 110,000-person global workforce. Social media companies have long used a mix of human and automated systems to monitor user-generated content posted on their platforms.
200,000: Last year, the state of Nevada used an AI system to better identify which students in the state are “at risk” of falling behind academically and socially. But the AI, run by an outside contractor, set a much higher bar for that determination, incorporating factors far beyond income levels, formerly the most important metric. The number of “at-risk” students plummeted by about 200,000, leading the state to cut funding to many districts in need.
40: Stock analysts expect Taiwan Semiconductor Manufacturing Company to report a 40% profit increase when the chip fabrication giant releases its third-quarter earnings on Oct. 17. TSMC’s stock has already climbed 77% this year on surging demand from chip designers hungry to sell their products to AI companies.
DonBot is ready to debate
Bentley Hensel, a longshot candidate for the US House of Representatives in Virginia, wants his opponent to debate him. His rival is Rep. Don Beyer, who has spoken to GZERO AI in the past about going to graduate school in his 70s to study machine learning. (Read our April interview with Beyer here.)
But Hensel, a software engineer running as an independent, told Reuters he was frustrated that Beyer wouldn’t appear in any debates between now and Election Day, though the congressman did appear in a September forum with other candidates. So Hensel took a unique approach to getting “Beyer” to debate him: He created DonBot, an artificial intelligence chatbot trained to represent Beyer in a debate — without Beyer’s permission, of course.
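Hensel hasn’t published DonBot’s internals, but a persona chatbot like this is easy to assemble from off-the-shelf parts: a system prompt describing the persona plus a store of the candidate’s public statements. Here is a hypothetical sketch against an OpenAI-style chat API; the model name, prompt, and sample record below are illustrative assumptions, not details from Hensel’s project.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA_PROMPT = (
    "You are a debate stand-in for a sitting member of Congress. "
    "Answer questions using only the positions in the provided public "
    "record, and say so plainly when a topic is not covered by it."
)

def debate_reply(question: str, public_record: str) -> str:
    """Generate an in-persona answer grounded in the candidate's public record."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA_PROMPT},
            {"role": "system", "content": f"Public record:\n{public_record}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(debate_reply(
    "Where do you stand on AI regulation?",
    "Supports federal AI research funding; studied machine learning in graduate school.",
))
```

Grounding the bot in a fixed record, rather than letting the model improvise, is the obvious design choice here: It limits how badly the stand-in can misrepresent a real person, though it hardly resolves the consent problem.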
The debate will stream online on Oct. 17 and will feature Hensel, fellow independent David Kennedy, and DonBot. Representatives for Beyer did not respond to a request for comment from GZERO AI but told Reuters that the congressman still has no plans to participate in the October debate.
AMD has a fancy new chip to rival Nvidia
The US semiconductor designer AMD launched a new chip on Oct. 10. The Instinct MI325X is meant to compete with the upcoming Blackwell line of chips from market leader Nvidia.
Graphics processing chips from Nvidia, AMD, and Intel have been the lifeblood of the artificial intelligence boom, allowing the technology’s developers to train their powerful models and deploy them worldwide to users. Major tech companies have clamored to buy up valuable chips or pay to access large data centers full of them remotely through the cloud.
Lisa Su, CEO of AMD, claimed that the market for AI data centers will balloon by 60% a year and hit $500 billion by 2028. Still, investors weren’t convinced by what AMD showcased: The company’s stock fell 4% in trading Thursday, perhaps because AMD didn’t announce any big new deals with customers, though it bounced back 2% on Friday.
AMD’s new chips feature increased memory and a new architecture that the company promises will improve performance relative to prior models. Nvidia is expected to release its much-anticipated Blackwell chips by early next year, as the rivalry between the two most important AI chip designers in the world only gets hotter.
What two Nobel Prizes mean for AI
Artificial intelligence researchers won big at the Nobel Prizes this year, taking home not one but two of the esteemed international awards.
First, John Hopfield and Geoffrey Hinton won the Nobel Prize in physics for developing artificial neural networks, the machine-learning technique that has powered the current AI boom by replicating how the human brain processes information. Then, the Nobel committee awarded the chemistry prize to University of Washington biochemist David Baker as well as Google DeepMind’s Demis Hassabis and John Jumper. (Hassabis is DeepMind’s co-founder and CEO.) The trio was honored for developing techniques to use artificial intelligence to model and design proteins.
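Hopfield’s signature contribution is the network that bears his name: an associative memory that stores patterns and recalls them from corrupted input. As a flavor of the idea, here is a minimal sketch in Python using NumPy; it illustrates the general technique, not the laureates’ actual code.

```python
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Store binary (+1/-1) patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)   # strengthen connections between co-active units
    np.fill_diagonal(W, 0)    # Hopfield networks have no self-connections
    return W / len(patterns)

def recall(W: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Repeatedly update all units until the network settles on a stored memory."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties toward +1
    return state

# Store one 6-unit pattern, then recover it from a copy with one unit flipped.
memory = np.array([[1, -1, 1, -1, 1, -1]])
W = train(memory)
noisy = np.array([1, -1, -1, -1, 1, -1])
print(recall(W, noisy))  # -> [ 1 -1  1 -1  1 -1], the stored pattern
```

Each update lowers an energy function borrowed from the physics of magnetic spin systems, which is the link to physics that made this work a physics prize in the first place.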
The Nobel wins bring not only cash prizes (11 million Swedish crowns, or about $1.06 million) but also international recognition that could fuel further research and funding in artificial intelligence. Academic papers on innovative subjects tend to increase after the Nobel committee honors a discovery, Wired noted, as seen with the 2010 award for the isolation of graphene, a carbon structure.
Of course, AI is already the subject of a global industrial boom, but the Nobel Prizes are celebrations of what AI can do at its best — not warnings of how it can go wrong. Hinton, for his part, issued a warning after winning the physics prize. AI, he told CNN in an interview, “will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”
South Korea banned deepfakes. Is that a realistic solution for the US?
On Sept. 26, South Korea revised its law that criminalizes deepfake pornography. Now, it’s not just illegal to create and distribute this lewd digital material, but also to view it. Anyone found to possess, save, or even watch this content could face up to three years in jail or a $22,000 fine.
Deepfakes are AI mashups in which a person’s face or likeness is superimposed onto explicit content without their consent. It’s an issue that’s afflicted celebrities like Taylor Swift, but also private individuals targeted by people they know.
South Korea’s law is a particularly aggressive approach to combating a serious issue, one that’s much older than artificial intelligence itself. Fake nude images have been created with print photo cutouts since the 19th century and flourished in the computer age with Photoshop and other photo-editing tools. The problem has only been supercharged by the rise and widespread availability of deep learning models in recent years. Deepfakes can be weaponized to embarrass, blackmail, or hurt people — typically women — whether they’re famous or not.
While South Korea’s complete prohibition may seem attractive to those desperate to eliminate deepfakes, experts warn that such a ban — especially on viewing the material — is difficult to enforce and likely wouldn’t pass legal muster in the United States.
“I think some form of regulation is definitely needed in this space, and South Korea's approach is very comprehensive,” says Valerie Wirtschafter, a fellow at the Brookings Institution. “I do think it will be difficult to fully enforce just due to the global nature of the internet and the widespread availability of VPNs.”
In the US, at least 20 states have already passed laws addressing nonconsensual deepfakes, but they’re inconsistent. “Some are criminal in nature, others only allow for civil penalties. Some apply to all deepfakes, others only focus on deepfakes involving minors,” says Kevin Goldberg, a First Amendment specialist at the Freedom Forum.
“Creators and distributors take advantage of these inconsistencies and often tailor their actions to stay just on the right side of the law,” he adds. Additionally, many online abuses happen across state lines — if not across national borders — making it difficult to sue under state law.
Congress has introduced bills to tackle deepfakes, but none have yet passed. The Defiance Act, championed by Rep. Alexandria Ocasio-Cortez and Sens. Dick Durbin and Lindsey Graham, would create a civil right of action, allowing victims to sue people who create, distribute, or receive nonconsensual deepfakes. It passed the Senate in July but is still pending in the House.
But a full prohibition on sexually explicit deepfakes would likely run afoul of the First Amendment, which makes it very difficult for the government to ban speech — including explicit media.
“A similar law in the United States would be a complete nonstarter under the First Amendment,” says Corynne McSherry, legal director at the Electronic Frontier Foundation. She thinks current US law should protect Americans from some harms of deepfakes, many of which could be defamatory, invade privacy, or violate citizens’ right of publicity.
Many states, including California, have a right of publicity law that allows individuals to sue if their likeness is used without their consent, especially for commercial purposes. For a new deepfake law to pass First Amendment scrutiny, it would need to be narrowly tailored to address a very specific harm without infringing on protected speech, something that McSherry says would be very hard to do.
Despite the tricky First Amendment challenges, there is growing recognition of the need for some form of regulation, Wirtschafter says. “It is one of the most pernicious and damaging uses of generative AI, and it disproportionately targets women.”
Employing AI fraud: Fake job applicants and fake employers
On one side, employment scams surged in 2023, up 118% from the year prior, according to the Identity Theft Resource Center — largely due to the rise of AI. Scammers often pose as recruiters, advertising fake jobs to entice victims to cough up personal information. In 2022, consumers told the US Federal Trade Commission that they lost $367 million to these kinds of scams. And that was largely before the generative AI boom.
On the other side, real businesses are also wary of fake job applicants who can take advantage of remote work policies to interview and even get hired in order to steal money, collect an unearned salary, or gain access to company information. In 2022, the FBI reported an uptick in complaints regarding the use of deepfakes and stolen personal information to apply for remote work positions. “In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking,” the FBI warned. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
Two years later, the technology is only more sophisticated, with more convincing text generation, text-to-speech tools, deepfake audio, and personal avatars. AI tools, even if intended to make life and business easier for people and companies, can easily be weaponized by bad actors.