Sir Edward Byrne, recently named head of King Abdullah University of Science and Technology, or KAUST, in Saudi Arabia, signaled that the institution will prioritize US technology and cut ties with China if those ties jeopardize its access to US-made chips.
Byrne, an Australian neuroscientist, served as president of King’s College London from 2014 to 2021 and was named president of KAUST last month. KAUST’s researchers depend on high-end chips from US companies such as Nvidia and AMD to train and run powerful artificial intelligence models and applications.
Both the US and China have vied for influence on the Arabian Peninsula, but the US holds a clear advantage: It is home to most of the world’s top chip and AI companies, and it enforces strict export controls on US companies shipping to China or to intermediaries.
Byrne is following the lead of others in the kingdom. In May, the chief executive of Alat, a fund backed by the Saudi Public Investment Fund, also said that if forced to choose between the US and China, the fund would divest from China.
At a global AI summit in Riyadh last month, the Saudi Data and Artificial Intelligence Authority announced a deal to buy 5,000 Nvidia graphics chips to help develop an Arabic large language model, pending US government approval. The future of Saudi tech depends on the US, and the government and its most important institutions are signaling that while they don’t want to choose sides, it is clear who would win if they did.
There are 21 days until Election Day in the United States, and voters in numerous states have already begun early voting. So far, artificial intelligence has had minimal effect on the election, though the technology has reared its head a few times.
During this US election cycle, generative AI has been used in an RNC ad, a fraudulent Joe Biden robocall sent to New Hampshire voters, and deepfake photos of Taylor Swift endorsing Donald Trump.
Microsoft and OpenAI say they’ve disrupted foreign influence campaigns from China, Iran, and Russia seeking to sow discord in the US, including around hot-button political issues such as Israel’s war in Gaza.
While malicious actors haven’t yet used AI tools in very novel ways, the technology has made it easier, quicker, and cheaper to generate online propaganda and disseminate it over social media. In Indonesia, for example, notorious Defense Minister Prabowo Subianto used a chubby-cheeked, friendly AI-generated avatar to appeal to voters in the presidential election. In Pakistan, Imran Khan used AI voice cloning to spread his political message and support his party’s candidates from prison.
Now, with the US election looming, there’s a very real possibility of a more malicious and effective AI campaign targeting Americans. So GZERO AI asked experts what they’re most concerned about in the run-up to Nov. 5. Their overriding concern: misinformation, how AI is used to create and distribute it, and how it could affect whether and how people vote.
Valerie Wirtschafter, a fellow in the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, for example, said she was concerned by the onslaught of generative AI images circulating on social media in the aftermath of Hurricane Helene — including ones alleging that the Biden administration wasn’t doing enough to support residents affected by the storm.
“These images were clearly AI, and when pointed out as such, the response was a simple shrug – that the images resonated because they ‘felt accurate’ anyways,” she said.
There hasn’t been any new federal legislation in the US on AI use around elections, and the Federal Election Commission recently chose to forgo new rulemaking on the matter ahead of November. OpenAI, Anthropic, and most other major AI companies have self-regulated, instituting rules that bar users from generating election-related materials, such as images of presidential candidates, and many will refuse to provide voting information as well. That said, many of these rules are porous.
Wirtschafter said she’s most concerned about AI-generated media, particularly audio, being used to affect not how people vote but whether people vote. AI-generated audio, she said, could be used to “try to prevent a targeted but vital subset of the population from voting” or “sow confusion about where and how to vote.”
“While swing states have prepared for this possibility, it is still such a difficult task, and AI-generated content is most impactful at the local and highly targeted level.”
Scott Bade, a senior analyst in Eurasia Group’s geo-technology practice, said he’s concerned not only by the use of generative AI in the lead-up to the election but also by how politicians might invoke the technology to help cast doubt on things that are, in fact, true.
Like Wirtschafter, Bade said he’s most worried about anything that “muddies the waters and creates fear and confusion that can suppress votes on election day.”
But the threat won’t end after Americans go to the polls. The 2020 election and its aftermath showed how conspiracy theories can abound even without generative AI.
Politicians, especially those aligned with Trump, falsely claimed there was widespread voter fraud. Bade warned that AI might be used to affect how voters feel about the “sanctity of the ballot.”
So, what should we do about it? In the run-up to the election, keep an eye on the source of the materials you’re viewing, check government websites for reliable voting information, and take everything you hear or see in this age of AI with a grain of salt, even if it confirms your prior assumptions.
“This type of content can be obviously AI-generated but still ‘feel’ correct,” Wirtschafter said.
Hard Numbers: Viruses everywhere, TikTok content moderation cuts, Nevada’s “at-risk” student saga, TSMC on the rise
70,500: Researchers led by a University of Toronto team used artificial intelligence to identify 70,500 new viruses through metagenomics, in which scientists sequence entire environments from individual samples. The team used a machine-learning tool developed by Meta to find the new viruses and predict their protein structures.
700: TikTok reportedly cut 700 jobs, many of them in Malaysia, and will shift much of its content moderation work to artificial intelligence. The cuts affect just 0.6% of the social media company’s 110,000-person global workforce. Social media companies have long used a mix of human and automated systems to monitor user-generated content posted on their platforms.
200,000: Last year, the state of Nevada used an AI system to help it better identify which students are “at risk” of falling behind academically and socially. But the AI, run by an outside contractor, set a much higher bar for that determination, weighing factors far beyond income level, formerly the most important metric. The number of “at-risk” students plummeted by about 200,000, leading the state to cut funding to many districts in need.
40: Stock analysts expect Taiwan Semiconductor Manufacturing Company to report a 40% profit increase when the chip fabrication giant releases its third-quarter earnings on Oct. 17. TSMC’s stock has already climbed 77% this year on surging demand from chip designers hungry to sell their products to AI companies.
Bentley Hensel, a longshot candidate for the US House of Representatives in Virginia, wants his opponent to debate him. His rival is Rep. Don Beyer, who has spoken to GZERO AI in the past about going to graduate school in his 70s to study machine learning. (Read our April interview with Beyer here).
But Hensel, a software engineer running as an independent, told Reuters he was frustrated that Beyer wouldn’t appear in any debates between now and Election Day, though the congressman did appear in a September forum with other candidates. So Hensel took a unique approach to getting “Beyer” to debate him: He created DonBot, an artificial intelligence chatbot trained to represent Beyer in a debate, without Beyer’s permission, of course.
The debate will stream online on Oct. 17 and will feature Hensel, fellow independent David Kennedy, and DonBot. Representatives for Beyer did not respond to a request for comment from GZERO AI but told Reuters that the congressman still has no plans to participate in the October debate.
The US semiconductor designer AMD launched a new chip on Oct. 10. The Instinct MI325X is meant to compete with the upcoming Blackwell line of chips from market leader Nvidia.
Graphics processing chips from Nvidia, AMD, and Intel have been the lifeblood of the artificial intelligence boom, allowing the technology’s developers to train their powerful models and deploy them worldwide to users. Major tech companies have clamored to buy up valuable chips or pay to access large data centers full of them remotely through the cloud.
Lisa Su, CEO of AMD, claimed that the market for AI data centers will balloon by 60% a year and hit $500 billion by 2028. Still, investors weren’t convinced by what AMD showcased: The company’s stock fell 4% in trading Thursday, perhaps because AMD didn’t announce any big new deals with customers, though it bounced back 2% on Friday.
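For scale, Su’s two figures imply a market of just under $50 billion today. Here’s a minimal sketch of that compound-growth arithmetic, assuming a 2023 baseline (the baseline year is our assumption, not AMD’s):

```python
# Back out the market size implied by a 60% annual growth rate
# reaching $500 billion by 2028 (the 2023 baseline is an assumption).
TARGET_2028 = 500e9   # projected market size, in dollars
GROWTH = 0.60         # claimed annual growth rate
YEARS = 5             # 2023 -> 2028

implied_2023 = TARGET_2028 / (1 + GROWTH) ** YEARS
print(f"Implied 2023 market: ${implied_2023 / 1e9:.1f}B")  # ~$47.7B

# Forward check: compounding that baseline at 60% per year lands on $500B.
size = implied_2023
for year in range(2024, 2029):
    size *= 1 + GROWTH
    print(f"{year}: ${size / 1e9:.0f}B")
```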
AMD’s new chips feature increased memory and a new architecture that the company promises will improve performance relative to prior models. Nvidia is expected to release its much-anticipated Blackwell chips by early next year, as the rivalry between the two most important AI chip designers in the world only gets hotter.
US President Joe Biden on Monday signed an expansive executive order on artificial intelligence, directing a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training, in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan called the “Hiroshima AI Process.” It also came mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run tomorrow and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders have passed on attending Sunak’s summit, including Biden and Emmanuel Macron, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI, particularly those posed by large language models trained on huge amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation, and Technology touted the country’s “strong credentials” in AI: a sector employing 50,000 people, contributing £3.7 billion to the domestic economy, and housing key firms like DeepMind (now owned by Google). The government is also investing £100 million in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
Artificial intelligence researchers won big at the Nobel Prizes this year, taking home not one but two of the esteemed international awards.
First, John Hopfield and Geoffrey Hinton won the Nobel Prize in physics for developing artificial neural networks, the machine-learning technique that has powered the current AI boom by replicating how the human brain processes information. Then, the Nobel committee awarded the chemistry prize to University of Washington biochemist David Baker as well as Google DeepMind’s Demis Hassabis and John Jumper. (Hassabis is DeepMind’s co-founder and CEO.) The trio was honored for developing techniques to use artificial intelligence to model and design proteins.
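For a sense of the idea behind the physics prize, here is a minimal sketch (ours, not the laureates’ code) of a Hopfield-style associative memory in Python: patterns are stored in a weight matrix with a Hebbian rule, and a corrupted input is iteratively pulled back toward the nearest stored memory, loosely mimicking how the brain recalls a whole pattern from a fragment.

```python
import numpy as np

def store(patterns):
    """Hebbian learning: build a weight matrix from +/-1 patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)       # strengthen links between co-active units
    np.fill_diagonal(w, 0)        # no self-connections
    return w / n

def recall(w, state, steps=10):
    """Repeatedly update units until the network settles on a memory."""
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1     # break ties deterministically
    return state

# Store one 8-unit pattern, then recover it from a noisy copy.
memory = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
w = store(memory)
noisy = memory[0].copy()
noisy[:2] *= -1                   # flip two bits to corrupt the input
print(recall(w, noisy))           # prints the original stored pattern
```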
The Nobel wins come with cash prizes (11 million Swedish crowns, or $1.06 million), but also international recognition that could fuel further research and funding in artificial intelligence. Academic papers on innovative subjects tend to increase after the Nobel committee honors a discovery, Wired noted, as seen with the 2010 award for the isolation of the carbon structure graphene.
Of course, AI is already the subject of a global industrial boom, but the Nobel prizes are celebrations of what AI can do at its best — not a warning of how it can go wrong. Hinton, for his part, issued a warning after winning the physics prize. AI, he told CNN in an interview, “will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”