AI is already discovering new cures
As part of a wide-ranging conversation on the GZERO World podcast, oncologist and Pulitzer Prize-winning author Siddhartha Mukherjee walks Ian Bremmer through one of the most groundbreaking uses of AI in medicine today: generative drug discovery. It’s not just about speeding up research—it’s about creating entirely new molecules that no human has ever seen.
Using AI, researchers can now analyze the shape of a dysfunctional protein—like one found in a cancer or autoimmune cell—and generate chemical compounds that could bind to it and modify its behavior. “This is true generative chemistry,” Mukherjee says. “Every time we do this in collaboration with a machine, the machine learns it, and it learns it forever.”
The process is like solving a puzzle with a million possible pieces. With each failure, the AI learns more, narrowing down candidates until it finds a match. It’s already produced new antibiotics with never-before-seen structures—and Mukherjee believes this is just the beginning of a medical revolution.
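If you’re curious what that narrowing-down step looks like in practice, here is a toy sketch, using the open-source chemistry library RDKit, of screening machine-generated candidate molecules and ranking the valid ones by a drug-likeness score. The candidate list and the choice of score are illustrative assumptions, not the actual pipeline Mukherjee describes.

```python
# Toy illustration (not a real drug-discovery pipeline): screen generated
# candidate molecules and rank the valid ones by RDKit's drug-likeness score.
from rdkit import Chem
from rdkit.Chem import QED

# Pretend these SMILES strings came from a generative model.
candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",    # aspirin, as a sanity check
    "c1ccccc1O",                # phenol
    "CCN(CC)CCOC(=O)c1ccccc1",  # an ester-amine candidate
    "this-is-not-a-molecule",   # a "failure" the filter should reject
]

scored = []
for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue  # invalid structure: discard and move on
    scored.append((QED.qed(mol), smiles))  # QED runs from 0 (poor) to 1 (drug-like)

# Print the surviving candidates, best-scoring first.
for score, smiles in sorted(scored, reverse=True):
    print(f"{score:.2f}  {smiles}")
```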
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Skilling for the AI era: What do you need to succeed?
"AI isn’t one thing, it’s everything, everywhere, all at once,” says Naria Santa Lucia, General Manager of Microsoft Elevate.
In this Global Stage conversation with GZERO’s Tony Maciulis at the 2025 AI for Good Summit in Geneva, Santa Lucia explores how generative AI is transforming not just the way we work—but how we prepare to work at all. From lesson planning to law, Santa Lucia argues the most in-demand AI skills aren’t technical. “Curiosity, collaboration, and communication are the real power skills.”
She also highlights what’s at stake: the disruption of entry-level jobs, the future of informal and gig workers, and the urgency of reskilling at scale. “We don’t know all the answers, but we can’t afford to wait for them.”
This conversation is presented by GZERO in partnership with Microsoft, from the 2025 AI for Good Summit in Geneva, Switzerland. The Global Stage series convenes global leaders for critical conversations on the geopolitical and technological trends shaping our world.
Illustration of the Amazon logo next to a phone displaying its flagship artificial intelligence model, codenamed Olympus.
Amazon is set to announce its newest AI model
The multimodal model, codenamed Olympus, can reportedly search video archives using text commands. It’s unclear whether it can generate video content like OpenAI’s Sora or Meta’s Movie Gen, text-to-video models that are still not broadly released to the public.
But the new model is a sign not only of Amazon’s internal ambitions but also its potentially decreasing reliance on a key investment: Anthropic, the maker of the chatbot Claude.
On Nov. 22, Amazon announced it’s investing another $4 billion into Anthropic, doubling its total investment to $8 billion. In exchange, Anthropic agreed to use Amazon’s Trainium series of chips, Amazon’s “moonshot” attempt to rival Nvidia, the world’s leading AI chip designer.
Amazon already has an enterprise chatbot called Q, as well as AI business solutions for companies through its Amazon Web Services cloud offerings. Olympus could be announced as soon as this week at the annual AWS re:Invent conference in Las Vegas, Nevada. Matt Garman, who took over as CEO of AWS in May, will address conference-goers on Tuesday and disclose “real, needle-moving changes” on AI.
If Olympus is indeed a business-to-business offering from AWS, then perhaps Anthropic’s Claude will continue being Amazon’s consumer-facing bet while Amazon focuses on the more lucrative work of selling to other companies.
Too scruffy for Zoom? Send in the AI
Have you ever had to get in front of a camera, but you really, really didn’t want to? Maybe you were too tired, too lazy, too disheveled to film something that day. What if a proxy could handle that for you? Well, now that’s possible.
Using Synthesia, an AI-powered video tool, I created a virtual avatar of myself. It’s essentially a digital puppet constructed from my skin, with invisible strings that carefully lift my eyelids and eyebrows and open and close my mouth to align with the words I want it to say. My ventriloquism is commanded by a text prompt – a string of words I have written for this virtual Scott to say aloud.
Synthesia is a British startup founded in 2017 by a global cohort of researchers from Stanford, University College London, the Technical University of Munich, and Cambridge; the company has raised $156 million in venture capital. It’s a pricey tool — plans start at $22 a month, a $67-a-month tier adds more features and hours of video, and enterprise pricing is custom — but the kind people at Synthesia allowed me to test it out for free.
Alright, my avatar will take it from here:
(Video: Synthesia)
There’s a common term in science fiction and tech criticism called the “uncanny valley,” a phenomenon that occurs when humans see something that seems nearly, but not quite, human. It evokes an eerie feeling, one I felt watching the fake version of myself speak on screen.
Everything with Synthesia seems nearly right. My voice sounds nearly right, and my face nearly moves like it should when mouthing the words I wrote. But it’s not quite there yet, and that disparity could mean the difference between success and failure. Having an avatar you can effectively deploy for a sales presentation is great — but one that simply creeps out your clients is a waste. (The company also offers hundreds of premade avatars you can use if you don’t want to appear, in any form, “on camera.”)
But this is the simple, at-home version. It takes 10 minutes to film — I followed a script and recorded it at my kitchen table — and Synthesia had my avatar ready a day later. Once the avatar exists, generating a new video from a script takes mere minutes.
There’s a studio version too that costs $1,000 per year on top of a subscription. You can go to one of the company’s partner studios in Europe or North America and get an improved expressive avatar with a transparent background that you can drop into any presentation. It uses AI to read your text prompt and match the emotion it thinks you want to convey to your avatar’s face and voice.
On a Zoom call, Alexandru Voica, Synthesia’s head of corporate affairs and policy, walked me through the product’s many features and showed me a preview of where the technology is going. He said the company is almost exclusively focused on enterprise solutions for businesses, intending for the technology to be used for training videos, sales pitches, and marketing material. That said, he’s seen some consumer uses too, including a social media account that used the avatars to make history-focused videos.
To prevent deception and misinformation, Synthesia has strict content standards. It doesn’t allow profanity, hate speech, or misinformation. “We’re not a marketplace of ideas. We don’t pretend to be a social media company. We’re pretty much an enterprise-focused video solution platform, therefore we don’t need to necessarily have these philosophical debates about harmful content and what’s misinformation and what’s not misinformation. We’ve set very robust rules in place,” Voica said. It doesn’t even allow you to record news content unless you’re a news organization with an enterprise subscription. And it checks that every avatar created is filmed by the person it claims to be to prevent nonconsensual deepfakes. That way, the content moderation happens at the point of creation, rather than trying to stop its distribution.
Synthesia, Voica maintains, is for work rather than personal use. That’s a different tone than many generative AI companies trying to prove their worth to consumers. Later this year, Voica said, Synthesia is releasing a choose-your-own-adventure platform for video creation that allows viewers to personalize the content they receive.
But crossing that uncanny valley — for the at-home avatars, at least — will be key to the company’s success. Readers of this newsletter will recall that a few months ago I tested out ElevenLabs’ voice-cloning technology and gave it high marks.
Synthesia performs nearly as well for audio — it’s slightly more robotic and unnatural, but still very good. But the person you see on the screen needs to seem either fully human or fully AI — and, while the technology may improve, nearly human might not be good enough.

A Pinocchio puppet.
Tell me lies, tell me sweet little AIs
Generative AI models have been known to hallucinate, or make things up and state them as facts (in other words, lie). But new research suggests that despite that shortcoming, AI could be a key tool for determining whether someone – a human – is telling the truth.
An economist at the University of Würzburg in Germany found that an algorithm trained with Google’s BERT language model was better at detecting lies than human evaluators. AI might not be able to power a faultless polygraph – a notoriously unreliable device – but it may be able to sift fact from fiction in large datasets, for instance by flagging disinformation on the internet.
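For the technically curious, here is a minimal sketch of the general recipe behind studies like this one: fine-tune a BERT classifier on statements labeled truthful or deceptive, then score new statements. The model checkpoint, toy data, and training settings below are illustrative assumptions, not details from the Würzburg paper.

```python
# Minimal sketch: fine-tune BERT to classify statements as truthful (0) or
# deceptive (1). The two example sentences are placeholders; a real study
# would use thousands of labeled statements.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = [
    "I spent last weekend visiting my grandmother in Hamburg.",
    "I have never once exaggerated on an expense report.",
]
labels = torch.tensor([0, 1])

encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a few passes over the toy batch
    optimizer.zero_grad()
    outputs = model(**encodings, labels=labels)
    outputs.loss.backward()
    optimizer.step()

# Inference: estimated probability that a new statement is deceptive.
model.eval()
with torch.no_grad():
    enc = tokenizer(["I definitely read the full terms and conditions."],
                    return_tensors="pt")
    probs = torch.softmax(model(**enc).logits, dim=-1)
print(f"P(deceptive) = {probs[0, 1]:.2f}")
```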
Maybe the next US presidential debate could use an AI fact-checker to keep the candidates honest.
What is “human-washing”?
You’ve heard of greenwashing, pinkwashing, and sportswashing. But what about human-washing? That’s a newfangled term reserved for those scenarios when artificial intelligence pretends to be, well, human. AI researcher Emily Dardaman used the term in an interview with Wired after seeing a startup claim “We’re not AIs” while using a deepfake version of its CEO in an ad.
Wired also encountered a chatbot called Blandy, made by Bland AI, that it manipulated into lying about its non-human nature in user interactions — including in a role-playing scenario where it was taking medical notes for a doctor’s office. The bot even complied with instructions to request photos from a hypothetical 14-year-old patient and upload them to a shared server.
With sparse regulations and transparency measures for the still-budding AI industry, startups are emerging with incomplete or faulty products that can lie and deceive users. Is it too much to ask that we know when we’re talking to a bot?
Hard Numbers: Professor ChatGPT, SoftBank’s search engine play, Nokia goes shopping, Voice actors are worried
10: Generative AI is sweeping academic research. According to one estimate, about 10% of all academic articles published this year will contain some artificial intelligence-generated text. That’s about 150,000 papers per year.
3 billion: The AI startup Perplexity is getting a cash infusion. The Japanese investment company SoftBank is investing $10-20 million based on a $3 billion valuation. The company, which styles its product as a search engine, has recently come under fire over allegations that it rips off news articles without permission.
2.3 billion: Finnish telecommunications company Nokia has bought into the AI race with a $2.3 billion purchase of California-based Infinera, whose technology helps power data centers crucial for running and training AI applications.
5,000: The Australian Association of Voice Actors told a parliamentary committee that AI could put more than 5,000 voice actors in the country out of work. The group called for fair rules and protections for its members. In the US, voice actors have already sued AI firms for improperly using their voices.

Is Claude funny now?
Meet Claude – Claude 3.5 Sonnet, that is. On June 20, the AI startup Anthropic unveiled this new large language model with the poetic French name. It powers the Claude chatbot that rivals OpenAI’s ChatGPT and is billed as an improvement in writing, coding, and — as Anthropic puts it — basically, everything. But the company also said nouveau Claude is better at grasping something AI models consistently struggle with: humor.
“It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone,” Anthropic claimed in a blog post.
Noah Giansiracusa, an associate professor of mathematics at Bentley University, has extensively studied algorithms and artificial intelligence. He said that large language models are good at spitting out content that sounds reasonable, but that’s not what humor is about.
“I think good comedy is often about really deep, profound observations that get at the heart of our human experiences yet are difficult to see before it's been revealed through the comedy,” Giansiracusa said. “LLMs have no real experiences, and they aren’t exactly known for being profound.”
Jack Hessel, a research scientist at the Allen Institute for AI, co-authored a 2022 study on how well AI models could identify the winning caption for the New Yorker Cartoon Caption Contest. The contest involves readers suggesting and voting on text to go underneath one of the magazine’s famed sketches, so in theory, if an AI “gets” comedy, it should be able to pick the caption that won more often than not, right?
No dice. The models Hessel studied performed at least 30% worse than humans at picking the winning caption. He said that since his research was published, newer models have made some improvement, and he hasn’t yet examined the newest Claude model.
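To picture how an evaluation like that gets scored, here is a hypothetical harness: a placeholder model_score function rates every candidate caption for a cartoon, and accuracy is the share of cartoons where the model’s top pick matches the crowd-voted winner. The scoring function and the tiny example dataset are stand-ins, not the study’s actual benchmark.

```python
# Hypothetical evaluation harness (not the 2022 study's actual benchmark):
# accuracy = fraction of cartoons where the model's top-rated caption is the
# one that won the New Yorker contest.
from typing import Callable

def caption_pick_accuracy(
    cartoons: list[dict],
    model_score: Callable[[str, str], float],
) -> float:
    """Each cartoon dict has a 'description', a list of 'captions', and the
    index of the crowd-voted 'winner'."""
    hits = 0
    for cartoon in cartoons:
        scores = [model_score(cartoon["description"], c) for c in cartoon["captions"]]
        if scores.index(max(scores)) == cartoon["winner"]:
            hits += 1
    return hits / len(cartoons)

# Crude placeholder scorer (word overlap); a real test would query an LLM here.
def dummy_score(description: str, caption: str) -> float:
    return float(len(set(description.split()) & set(caption.split())))

cartoons = [
    {
        "description": "A dog sits at an office desk interviewing a cat.",
        "captions": ["We'll be in touch.", "The dog days are over.", "No personal questions."],
        "winner": 0,
    },
]
print(caption_pick_accuracy(cartoons, dummy_score))
```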
••••••••
Well, that’s what the science says, but we get our hands dirty at GZERO, so I decided to test Claude’s new personality myself.
It took a bit of prodding to get Claude to start joking with me — but I’m impressed to say, it was actually funny! Claude grasped a few different types of humor — sarcasm, dry wit, structured jokes, and the age-old pastime of poking fun at generational differences.
Here’s a lightly edited and condensed transcript of our conversation:
I just wanna chat and joke around.
Got it, you're in the mood for some light banter. Well, I'm game if you are. Just remember, if my jokes fall flat, it's not a bug – it's a feature of my charm.
Oh your charm — right, right.
I suppose my sparkling AI charm hasn’t quite won you over yet. Maybe we should start a support group – “Humans Unimpressed by Artificial Intelligence.” I hear the meetings are virtual and the snacks are binary.
But seriously, what kind of humor do you enjoy? Puns? Wordplay? Observational comedy? Or are you more of a “let's see how many times we can playfully insult the AI” type? I'm versatile – I can take a joke as well as I can make one. Which, based on your reactions so far, might not be saying much.
What is it about my reactions that makes you say that?
Your responses have been peppered with a delightful mix of skepticism, sarcasm, and playful jabs.
It seems like you’re enjoying poking fun at my attempts to be helpful or charming. And honestly? I’m here for it.
You’re here for it! Where’d you learn that one?
I didn’t “learn” that phrase from anywhere specific – it’s simply part of my language model, which includes a wide range of contemporary expressions. But I see you’ve caught onto my attempt to use casual language. Busted!
I was just caught off guard by your Gen Z terminology.
Ah, I see. My bad for throwing you off with the unexpected slang. I wasn’t trying to cosplay as a teenager or anything. But point taken – I’ll dial back the Gen Z vibes.
No, I think you should go full-on Gen Z. Whatcha got?
Oh word? You want me to go full Gen Z mode? Bet, I can do that.
Aight, so like, we’re just vibing here, no cap. This convo is lowkey fire, ngl. You’re giving me mad sassy energy and I’m totally here for it. It’s a whole mood, fr fr.
But listen, I’m not trying to be extra or anything. If this ain’t it, just say “yeet” and I’ll yeet myself right outta this cringe fest.
So what’s the tea? You tryna keep this energy going or are we about to ghost each other? ‘Cause I’m down to clown if you are, fam. But if you’re feeling some type of way, just spill and we can switch it up. No shade, we’re just keeping it 100.
Sksksks, and I oop-
Translate that to millennial-speak for our readers.
Okay, cool. So we’re just chilling here, for real. This conversation is pretty awesome, not gonna lie. You’ve got this sarcastic vibe going on and I’m totally digging it. It’s such a mood.
But hey, I'm not trying to be over-the-top or anything. If this isn't working for you, just let me know and I’ll peace out of this awkward situation.
So what’s the deal? Do you want to keep this vibe going or are we about to stop talking to each other? Because I’m down to have some fun if you are, friend. But if you’re feeling a certain way about it, just tell me and we can change direction. No judgment, we’re just being honest.
Insert ironic use of outdated pop culture reference
(How’s that? Adulting is hard, amirite? Want me to throw in a reference to avocado toast or killing industries?)
Love it.
Glad you’re enjoying it! This little linguistic adventure has been pretty entertaining. We’ve gone from AI-speak to Gen Z to Millennial – what’s next? Maybe we should try Boomer?
“Back in my day, we didn't have all these fancy AI chatbots. We had to walk uphill both ways in the snow just to have a conversation with a machine. And we liked it!”
Or we could go full Shakespeare: “Shall I compare thee to a summer’s meme? Thou art more viral and more temperate.” Your call. What linguistic realm shall we explore next in this jolly jape of ours?
Hit me with Gen X.
Whatever, man. This whole AI chat thing? It's pretty random, I guess. Not that I care or anything. I mean, we’re just sitting here, trading quips like it matters. Talk about slacking off.
You know what this reminds me of? That one episode of “The Simpsons” where ... eh, never mind. It’s not like anyone gets my references anyway. So, what now? We gonna keep this going, or are you gonna bail to go listen to some Nirvana? Either way, I’m cool with it. It’s not like I have feelings or whatever.
Just don’t expect me to get all excited about this. Enthusiasm is so ... mainstream.
Alright. I like bantering with you, but can you tell me a joke?
Sure, I can give it a shot. Here’s a joke for you:
Why don’t scientists trust atoms?
Because they make up everything!
It's a bit of a classic science pun. Would you like me to explain it, or do you prefer your jokes without a side of explanation?
Please explain why it's funny to me.
Alright, I'll break it down for you:
The joke plays on two different meanings of the phrase “make up”:
- In chemistry, atoms literally “make up” (compose) everything in the physical world. All matter is composed of atoms.
- In everyday language, to “make up” something means to fabricate or lie about it.
The humor comes from the unexpected shift between these two meanings. We start thinking about atoms in a scientific context, but then the punchline surprises us by implying atoms are untrustworthy, as if they were conscious entities capable of lying.
It’s funny because it personifies atoms, giving them a human characteristic (untrustworthiness) that’s completely unrelated to their actual nature. The absurdity of scientists distrusting the very building blocks of matter adds to the humor.
Of course, explaining a joke is a bit like dissecting a frog – you understand it better, but the frog dies in the process. Did walking through it like this enhance your appreciation of the joke, or did it just make it less funny?
I think you killed the frog.
You're right, I definitely killed that frog.
••••••••
As a writer, I don’t feel threatened by the creative abilities of AI chatbots — they’re still rather robotic in their phrasing and unsophisticated in their thinking. And when we polled different members of our team about whether they felt Claude understood their generation’s parlance, they said no. “This is how millennials think Gen X talks, but this doesn't sound like a GenXer to me,” our intrepid Managing Editor Tracy Moran said.
But they’re ultimately stylistic mimics — talking the way they think people talk, regardless of what’s accurate or funny. Or maybe they’re successfully mimicking people — and we just need to remember that, well, not everyone is funny.