How medical technology will transform human life - Siddhartha Mukherjee
On GZERO World, Ian Bremmer and Siddhartha Mukherjee explore the many ways medical technology will transform our lives and help humans surpass physical and mental limitations. Mukherjee, a cancer physician and biologist, believes artificial intelligence will help create whole categories of new medicines. AI can generate molecules with properties we didn’t even know existed, which has tantalizing implications for diseases currently thought to be incurable. Newly developed treatments for conditions like spinal muscular atrophy, once almost invariably fatal but now treatable with gene therapy, are just the beginning of what could be possible using tools like CRISPR gene editing or bionic prosthetics.
Mukherjee envisions a future where people who are paralyzed by disease or stroke can walk again, where people with speech impairments can talk to their loved ones, and where prosthetics become much more effective and integrated into our bodies. And beyond curing ailments, biotechnology can help improve the lives of healthy people, optimizing things like brain power and energy.
“We will become smarter, we will become hopefully more disease resistant, we will have larger memory banks,” Mukherjee explains. “And we will have the capacity to interact in the virtual sphere in a way we cannot just simply interact in the real sphere.”
Watch the full interview: From CRISPR to cloning: The science of new humans
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
Siddhartha Mukherjee: CRISPR, AI, and cloning could transform the human race
Technologies like CRISPR gene editing, synthetic biology, bionics integrated with AI, and cloning will create "new humans," says Dr. Siddhartha Mukherjee.
On GZERO World, Ian Bremmer sits down with the cancer physician and biologist to discuss recent groundbreaking developments in medical technology that are helping to improve the human condition. Mukherjee points to four tools that have sped up our understanding of how the human body works: gene editing with CRISPR, AI-powered prosthetics, cloning, and synthetic biology. CRISPR allows scientists to make precise alterations to the genome, while synthetic biology makes it possible to compose a genome much the way a programmer writes computer code.
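To make the “genome as code” analogy concrete, here is a toy Python sketch, not real bioinformatics: the genome is modeled as a string of bases, and a CRISPR-style edit as a find-and-replace at a guide-matched site (all sequences here are invented for illustration).

```python
# Toy illustration of the "genome as code" analogy: a genome modeled as a
# string of bases, and a CRISPR-style edit as find-and-replace at a target site.
genome = "ATGGTACCTTGACGGATCCTTAGCA"

def crispr_edit(genome: str, guide: str, replacement: str) -> str:
    """Locate the guide sequence and swap in a corrected sequence."""
    site = genome.find(guide)  # Cas9 finds its target via the guide RNA
    if site == -1:
        raise ValueError("target sequence not found")
    return genome[:site] + replacement + genome[site + len(guide):]

# Replace a "faulty" stretch with a corrected one (sequences are invented).
edited = crispr_edit(genome, guide="TTGACG", replacement="TTCACG")
print(edited)
```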
“That technology is groundbreaking, and it really shook our worlds because I hadn’t expected it,” Mukherjee says.
Mukherjee also talks about bionic prosthetics that extend our hands, brains, and other body parts with artificial intelligence. Learning algorithms let prosthetics such as neural implants adapt to each body’s specific environment, making them markedly more effective. The last tool Mukherjee highlights is cloning, a technology that’s been around for decades but has recently become much faster and easier. Right now, these four technologies sit in separate silos. In the near future, however, some combination of them will be applied to real individuals, profoundly reshaping medicine and biological science and leading to what Mukherjee calls “the new human.”
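As a loose sketch of the kind of adaptation described above, the snippet below runs a least-mean-squares update that tunes decoder weights to synthetic “neural signals.” Real neural implants use far more sophisticated models, so treat this purely as an illustration of online adaptation to an individual user.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "neural signals": 4 channels, plus an intended movement that is
# a fixed (unknown to the decoder) linear function of those channels.
true_w = np.array([0.5, -1.2, 0.3, 0.8])
w = np.zeros(4)   # decoder weights, learned online
lr = 0.05         # learning rate

for step in range(2000):
    x = rng.normal(size=4)         # one sample of neural activity
    target = true_w @ x            # the movement the user intended
    pred = w @ x                   # the decoder's current guess
    w += lr * (target - pred) * x  # LMS update: adapt toward this user

print(np.round(w, 2))  # converges toward true_w as the decoder adapts
```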
Watch the full interview: From CRISPR to cloning: The science of new humans
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
Photo illustration showing the DALL-E logo on a smartphone with an Artificial intelligence chip and symbol in the background.
Hard Numbers: Profitable prompts, Happy birthday ChatGPT, AI goes superhuman, Office chatbots, Self-dealing at OpenAI, Saying Oui to Mistral
$200,000: Want an image of a dog? DALL-E could spit out any breed. Want an Australian shepherd with a blue merle coat and heterochromia in front of a backdrop of lush, green hills? Now you’re starting to write like a prompt engineer, and that could be lucrative. Companies are paying up to $200,000 for full-time AI “prompt engineering” roles, placing a premium on this newfangled skill. It's all about descriptive fine-tuning of language to get desired results.
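A minimal sketch of what that fine-tuning looks like in practice, assuming the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment; the engineered prompt, not the code, does the heavy lifting.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "a dog"
engineered_prompt = (
    "An Australian shepherd with a blue merle coat and heterochromia, "
    "standing in front of lush, rolling green hills, soft golden-hour light"
)

# The descriptive prompt is what steers the model toward the desired result.
result = client.images.generate(
    model="dall-e-3",
    prompt=engineered_prompt,
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```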
1: Can you believe it’s only been one year since ChatGPT launched? It all started when OpenAI CEO Sam Altman tweeted, “today we launched ChatGPT. Try talking with it here.” Since then, the chatbot has claimed hundreds of millions of users.
56: Skynet, anyone? No thanks, say 56% of Americans, who are concerned about AI gaining “superhuman capabilities” and support policies to prevent it, according to a new poll by the AI Policy Institute.
$51 million: In 2019, OpenAI reportedly agreed to buy $51 million worth of chips from Rain, a startup whose “neuromorphic” chips are designed to mirror the activity of the human brain. Why is this making news now? According to Wired, OpenAI’s Sam Altman personally invested $1 million in the company.
$20: You work at a big company and need help sifting through sprawling databases for a single piece of information. Enter AI. Amazon’s new chatbot, called Q, costs $20 a month and aims to help with tasks like “summarizing strategy documents, filling out internal support tickets, and answering questions about company policy.” It’s Amazon’s answer to Microsoft’s work chatbot, Copilot, released in September.
$2 billion: French AI startup Mistral is about to close a new funding round that would value it at $2 billion. The round, worth $487 million, includes investment from venture capital giant Andreessen Horowitz, along with chipmaker NVIDIA and business software firm Salesforce. Mistral, founded less than a year ago, boasts an open-source large language model that it hopes will rival OpenAI’s (ironically) closed-source model, GPT-4. What’s the difference? Open-source LLMs publish their model weights and code so they can be studied and third-party developers can build on them.
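That openness is concrete: because Mistral publishes its weights, anyone can pull and run the model locally. A minimal sketch using the Hugging Face transformers library (the download is tens of gigabytes and realistically needs a GPU; device_map="auto" also requires the accelerate package).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Openly published weights mean anyone can download and run the model.
model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Open-source models let developers", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```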
Federal Chancellor Olaf Scholz on stage at the Digital Summit 2023 in November.
Wie sagt man: Not cheap as chips?
Germany’s government has committed $10 billion in subsidies for Intel, which is building factories in Magdeburg, and $5 billion for a new fabrication plant being built by Taiwanese giant TSMC together with Dutch company NXP and German firms Bosch and Infineon. Chancellor Olaf Scholz even noted in July how impressive it was that “so many German and international companies are choosing Germany for the expansion of their semiconductor production.”
But last month, a German court ruled that Scholz’s government exceeded its constitutional powers when it moved $65 billion in unused funds earmarked for the COVID-19 pandemic into a “climate and transformation” fund. The bad news for chipmakers? That was the money earmarked for their subsidies.
Germany wants to position itself as particularly friendly to industry, not only by courting multinational tech corporations willing to build manufacturing plants, but also — in a recent shock move — by throwing a wrench in EU plans to heavily regulate large language models like OpenAI’s GPT-4.
Trouble is, to run the high-powered AI models, developers need high-powered chips – whatever the cost.
Female doctor in hospital setting.
Slapping nutrition labels on AI for your health
At a congressional hearing last week, Rep. Cathy McMorris Rodgers (R-WA) noted how AI can help detect deadly diseases early, improve medical imaging, and clear cumbersome paperwork from doctors’ desks. But she also expressed concern that it could exacerbate bias and discrimination in healthcare.
Patients need to know who, or what, is behind their healthcare determinations and treatment plans. That requires transparency, a key principle of the White House’s Blueprint for an AI Bill of Rights, released last year.
A new rule, first proposed in April by HHS’s health information technology office, would require developers to publish “nutrition label”-style information about how their AI healthcare apps were trained and how they should and shouldn’t be used. The rule, which could be finalized before January, aims to improve both transparency and accountability.
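The rule's exact disclosure schema isn't spelled out here, but a “nutrition label” for a clinical model might carry fields like these. The sketch below is hypothetical: the field names and the example model are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical "nutrition label" for a clinical AI model. Field names are
# illustrative, not the actual HHS disclosure schema.
@dataclass
class ModelNutritionLabel:
    name: str
    intended_use: str
    out_of_scope_uses: List[str]
    training_data_summary: str       # where the training data came from
    demographic_coverage: List[str]  # populations represented in training
    known_limitations: List[str]     # failure modes clinicians should know

label = ModelNutritionLabel(
    name="sepsis-risk-v2",
    intended_use="Flag adult inpatients at elevated sepsis risk for clinician review",
    out_of_scope_uses=["pediatric patients", "automated treatment decisions"],
    training_data_summary="De-identified EHR records from 12 US hospitals, 2015-2021",
    demographic_coverage=["adults 18-90", "US inpatient population"],
    known_limitations=["not validated outside the US", "lower sensitivity for rare conditions"],
)
print(label.intended_use)
```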
Smartphone with a displayed Russian flag with the word "Cyberattack" and binary codes over it is placed on a computer motherboard in this illustration.
NATO’s virtual battlefield misses AI
The world’s most powerful military bloc held cyber defense exercises last week, simulating cyberattacks against power grids and critical infrastructure. NATO rightly insists these exercises are crucial because cyberattacks are standard tools of modern warfare. Russia regularly engages in such attacks, for example, to threaten Ukraine’s power supply, and the US and Israel recently issued a joint warning of Iranian-linked cyberattacks on US-based water systems.
A whopping 120 countries have been hit by cyberattacks in the past year alone — and nearly half of those involved NATO members. Looking forward, the advent of generative AI could make even the simplest cyberattacks more potent. “Cybercriminals and nation states are using AI to refine the language they use in phishing attacks or the imagery in influence operations,” says Microsoft security chief Tom Burt.
Yet, in its latest wargames, NATO's preparations for cyberattacks involving AI were nowhere to be found. The alliance says AI will be added to the training next year.
“The most acute change we will see in the cyber domain will be the use of AI both in attacking but also in defending our networks,” said David van Weel, NATO’s Assistant Secretary General for Emerging Security Challenges. He noted that the bloc will also update its 2021 AI strategy to include generative AI next year.
We can’t help but wonder whether these changes will be too little, too late.
What country will win the AI race?
Art: Courtesy of Midjourney
Savvy startups, tech giants, and research labs woo the best engineers and financing to fuel technological breakthroughs. But the battle for AI supremacy is much bigger than the industry itself – it's a global contest, pitting nations against each other.
Many of the world’s most powerful governments are flexing their muscles to build a competitive edge by cultivating robust domestic AI sectors. Don’t be fooled into thinking that recent efforts to legislatively rein in AI models and the companies behind them are signs of governments hitting the brakes – it’s quite the opposite.
Why, you ask? Because it’s a boon for any country to attract top talent and spur economic activity, says Valerie Wirtschafter, a fellow at the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative. Hosting top AI companies also “inevitably catapults host countries to the forefront of conversations around standards and governance, both domestically and internationally.”
Beyond that, a thriving AI sector can do wonders for national security. That’s true not only for military and intelligence applications or research-and-development, but also for ensuring that standards of development “do not pose an inherent risk and are developed with a certain set of values in mind,” Wirtschafter says.
Since Google, Microsoft, and OpenAI call America home, Washington holds the ultimate power play: It can more directly steer these tech giants and set the tone for worldwide AI regulation.
Such control sets governments an inch closer to technological sovereignty, says Nick Reiners, a senior analyst for geotechnology at Eurasia Group: “Having these companies in your country means you’re not dependent on another country.”
Governments can boost their AI sectors in numerous ways — through subsidies, research funding, infrastructure investment, and government contracts.
“Defense spending and government R&D has always been a big stimulus for civilian and commercial research and product development,” says Scott Wallsten, president and senior fellow at the Technology Policy Institute, a Washington-based think tank. “You can be sure the DOD is working on these tools for their own purposes because they’re in an arms race with potential adversaries.”
Who’s ahead? The US and China are way out in front. “While in the US, these advances have been primarily driven by the private sector, in China they have been shaped more by government support,” says Wirtschafter. But she notes that the US CHIPS Act is a sign that America is trying to boost its strategic advantage.
Stanford University’s annual AI Index report found the US and China leading in many different ways, including private investment and newly funded AI firms. (The UK, EU, Israel, India, and Canada also rank highly in many of the report’s metrics.)
While it’s unlikely that any other country will seriously challenge the US and China, and the US holds the overall lead, Wirtschafter notes that China is particularly strong in facial recognition technology.
Could governments get possessive? Yep, this is a high-stakes game, and Washington and Beijing, among others, could increasingly opt for protectionist measures to keep powerful AI models in their grasp.
The US is already doing this with chips, the underlying technology for AI. Washington exerts strict export controls over semiconductor-related equipment, lest it fall into enemy hands – meaning China. It has also blocked corporate takeovers that could shift the balance of power in chips, including Singapore-based Broadcom’s attempted acquisition of US chipmaker Qualcomm in 2018. And a new report indicates the Biden administration forced a Saudi firm to divest from a US chipmaker linked to OpenAI CEO Sam Altman.
If the US and other governments determine that protecting powerful AI models is key to their national security, they could take similarly drastic measures to keep them domestic — or at least in the hands of allies. Just last week, Bloomberg reported that the London-based AI startup Stability AI, known for its Stable Diffusion image generator, is exploring a sale amid internal turmoil. The company reportedly reached out to two startups — the Canadian company Cohere and the US-based Jasper — to gauge their interest in a sale. There’s no indication yet that regulators are worried, but the potential corporate shakeup comes as British politicians have been desperately trying to make the UK a friendly place for AI firms.
The last thing the UK wants is to get burned again – like it did with DeepMind and Arm, two promising British tech companies that were acquired by US and Japanese firms in 2014 and 2016, respectively. In a recent interview with the BBC, Ian Hogarth, who is leading the UK’s AI taskforce, spoke of the need to boost European technology companies instead of allowing them to be sold. “We've had some great tech companies and some of them got bought early, you know – Skype got bought by eBay, DeepMind got bought by Google,” Hogarth said. “I think really our ecosystem needs to rise to the next level of the challenge.”
British lawmakers passed the National Security and Investment Act in 2022, granting the government new national-security powers to intervene in the foreign acquisition of domestic companies. “The pace of change has been really significant since that period,” Wirtschafter said of the DeepMind acquisition, “and the desire to maintain a competitive national position in this space would be central to any potential sale.” The UK’s National AI Strategy, published in 2021, says that the government will “protect national security” and protect against “potentially hostile foreign investment.”
But ministers are now considering rolling back those new rules to appear more business-friendly. And that’s the central tension all AI-hungry countries face: They need to appear AI-friendly even as they try to regulate forcefully. AI supremacy is on the line.
Singapore sets an example on AI governance
Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, recorded at the Singapore Conference on Artificial Intelligence, she reviews how the Singaporean government is approaching the question of how to govern AI.
Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. There is a lot of AI activity going on here at a conference organized by the Singaporean government on how to govern AI, the key question (the million-dollar question, the billion-dollar question) on the agendas of politicians, whether in cities, countries, or multilateral organizations. What I like about the approach of the government here in Singapore is that it has brought together a group of experts from multiple disciplines and multiple countries to help it tackle two questions: What should we be asking ourselves? And how can experts inform what Singapore should do with regard to its AI policy? This listening mode of inviting experts first is a great approach, and hopefully more governments will do the same, because well-informed thinking is necessary, especially while there is so much going on already. Singapore is thinking very clearly and strategically about what its unique role can be in a world full of AI activity.
Speaking of the world full of AI activity: the EU will hold the last, or at least the last planned, negotiating round on the EU AI Act, where the most difficult points will have to come to the table. There are outstanding differences between member states and the European Parliament around national security uses of AI and the extent to which human rights protections will be covered, but also a critical discussion surfacing more and more around foundation models: whether they should be regulated, how they should be regulated, and how that can be done in a way that does not disadvantage European companies compared to, for example, US leaders in the generative AI space. So it's a pretty intense political fight, even though it looked like there was political consensus until about a month ago. But of course that is not unusual. Negotiations always have to tackle the most difficult points at the end, and that is where we are. It's a space to watch, and I wouldn't be surprised if an additional negotiating round were planned after the one this week.
Then there will be the first in-person meeting of the UN AI Advisory Body, of which I'm a member, and which I'm looking forward to. It will take place in New York City and will really be the first opportunity for all of us to get together and discuss, after the online working sessions and the flurry of activity that have already taken place since we were appointed roughly a month ago. So the UN is moving at breakneck speed this time, and hopefully that will lead to important questions and answers about the global governance of AI, the unique role of the United Nations, and the application of the UN Charter, international human rights, and international law at this critical moment for the global governance of artificial intelligence.