How medical technology will transform human life - Siddhartha Mukherjee
On GZERO World, Ian Bremmer and Siddhartha Mukherjee explore the many ways medical technology will transform our lives and help humans surpass physical and mental limitations. Mukherjee, a cancer physician and biologist, believes artificial intelligence will help create whole categories of new medicines. AI can spit out molecules with properties we didn’t even know existed, which has tantalizing implications for diseases currently thought to be incurable. Recently developed treatments for diseases like spinal muscular atrophy, which was once almost invariably fatal but can now be treated with gene therapy, are just the beginning of what could be possible with tools like CRISPR gene editing and bionic prosthetics.
Mukherjee envisions a future where people who are paralyzed by disease or stroke can walk again, where people with speech impairments can talk to their loved ones, and where prosthetics become much more effective and integrated into our bodies. And beyond curing ailments, biotechnology can help improve the lives of healthy people, optimizing things like brain power and energy.
“We will become smarter, we will become hopefully more disease resistant, we will have larger memory banks,” Mukherjee explains. “And we will have the capacity to interact in the virtual sphere in a way we cannot just simply interact in the real sphere.”
Watch the full interview: From CRISPR to cloning: The science of new humans
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
Technologies like CRISPR gene editing, synthetic biology, bionics integrated with AI, and cloning will create "new humans," says Dr. Siddhartha Mukherjee.
On GZERO World, Ian Bremmer sits down with the cancer physician and biologist to discuss some of the recent groundbreaking developments in medical technology that are helping to improve the human condition. Mukherjee points to four tools that have sped up our understanding of how the human body works: gene editing with CRISPR, AI-powered prosthetics, cloning, and synthetic biology. CRISPR allows humans to make precise alterations to the genome, while synthetic biology makes it possible to write a genome much as one would write computer code.
“That technology is groundbreaking, and it really shook our worlds because I hadn’t expected it,” Mukherjee says.
Mukherjee also discusses bionic prosthetics that use artificial intelligence to extend our hands, brains, and other body parts. AI learning algorithms allow prosthetics such as neural implants to adapt to each body’s specific environment, making them more efficient and effective. The last tool Mukherjee highlights is cloning, a technology that’s been around for decades but has recently become much faster and easier. Right now, these four technologies sit in separate silos. In the near future, however, some combination of these tools will be applied to real individuals, which will profoundly reshape medicine and biological science and lead to what Mukherjee calls “the new human.”
Watch the full interview: From CRISPR to cloning: The science of new humans
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
Hard Numbers: Profitable prompts, Happy birthday ChatGPT, AI goes superhuman, Office chatbots, Self-dealing at OpenAI, Saying Oui to Mistral
Photo illustration showing the DALL-E logo on a smartphone with an artificial intelligence chip and symbol in the background.
$200,000: Want an image of a dog? DALL-E could spit out any breed. Want an Australian shepherd with a blue merle coat and heterochromia in front of a backdrop of lush, green hills? Now you’re starting to write like a prompt engineer, and that could be lucrative. Companies are paying up to $200,000 for full-time AI “prompt engineering” roles, placing a premium on this newfangled skill. It’s all about fine-tuning descriptive language to get the results you want.
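As a rough illustration (not from the article), the sketch below shows the difference between a vague prompt and an “engineered” one sent to an image model. It assumes the OpenAI Python SDK, an API key in the environment, and the DALL-E 3 endpoint; the prompt text and settings are illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# A vague prompt leaves almost every visual decision to the model.
vague_prompt = "an image of a dog"

# A prompt engineer spells out breed, coat, eyes, setting, and style.
engineered_prompt = (
    "An Australian shepherd with a blue merle coat and heterochromia, "
    "standing in front of lush, green rolling hills, soft morning light, "
    "photorealistic, shallow depth of field"
)

# Generate one image from the detailed prompt (DALL-E 3 accepts n=1 only).
result = client.images.generate(
    model="dall-e-3",
    prompt=engineered_prompt,
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```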
1: Can you believe it’s only been one year since ChatGPT launched? It all started when OpenAI CEO Sam Altman tweeted, “today we launched ChatGPT. Try talking with it here.” Since then, the chatbot has claimed hundreds of millions of users.
56: Skynet, anyone? No thanks, say 56% of Americans, who are concerned about AI gaining “superhuman capabilities” and support policies to prevent it, according to a new poll by the AI Policy Institute.
$51 million: In 2019, OpenAI reportedly agreed to buy $51 million worth of chips from Rain, a startup making “neuromorphic” chips designed to mirror the activity of the human brain. Why is this making news now? According to Wired, OpenAI’s Sam Altman personally invested $1 million in the company.
$20: You work at a big company and need help sifting through sprawling databases for a single piece of information. Enter AI. Amazon’s new chatbot, called Q, costs $20 a month and aims to help with tasks like “summarizing strategy documents, filling out internal support tickets, and answering questions about company policy.” It’s Amazon’s answer to Microsoft’s work chatbot, Copilot, released in September.
$2 billion: French AI startup Mistral is about to close a new funding round that would value it at $2 billion. The new round, worth $487 million, includes investment from venture capital giant Andreessen Horowitz, along with chipmaker NVIDIA and the business software firm Salesforce. Mistral, founded less than a year ago, boasts an open-source large language model that it hopes will rival OpenAI’s (ironically) closed-source model, GPT-4. What’s the difference? Open-source LLMs publish their code and model weights so they can be studied and built upon by third-party developers.
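To make that difference concrete, here is a minimal sketch of what “building on” an open model can look like in practice: loading Mistral’s openly released 7B base model with the Hugging Face transformers library. The model ID, prompt, and generation settings are examples for this sketch, not details from the article, and running it assumes a machine with enough memory.

```python
# Minimal sketch: running an openly released model locally with Hugging Face transformers.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # Mistral's publicly released base model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open-source language models let developers"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```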
Federal Chancellor Olaf Scholz on stage at the Digital Summit 2023 in November.
Germany’s government committed $10 billion in subsidies for Intel, which is building factories in Magdeburg, and $5 billion for a new fabrication plant being built by Taiwanese giant TSMC together with Dutch company NXP and German firms Bosch and Infineon. German Chancellor Olaf Scholz even noted in July how impressive it was that “so many German and international companies are choosing Germany for the expansion of their semiconductor production.”
But last month, a German court ruled that Scholz’s government violated the constitution when it moved $65 billion in unused funds earmarked for the COVID-19 pandemic into the “climate and transformation” fund. The bad news for chipmakers? That was the money set aside for their subsidies.
Germany wants to position itself as particularly friendly to industry, not only courting multinational tech corporations willing to build manufacturing plants but also, in a recent shock move, throwing a wrench into EU plans to heavily regulate large language models like OpenAI’s GPT-4.
Trouble is, to run the high-powered AI models, developers need high-powered chips – whatever the cost.
Female doctor in hospital setting.
At a congressional hearing last week, Rep. Cathy McMorris Rodgers (R-WA) noted how AI can help detect deadly diseases early, improve medical imaging, and clear cumbersome paperwork from doctors’ desks. But she also expressed concern that it could exacerbate bias and discrimination in healthcare.
Patients need to know who, or what, is behind their healthcare determinations and treatment plans. This requires transparency, a key principle of the White House’s Blueprint for an AI Bill of Rights, released last year.
The new rule, first proposed in April by the HHS’s health information technology office, would require developers to publish information about how AI healthcare apps were trained and how they should and shouldn’t be used. The rule, which could be finalized before January, aims to improve both transparency and accountability.
President Joe Biden signs an executive order about artificial intelligence as Vice President Kamala Harris looks on at the White House on Oct. 30, 2023.
US President Joe Biden on Monday signed an expansive executive order on artificial intelligence, directing a bevy of government agencies to set new rules and standards for developers around safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training, in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan called the “Hiroshima AI Process.” It also comes mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run Wednesday and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders, including Biden and French President Emmanuel Macron, have passed on attending Sunak’s summit, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI, particularly those posed by large language models trained on huge amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than in immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation and Technology touted the country’s “strong credentials” in AI: an industry that employs 50,000 people, contributes £3.7 billion to the domestic economy, and houses key firms like DeepMind (now owned by Google), alongside £100 million in government investment in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
Participants enter the Dubai Exhibition Centre during COP28, the UN Climate Change Conference.
AI is on the lips of climate-policy negotiators gathered for the United Nations’ COP28 conference in Dubai, and for good reason: it presents a high-risk but potentially high-reward scenario.
The upside: AI has the potential to supercharge efforts to find real climate solutions. For example, scientists can send AI-powered robots to collect data in the Arctic and other challenging environs, and the technology can also be used to improve forecasting for extreme weather and climate-related disasters. On an even more basic level, it can be used to maximize the efficiency of all kinds of systems and reduce their carbon footprint.
But there’s a big catch: AI is an energy guzzler. One analysis found that AI systems worldwide could consume 85 to 134 terawatt-hours per year, roughly the annual electricity diet of Argentina or the Netherlands, or about half a percent of the world’s electricity consumption. (The analysis is based on sales of popular servers from US chipmaker NVIDIA, which supplies much of the AI market.)
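A quick back-of-the-envelope check of that “half a percent” figure, using an assumed round number for global electricity consumption rather than anything from the cited analysis:

```python
# Rough sanity check of the "half a percent" claim.
ai_demand_twh = (85, 134)        # projected AI electricity use, TWh per year (from the cited analysis)
world_electricity_twh = 25_000   # assumed global electricity consumption, TWh per year (approximate)

for twh in ai_demand_twh:
    share = twh / world_electricity_twh
    print(f"{twh} TWh is about {share:.2%} of global electricity use")
# Prints roughly 0.34% and 0.54% -- in line with "half a percent" at the top of the range.
```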
At COP28, government and industry leaders made bold announcements. Boston Consulting Group said AI could reduce greenhouse-gas emissions by 5-10% by 2030. Meanwhile, the UN announced a deal with Microsoft to use AI to track countries' carbon-reduction promises.
Is the risk worth the reward? “Whether you like it or not,” says Shari Friedman, managing director for climate and sustainability at Eurasia Group, “AI is here to stay, so the job of humans will be to use it for the best purpose possible and maximize clean energy on the back end.”