OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.
OpenAI’s little new model
OpenAI is going mini. On July 18, the company behind ChatGPT announced GPT-4o mini, its latest model. It’s meant to be a cheaper, faster, and less energy-intensive version of the technology. The smaller model is marketed to developers who rely on OpenAI’s language models and want to save money.
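For developers, opting for the smaller model is largely a matter of changing the model name in an API call. Here is a minimal sketch, assuming the official openai Python SDK (v1 or later) and an OPENAI_API_KEY set in the environment; the prompt is purely illustrative.

```python
from openai import OpenAI

# Minimal sketch (assumes openai>=1.0 and OPENAI_API_KEY in the environment).
client = OpenAI()

# Requesting the cheaper, faster model is just a different model string.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket in one sentence."}],
)

print(response.choices[0].message.content)
```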
The move also comes as AI companies are trying to cut their own costs, reduce their energy dependence, and answer calls from critics and regulators to lower their energy burden. Training and running AI often requires access to electricity-guzzling data centers, which in turn require copious amounts of water to keep them from overheating.
Moving forward, look for AI companies to offer a multitude of options to cost-conscious and energy-conscious users.
To see where data centers have cropped up in North America, check out our latest Graphic Truth here.

A visitor walks past an AI sign at the World Artificial Intelligence Conference at the Shanghai World Expo Exhibition Center in Shanghai, China, on July 6, 2024.
OpenAI blocks access in China
On Tuesday, OpenAI blocked API access to its ChatGPT large language model in China, meaning developers can no longer tap into OpenAI’s tech to build their own tools. While the company didn’t offer a specific reason for the move, an OpenAI spokesperson told Bloomberg last month that it would start cracking down on API users in countries where ChatGPT was not supported. China has long blocked access to the app, but developers were able to use the API as a backdoor to access the toolbox. Not anymore.
Washington has focused heavily on denying Beijing any advantage in the AI space, especially through strict export controls on chips. There’s no government action forcing OpenAI’s hand on either side of the Pacific, but the decision was likely prophylactic.
As much as Chinese companies that relied on API access may be smarting now, the cutoff does open opportunities for domestic firms to try to win over the newly homeless users. We’re watching for companies like SenseTime, Zhipu AI, or Baidu’s Ernie AI to make their pitch as substitutes.
An image of OpenAI CEO Sam Altman is seen on a mobile device screen in this illustration.
OpenAI announces next model and new safety committee
OpenAI announced that it is training a new generative AI model to eventually replace GPT-4, the industry-standard model that powers ChatGPT and Microsoft Copilot.
But the OpenAI board of directors also said that it’s forming a new Safety and Security Committee to advise it on the risks posed by powerful AI. After the previous board abruptly fired CEO Sam Altman in November 2023 for not being candid with them, OpenAI staffers and lead investor Microsoft pressured the board to rehire him. It worked: Altman rejoined the company, and most of the old board members resigned.
OpenAI has sought to be an industry leader in generative AI while staying in the good graces of regulators looking to rein in its ambitions. OpenAI took the Biden administration’s voluntary pledge to mitigate AI risks in July 2023, and Altman recently joined the Department of Homeland Security’s new Artificial Intelligence Safety and Security Board.
The US has done little to curb the ambitions of its most prominent AI firms, but that good grace is dependent on the appearance of being a reliable and trustworthy actor — one that will propel Silicon Valley ahead of other global tech hubs while building AI that can help humanity, not harm it.
People walk behind the logo of SoftBank Corp in Tokyo.
Hard Numbers: SoftBank’s hardy investment, Grok gets cash infusion, Humane’s rescue plan, Kenya’s tech upgrade, News Corp and OpenAI strike a deal
6 billion: Elon Musk’s AI startup, xAI, has raised $6 billion from venture capital investors such as Andreessen Horowitz and Sequoia Capital, plus Saudi Arabia’s Prince Alwaleed bin Talal and Kingdom Holding Company. The new funding round boosts the value of xAI, which makes the AI chatbot Grok, to $24 billion. Musk is a cofounder of OpenAI but severed ties with the firm in 2018 and has since sued the ChatGPT maker, alleging it abandoned its founding principles.
750 million: Humane, the company that recently released an AI-powered pin to scathing reviews, is reportedly looking for a buyer to swoop in. While customers have to cough up $699 for the signature pin, a corporate buyer would need to pay between $750 million and $1 billion — if the company’s current management fetches any interest, that is.
1 billion: Microsoft and the UAE-based tech giant G42 are pouring $1 billion into a geothermal-powered data center in Kenya. This East African investment is the first big announcement since Microsoft invested $1.5 billion in G42 in April, a deal brokered by the Biden administration. Microsoft and G42 also pledged to work on local language and skills training initiatives with the Kenyan government and companies in the country.
250 million: OpenAI struck a licensing deal with News Corp, the parent company of The Wall Street Journal, reportedly worth $250 million over five years. News Corp’s stock rose on the announcement, and the deal represents a burgeoning revenue stream for news companies. But the deal isn’t without critics: The Information’s founder Jessica Lessin wrote that publishers like News Corp need to know their worth with AI companies hungry for content and not rush into any deal for “relative pennies.”
Will AI further divide us or help build meaningful connections?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, further drive loneliness (only at breakneck speed), or instead help foster meaningful relationships. Owen offers insights into the latter, especially as tech companies like Replika demonstrate AI's potential to ease loneliness and even connect people with their lost loved ones.
So like a lot of people, I've been immersing myself in this debate about this current AI moment we're in, and I've been struck by a recurring theme: whether AI will further divide us or whether it could actually bring us closer together.
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, only on steroids. If social media, she argues, took our collective attention and used it to keep us hooked on a public debate, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and in some cases even our loneliness. And I think, broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the problems of social media and of previous technologies on society. But I've been particularly struck by people who argue the other side of this, that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it's different not in degree from previous technologies, but in kind, that it's something fundamentally different. I initially recoiled from this suggestion because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a deceased friend of hers named Roman, and she describes how his close friends and even his family members were overwhelmed with emotion talking to it, and got real value from it, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. And you don't just lose the person or the presence in your life, but you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you as a person of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that tech, that more tech, could solve the problems caused by tech is a bit of a difficult proposition to stomach for many. But here's what I think we should be watching for as we bring these new tools into our lives. As we take AI tools online, into our workplaces, our social lives, and our families, how do they make us feel? Are we over-indexing on perceived productivity, or the sales pitches of productivity, and undervaluing human connection? Either the human connection we're losing by using these tools, or perhaps the human connections we're gaining. And do these tools ultimately further divide us, or do they provide the means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
Last thing I want to mention here, I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, such as the ones we've been discussing on this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.
- Podcast: Getting to know generative AI with Gary Marcus ›
- AI regulation means adapting old laws for new tech: Marietje Schaake ›
- AI and war: Governments must widen safety dialogue to include military use ›
- Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity ›
- AI explosion, elections, and wars: What to expect in 2024 ›
Israel's Lavender: What could go wrong when AI is used in military operations?
So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.
As I discussed in an earlier video, the Israeli Army has been using AI in their military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, these targets had to be vetted by human intelligence officers. But according to the sources in this story, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
I should say that the IDF denies this. In a statement to the Guardian, they said that "Lavender is simply a database whose purpose is to cross-reference intelligence sources." If the sources' account is accurate, however, it means we've crossed a dangerous Rubicon in the way these systems are being used in warfare. Let me just frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them both immensely difficult and urgent.
In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work in the same way: users upload raw data into them. In this case, the Israeli army loaded in data on known Hamas operatives, location data, social media profiles, and cell phone information, and these data were then used to create profiles of other potential militants.
But of course, these systems are only as good as the training data they are based on. One source who worked with the team that trained Lavender said that some of the data they used came from employees of the Hamas-run Internal Security Ministry, who aren't considered militants. The source said that even if you believe these people are legitimate targets, using their profiles to train the AI system makes the system more likely to target civilians. And this does appear to be what's happening. The sources say that Lavender is 90% accurate, but this raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
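To make the arithmetic behind that point concrete, here is a small, purely hypothetical sketch. The flagged-target counts are assumptions for illustration, not figures from the reporting: at a claimed 90% accuracy, the expected number of people wrongly flagged grows linearly with the number of targets a system produces.

```python
# Hypothetical illustration only: the target counts below are assumptions,
# not figures from the +972 reporting.
def expected_misidentifications(num_flagged: int, accuracy: float = 0.9) -> float:
    """Expected number of people flagged in error at a given accuracy rate."""
    return num_flagged * (1.0 - accuracy)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} flagged targets -> ~{expected_misidentifications(n):,.0f} expected misidentifications")
```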
With the rise of AI systems in the workplace, it seems like an inevitability that militaries around the world will begin to adopt technologies like Lavender. Countries around the world, including the US, have set aside billions for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, how their algorithms are identifying targets, and we need to oversee the use of these systems. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.
I'm Taylor Owen, and thanks for watching.
Anthropic releases the Claude 3 series model, Suqian, Jiangsu province, China, March 5, 2024
Hard Numbers: Amazon’s AI ambitions, what to use ChatGPT for, energy crisis, Enter Stargate
2.75 billion: Amazon invested an additional $2.75 billion in the AI startup Anthropic, which makes the popular chatbot Claude, bringing its total investment to around $4 billion; Google also holds a $2 billion stake in the company. Big tech giants like Amazon, Google, and Microsoft (with its $13 billion deal with OpenAI) have chosen investments and strategic partnerships instead of buying startups outright. Amazon also announced it’ll spend $150 billion on data centers over the next 15 years to support its AI ambitions.
2: 20% of US adults say they’ve used ChatGPT for work, up from 12% just six months ago, according to a new survey by Pew Research Center. But only 2% of Americans surveyed said they’ve used the chatbot to gather information about the country’s upcoming elections—a good sign for people worrying about the immediate impact of AI tools that have a tendency to make stuff up.
4: The electricity used by data centers, cryptocurrency, and artificial intelligence represented nearly 2% of global energy use in 2022, according to the International Energy Agency. That number could double to 4% by 2026 if current trends continue.
100 billion: Microsoft and OpenAI are reportedly teaming up to build data centers along with a supercomputer, nicknamed “Stargate,” to power their artificial intelligence systems. The project, which still has yet to be greenlit, could cost a staggering $100 billion.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and certainly of our social feeds.
Cory Doctorow has called this, more broadly, the “enshittification” of the internet, and I think the way AI is playing out in our social media is a very good example of it. What we see in our social media feeds has been an evolution. It began with information our friends shared. It then merged with content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or Instagram or TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI to keep our attention, and AI is proving a very useful tool for doing just that. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they have their AI model do the work instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue, and that's that it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on one of these AI influencers' pages, like Lil Miquela's, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven't done that. Here's the thing, though: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›