What impact will AI have on gender equality?
At the current rate of progress toward gender equality, the World Economic Forum estimates it will take 131 years for women to attain parity in income, status, and leadership.
While technology is a powerful tool to help close the gender gap, it can also be weaponized. GZERO’s special presentation “Gender Equality in the Age of AI” featured candid conversations about the opportunities and threats that exist online, and how artificial intelligence will impact them.
Produced on the sidelines of the 68th United Nations Commission on the Status of Women, the program featured leading experts from government, technology, and philanthropy. Moderator Penny Abeywardena, former NYC Commissioner for International Affairs, was joined by Jac sm Kee, co-founder of Numun Fund; Vickie Robinson, general manager of the Microsoft Airband Initiative; Michelle Milford Morse, the United Nations Foundation’s vice president for Girls and Women Strategy; and Lucia Ďuriš Nicholsonová, a member of the European Parliament from Slovakia.
“The beauty and the promise of digital technologies is the opening up of democratic and civic participation space,” said Jac sm Kee. “But what is happening right now is the direct closing down of these spaces through deliberate attacks.”
The discussion focused on three key areas: gender-based online violence, the need for greater digital inclusion and access, and increasing leadership roles for women in all aspects of public life.
In a recent study from UNESCO, 58% of women and girls surveyed globally said they had experienced online violence, defined as a range of abuses including harassment, stalking, and defamation. Female journalists and politicians experienced these threats in even higher numbers.
During GZERO’s program, European Union parliamentarian Lucia Ďuriš Nicholsonová shared incredibly disturbing messages she has received throughout her years in office, many including violent and profane language and graphic sexual threats.
“These words are real. The people who are writing these words are real,” Nicholsonová said. “We can erase them through algorithms online, but they will still exist. I think we really need to know what is out there because it's a real threat.”
Michelle Milford Morse of the UN Foundation explained to the crowd gathered at the NYC event that these kinds of abuses have compounding impacts on victims. “More than half of young women are experiencing some form of abuse and harassment online, sometimes as young as eight,” she said. “I don't think that we're thinking enough about the accumulation of that over time and the real harm to their mental health.”
But technology, when used for good, is also a powerful tool that can help close the gender gap. Microsoft’s Vickie Robinson described the importance of connectivity and digital skills. Of the estimated 2.6 billion people worldwide who lack internet access, the majority are women and girls.
“It's critically important, now more than ever, we need to make sure that we close the digital divide once and for all, but that we bring along with that the skills, we make it affordable, we make it accessible,” Robinson said.
The conversation then turned to leadership, and the need for more women in positions of authority in all industries and sectors of public life.
“Parliaments and legislators that have more women, they prioritize social services for children and the most vulnerable. When they engage in peace agreements, those peace agreements last longer. They're more likely to protect biodiversity,” said Morse. “There is no argument for half our human family to be shut out of society.”
The program was part of the Global Stage series and produced by GZERO in partnership with Microsoft and the United Nations Foundation. The series features politicians, private sector leaders, and renowned experts in conversation about issues at the intersection of technology, geopolitics and society.
- Ian Explains: How will AI impact the workplace? ›
- Can A.I. Reduce Poverty and Inequality?: AI in 60 Seconds ›
- Want global equality? Get more people online ›
- What We’re Watching: Boosting access, gender equality, and trust in the digital economy ›
- Scared of rogue AI? Keep humans in the loop, says Microsoft's Natasha Crampton ›
Yuval Noah Harari on protecting the right to be stupid
Bestselling author and historian Yuval Noah Harari makes the case for mental self-care in an age where our minds are bombarded with an unprecedented influx of information. In a wide-ranging interview with Ian Bremmer, filmed before a live audience at the 92nd Street Y in New York City, Harari stresses the importance of a healthy “information diet.”
"Our minds were shaped back in the Stone Age," Harari says. Smartphones and social media, designed by today’s smartest minds, are engineered to hack our brains and manipulate our emotions. Harari warns, "Anybody who thinks they are strong enough to resist it is just fooling themselves."
As a public intellectual, Harari is acutely aware of the weight of his words. "We need to build a wall between the mind and the mouth," he tells Bremmer. "I also think that part of preserving privacy is to preserve the right for stupidity."
Watch full episode: Yuval Noah Harari explains why the world isn't fair (but could be)
Catch GZERO World with Ian Bremmer every week online and on US public television. Check local listings.
- Podcast: Tracking the rapid rise of human-enhancing biotech with Siddhartha Mukherjee ›
- Why is America punching below its weight on happiness? ›
- Is life better than ever for the human race? ›
- Podcast: The case for global optimism with Steven Pinker ›
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
Hard Numbers: Understanding the universe, Opening up OpenAI, Bioweapon warning, Independent review, AI media billions
100 million: AI is helping researchers better map outer space. One recent simulation led by a University College London researcher was able to show 100 million galaxies across a quarter of the southern hemisphere sky. This is part of a wider effort to understand dark energy, the mysterious force causing the expansion of the universe.
30,000: The law firm WilmerHale, which completed its investigation of Sam Altman’s brief December ouster from OpenAI, examined 30,000 documents as part of its review. The contents of the report haven’t been made public, but new board chairman Bret Taylor said that the review found the prior board acted in good faith but didn’t anticipate the reaction to removing Altman, who is now rejoining the board. The SEC, meanwhile, is still investigating whether OpenAI deceived investors, but it’s unclear whether WilmerHale will share its findings with the agency.
90: More than 90 scientists have pledged not to use AI to develop bioweapons as part of an agreement forged partly in response to congressional remarks given by Anthropic CEO Dario Amodei last year. Amodei said that while the current generation of AI technology couldn’t handle such a task, that capability is only two or three years away.
100: More than 100 AI researchers have signed an open letter asking the leading companies to allow independent investigators access to their models to ensure that risk assessment is thorough. “Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter said.
8 billion: The media company Thomson Reuters says it has an $8 billion “war chest” to spend on AI-related acquisitions. In addition to publishing the Reuters newswire, the company sells access to services like Westlaw, a popular legal research platform. It’s also committed to spending at least $100 million developing in-house AI technology to integrate into its news and data offerings.
Live premiere TODAY at 12 pm ET: Gender Equality in the Age of AI
WATCH OUR LIVE PREMIERE TODAY at 12 pm ET: In the age of generative AI, how can technology become a tool to create more opportunities for women and girls worldwide? What will it take to train more women to use AI to reach their economic potential, and ultimately greater equality? As AI innovation advances at a staggering pace, the widening equality gap threatens to erode progress. There is also a dark side — the alarming number of women who experience online violence.
In our next Global Stage discussion, presented by GZERO in partnership with Microsoft and the United Nations Foundation, our expert panel will discuss inclusive ways to make our digital lives safer and more productive. Today, Monday, March 18 at 12 pm ET, watch the live premiere of "Gender Equality in the Age of AI," taking place on the sidelines of UN Women’s 68th Commission on the Status of Women, a gathering of leaders from UN member states and NGOs focused on progress and equality.
Penny Abeywardena, Social Justice Advocate and Former NYC Commissioner for International Affairs, moderates the conversation with:
- Jac sm Kee, Co-Founder, Numun Fund
- Michelle Milford Morse, UN Foundation’s Vice President for Girls and Women Strategy
- Lucia Ďuriš Nicholsonová, Member of European Parliament, Slovakia
- Vickie Robinson, General Manager, Airband Initiative, Tech for Fundamental Rights, Microsoft
Monday, March 18, 2024 | 12 pm ET | 9 am PT
Watch at gzeromedia.com/globalstage
RSVP on LinkedIn Live, Facebook Live, or YouTube
More about Global Stage:
Global Stage: Global issues at the intersection of technology, politics, and society
Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity
In a wide-ranging conversation with Ian Bremmer, filmed live at the historic 92nd Street Y in NYC, bestselling author Yuval Noah Harari delves deep into the profound shifts AI is creating in geopolitical power dynamics, narrative control, and the future of humanity.
Highlighting AI's unparalleled capacity to make autonomous decisions and generate original content, Harari underscores the rapid pace at which humans are ceding control over both power and stories to machines. "AI is the first technology in history that can take power away from us,” Harari tells Bremmer.
The discussion also touches on AI's impact on democracy and personal relationships, with Harari emphasizing AI's infiltration into our conversations and its burgeoning ability to simulate intimacy. This, he warns, could "destroy trust between people and destroy the ability to have a conversation," thereby unraveling the fabric of democracy itself. Harari chillingly refers to this potential outcome as "a social weapon of mass destruction." And it’s scaring dictators as much as democratic leaders. “Dictators,” Harari reminds us, “they have problems too.”
Harari's insights into AI's impact on democracy, intimacy, and social cohesion offer a stark vision of the challenges and transformations lying ahead. "The most sophisticated information technology in history, and people can no longer talk with each other?"
Watch full episode: Yuval Noah Harari explains why the world isn't fair (but could be)
Catch GZERO World with Ian Bremmer every week online and on US public television. Check local listings.
- Everybody wants to regulate AI ›
- AI regulation means adapting old laws for new tech: Marietje Schaake ›
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- Steven Pinker shares his "relentless optimism" about human progress ›
- From CRISPR to cloning: The science of new humans ›
AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled its long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is it puts the onus on social media companies to minimize the risk of their products. But in so doing, this bill actually provides a window into how we might start regulating AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes, we need to regulate the social media that distributes them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.
Instead of broad sweeping legislation for AI, we might want to start with regulating the older technologies, like social media platforms that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
- When AI makes mistakes, who can be held responsible? ›
- Taylor Swift AI images & the rise of the deepfakes problem ›
- Ian Bremmer: On AI regulation, governments must step up to protect our social fabric ›
- AI regulation means adapting old laws for new tech: Marietje Schaake ›
- EU AI regulation efforts hit a snag ›
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
Hard Numbers: OnlyAI, Raw deal for media companies, AGI approaches, Less work and more money
10: OnlyFans CEO Amrapali Gan said in an interview that verified creators on the platform need to provide 10 different pieces of personal information in the US — nine everywhere else — including government ID, which she claims will help prevent the site from being overrun by AI porn bots. She admitted that sex workers may use AI tools on the platform but emphasized that their work can't be “wholly AI.”
2,500: Media outlets Raw Story, Alternet, and The Intercept sued OpenAI last week for copyright infringement, following the leads of the New York Times and others. The companies are seeking $2,500 per violation — that would add up quickly — in addition to the removal of the violating material. “Big Tech has decimated journalism,” Raw Story founder John Byrne said. “It’s time that publishers take a stand.”
5: AI-focused chip maker Nvidia’s CEO, Jensen Huang, says we’re just five years away from artificial general intelligence, where AI systems can outperform humans in most cognitive tests.
90: JPMorgan Chase claims its new AI-powered cashflow management tool was able to help clients cut back on manual labor by 90% and made it easier to “analyze and forecast cashflows.” The tool is currently free, though the company is considering charging in the future.