Online violence means real-world danger for women in politics like EU's Lucia Nicholsonová
Content Warning: This clip contains sensitive language.
In a compelling dialogue from a GZERO Global Stage discussion on gender equality in the age of AI, Lucia Nicholsonová, former Slovak National Assembly vice president and current member of the European Parliament for Slovakia, recounts her harrowing personal experiences with disinformation campaigns and gendered hate speech online.
Ms. Nicholsonová read example messages she receives online, such as, "Damn you and your whole family. I wish you all die of cancer."
She has also faced false accusations of past criminal activity spread through deliberate online misinformation campaigns, which she says led to public humiliation and threats; strangers even spat on her in the street. These attacks were fueled by misogyny and prejudice and took a toll on her mental well-being and family life.
As Ms. Nicholsonová recalls, “It was a real trauma because I mean, at some point I wasn't able to go out of my home because I felt so threatened.”
The conversation was presented by GZERO in partnership with Microsoft and the UN Foundation. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: What impact will AI have on gender equality?
- Atwood and Musk agree on Online Harms Act ›
- Facebook allows "lies laced with anger and hate" to spread faster than facts, says journalist Maria Ressa ›
- AI and Canada's proposed Online Harms Act ›
- What impact will AI have on gender equality? - GZERO Media ›
The online abuse crisis threatens the mental health of young women worldwide
In a GZERO Global Stage discussion from the sidelines of the United Nations' 68th Session of the Commission on the Status of Women, the pervasive issue of online abuse and harassment faced by young women was in the spotlight.
Michelle Milford Morse, the UN Foundation's Vice President for Girls and Women Strategy, points out that “more than half of young women are experiencing some form of abuse and harassment online, sometimes as young as eight,” underscoring the urgent need for collective efforts to combat online abuse and create safer digital spaces for everyone, but especially women. She stresses that we must all work toward a future where everyone can thrive free from fear and harassment in both physical and digital environments.
The conversation was presented by GZERO in partnership with Microsoft and the UN Foundation. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: What impact will AI have on gender equality?
What impact will AI have on gender equality?
At the current rate of progress toward gender equality, the World Economic Forum estimates it will take 131 years for women to attain parity in income, status, and leadership.
While technology is a powerful tool to help close the gender gap, it can also be weaponized. GZERO’s special presentation “Gender Equality in the Age of AI” featured candid conversations about the opportunities and threats that exist online, and how artificial intelligence will impact them.
Produced on the sidelines of the 68th United Nations Commission on the Status of Women, the program featured leading experts from government, technology, and philanthropy. Moderator Penny Abeywardena, former NYC Commissioner for International Affairs, was joined by Jac sm Kee, co-founder of Numun Fund; Vickie Robinson, general manager of the Microsoft Airband Initiative; Michelle Milford Morse, the United Nations Foundation’s vice president for Girls and Women Strategy; and Lucia Ďuriš Nicholsonová, a member of the European Parliament from Slovakia.
“The beauty and the promise of digital technologies is the opening up of democratic and civic participation space,” said Jac sm Kee. “But what is happening right now is the direct closing down of these spaces through deliberate attacks.”
The discussion focused on three key areas: gender-based online violence, the need for greater digital inclusion and access, and increasing leadership roles for women in all aspects of public life.
In a recent study from UNESCO, 58% of women and girls surveyed globally said they had experienced online violence, defined as a range of abuses including harassment, stalking, and defamation. Female journalists and politicians experienced these threats in even higher numbers.
During GZERO’s program, European Union parliamentarian Lucia Ďuriš Nicholsonová shared incredibly disturbing messages she has received throughout her years in office, many including violent and profane language and graphic sexual threats.
“These words are real. The people who are writing these words are real,” Nicholsonová said. “We can erase them through algorithms online, but they will still exist. I think we really need to know what is out there because it's a real threat.”
Michelle Milford Morse of the UN Foundation explained to the crowd gathered at the NYC event that these kinds of abuses have compounding impacts on victims. “More than half of young women are experiencing some form of abuse and harassment online, sometimes as young as eight,” she said. “I don't think that we're thinking enough about the accumulation of that over time and the real harm to their mental health.”
But technology, when used for good, is also a powerful tool that can help close the gender gap. Microsoft’s Vickie Robinson described the importance of connectivity and digital skills. Of the estimated 2.6 billion people worldwide who lack internet access, the majority are women and girls.
“It's critically important, now more than ever, we need to make sure that we close the digital divide once and for all, but that we bring along with that the skills, we make it affordable, we make it accessible,” Robinson said.
The conversation then turned to leadership, and the need for more women in positions of authority in all industries and sectors of public life.
“Parliaments and legislators that have more women, they prioritize social services for children and the most vulnerable. When they engage in peace agreements, those peace agreements last longer. They're more likely to protect biodiversity,” said Morse. “There is no argument for half our human family to be shut out of society.”
The program was part of the Global Stage series and produced by GZERO in partnership with Microsoft and the United Nations Foundation. The series features politicians, private sector leaders, and renowned experts in conversation about issues at the intersection of technology, geopolitics and society.
- Ian Explains: How will AI impact the workplace? ›
- Can A.I. Reduce Poverty and Inequality?: AI in 60 Seconds ›
- Want global equality? Get more people online ›
- What We’re Watching: Boosting access, gender equality, and trust in the digital economy ›
- Scared of rogue AI? Keep humans in the loop, says Microsoft's Natasha Crampton ›
AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in
Listen: What does this new era of generative artificial intelligence mean for the future of work? On the GZERO World Podcast, Ian Bremmer sits down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland, to learn more about how this exciting and anxiety-inducing technology is already changing our lives, what comes next, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
The rapid advances in generative AI tools like ChatGPT, which has been public for only a little over a year, are stirring up excitement and deep anxieties about how, and even whether, we work. Artificial intelligence could massively increase productivity and prosperity, but it also raises fears of job replacement and unequal access to technology. Will AI be the productivity booster CEOs hope for, or the job killer employees fear?
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
Why Meta opened up
Last week, Meta CEO Mark Zuckerberg announced his intention to build artificial general intelligence, or AGI — a standard whereby AI will have human-level intelligence in all fields — and said Meta will have 350,000 high-powered NVIDIA graphics chips by the end of the year.
Zuckerberg isn’t alone in his intentions – Meta joins a long list of tech firms trying to build a super-powered AI. But he is alone in saying he wants to make Meta’s AGI open-source. “Our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said. Um, everyone?
Critics already have serious concerns about the still-hypothetical AGI; publishing such technology on the open web is a whole other story. “In the wrong hands, technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it,” University of Southampton professor Wendy Hall, who advises the UN on AI issues, told The Guardian. She added that it is “really very scary” for Zuckerberg to even consider it.
Unpacking Meta’s shift in AI focus
Meta has been developing artificial intelligence for more than a decade. The company first hired the esteemed academic Yann LeCun to helm a research lab originally called FAIR, or Facebook Artificial Intelligence Research, and now called Meta AI. LeCun, a Turing Award-winning computer scientist, splits his time between Meta and his professorial post at New York University.
But even with LeCun behind the wheel, most of Meta’s AI work was meant to supercharge its existing products — namely, its social media platforms, Facebook and Instagram. That included the ranking and recommendation algorithms for the apps’ news feeds, image recognition, and its all-important advertising platform. Meta makes most of its money on ads, after all.
While Meta is a closed ecosystem for users posting content or advertisers buying ad space, it is considerably more open on the technical side. “They're a walled garden for advertisers, but they've always pitched themselves as an open platform when it comes to tech,” said Yoram Wurmser, a principal analyst at Insider Intelligence. “They explicitly like to differentiate themselves in that regard from other tech companies, particularly Apple, which is very guarded about their software platforms.” Differentiation like that can help Meta attract talent from elsewhere in Silicon Valley, but especially from academia, where open-source publishing is the standard — as opposed to proprietary research that might never even see the light of day.
Opening the door
In building its generative AI models early last year, the decision to go open-source, publishing the code of its LLaMA language model for all to use, was born out of FOMO (fear of missing out) and frustration. In early 2023, OpenAI was getting all of the buzz for its groundbreaking chatbot ChatGPT, and Meta — a Silicon Valley stalwart that’s been in the AI game for more than a decade — reportedly felt left behind.
So LeCun proposed going open-source with its large language model (once called Genesis and renamed to the infinitely more catchy LLaMA). Meta’s legal team cautioned that doing so could put Meta further in the crosshairs of regulators, who might be concerned about such a powerful codebase living on the open internet, where bad actors — criminals and foreign adversaries — could leverage it. Feeling the heat and the urgency of the moment for attracting talent, hype, and investor fervor, Zuckerberg sided with LeCun, and Meta released its original LLaMA model in February 2023. Meta then released LLaMA 2 in partnership with OpenAI backer Microsoft in July 2023, and has publicly confirmed it’s working on the next iteration, LLaMA 3.
Pros and cons of being an open book
Meta is one of the few AI-focused firms currently making their models open-source. There’s also the US-based startup Hugging Face, which oversaw the development of a model called BLOOM, and the French firm Mistral AI, which has multiple open-source models. But Meta is the only established Silicon Valley giant pursuing this high-risk route head-on.
The potential reward is clear: Open-source development might help Meta attract top engineers, and its accessibility could make it the default system for tinkerers unwilling or unable to shell out for enterprise versions of OpenAI’s GPT-4. “It also gets a lot of people to do free labor for Meta,” said David Evan Harris, a public scholar at UC Berkeley and a former research manager for responsible AI at Meta. “It gets a lot of people to play with that model, find ways to optimize it, find ways of making it more efficient, find ways of making it better.” Open-source software encourages innovation and can enable smaller companies or independent developers to build out new applications that might’ve been cost-prohibitive otherwise.
But the risk is clear too: When you publish software on the internet, anyone can use it. That means criminals could use open models to perpetuate scams and fraud, and to generate misinformation or non-consensual sexual material. And, of pressing interest to the US, foreign adversaries will have unfettered access too. Harris says that an open-source language model is a “dream tool” for people trying to further sow discord around elections, deceive voters, and instill distrust in reliable democratic systems.
Regulators have already expressed concern: US Sens. Josh Hawley and Richard Blumenthal sent a letter to Meta last summer demanding answers about its language model. “By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta effectively appears to have put a powerful tool in the hands of bad actors to actually engage in such abuse without much discernable forethought, preparation, or safeguards,” they wrote.
The Biden administration directed the Commerce Department in its October AI executive order to investigate the risk of “widely available” models. “When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” the order says.
Open-source purists might say that what Meta is doing is not truly open-source because it comes with usage restrictions: For example, Meta doesn’t allow the model to be used by companies with more than 700 million monthly users without a license, or by anyone who doesn’t disclose “known dangers” to users. But these restrictions are merely warnings without a real method of enforcement, Harris says: “The threat of lawsuit is the enforcement.”
That might deter Meta’s biggest corporate rivals, such as Google or TikTok, from pilfering the company’s code to boost their own work, but it’s unlikely to deter criminals or malicious foreign actors.
Meta is reorienting its ambitions around artificial intelligence. Yes, Meta has bet big on the metaverse, an all-encompassing digital world powered by virtual and augmented reality technology, going so far as to change its official name from Facebook to reflect its ambitions. But the metaverse hype has been largely replaced by AI hype, and Meta doesn’t want to be left behind — certainly not for something it’s been working on for a long time.
How is the world tackling AI, Davos' hottest topic?
It’s the big topic at Davos: What the heck are we going to do about artificial intelligence? Governments just can’t seem to keep up with the pace of this ever-evolving technology—but with dozens of elections scheduled for 2024, the world has no time to lose.
GZERO and Microsoft brought together folks who are giving the subject a great deal of thought for a Global Stage event on the ground in Switzerland, including Microsoft’s Brad Smith, EU Member of Parliament Eva Maydell, the UAE’s AI Minister Omar Sultan al Olama, the UN Secretary’s special technology envoy Amandeep Singh Gill, and GZERO Founder & President Ian Bremmer, moderated by CNN’s Bianna Golodryga.
The opportunities presented by AI could revolutionize healthcare, education, scientific research, engineering — just about every human activity. But the technology also threatens to flood political discourse with disinformation, victimize people through scams or blackmail, and put people out of work. A poll of more than 2,500 GZERO readers found that a 45% plurality wants international cooperation to develop a regulatory framework.
The world made great strides in AI regulation in 2023, perhaps most prominently in the European Union’s AI Act. But implementation and enforcement are a different game, and with every passing month, AI gets more powerful and more difficult to rein in.
So where do these luminaries see the path forward? Tune in to our full discussion from the World Economic Forum in Davos, Switzerland, above.
- Davos 2024: AI is having a moment at the World Economic Forum ›
- Be very scared of AI + social media in politics ›
- The AI power paradox: Rules for AI's power ›
- Davos 2024: China, AI & key topics dominating at the World Economic Forum ›
- Accelerating Sustainability with AI: A Playbook ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake - GZERO Media ›
Man — er, teenager — beats machine
Some 39 years after Tetris was first released, 13-year-old Willis Gibson became the first person to beat the landmark NES version of the game, taking it to a kill screen, where the game stops functioning. It was long assumed that a human couldn’t take Tetris past 290 lines, but Gibson cleared 1,511 lines in 40 minutes, and he caught it all on video.
AI will get stronger in 2024
While its lawyers are suing the world’s most powerful AI firms, reporters at The New York Times are simultaneously trying to make sense of this important emerging technology — namely, how rapidly it’s progressing before our eyes.
On Monday, veteran tech reporter Cade Metz suggested that AI will get stronger in innumerable ways.
“The A.I. industry this year is set to be defined by one main characteristic: a remarkably rapid improvement of the technology as advancements build upon one another, enabling A.I. to generate new kinds of media, mimic human reasoning in new ways and seep into the physical world through a new breed of robot,” Metz writes.
Huh? He’s referring in part to the advent of mass-market AI-generated video. Just as Midjourney and DALL-E brought AI image generators to the masses in 2023, new tools will make it easy to type a prompt and generate whole videos made by AI.
Not only that, but popular chatbots like ChatGPT will become multimodal, meaning they can respond just as seamlessly with images, video, and audio as they do today with text. So perhaps there will be a true one-stop-shop for all your generative AI needs.
Logical reasoning of AI tools could also improve greatly this year, he suggests, allowing them to better function as “agents” to whom humans can delegate tasks and offload responsibilities.
Dust off your sci-fi classics: Smarter AI systems could power smart robots — though they’ll almost certainly invade factories first, rather than trying to become at-home personal butlers.