Hard Numbers: Russia shoots down space resolution, US economy sputters, Nigerian prisoners make slippery escape, Ecuador gets lifeline
13: A UN Security Council resolution reaffirming a long-standing prohibition on arms races in outer space got 13 votes in favor this week, but it was shot down by a single veto from UNSC permanent member Russia. Moscow says it wasn’t necessary to support a resolution that merely reaffirmed a 1967 treaty that Russia is already part of, but the US ambassador to the UN asked, “What could you possibly be hiding?” In recent months, the US has said it believes Russia is developing a new space-based, anti-satellite weapon.
1.6: The US economy expanded by just 1.6% in the first quarter of the year, lagging analyst forecasts by nearly a full percentage point, as consumer spending slowed. Normally that would create momentum for the Fed to cut interest rates to spur growth, but there’s no joy there either: Core inflation (which excludes food and energy) rose 3.7%, higher than economists’ expectations, limiting the scope for any near-term rate cuts.
118: Authorities in the Nigerian capital of Abuja are on high alert after a rainstorm destroyed a fence at a nearby penitentiary, allowing as many as 118 inmates to escape. A prison service spokesperson blamed “colonial era” facilities. Weak security and run-down buildings contribute to frequent prison breaks in the West African nation.
4 billion: After months of talks, Ecuador and the IMF reached a $4 billion loan agreement meant to help stabilize the small Andean country’s finances as it grapples with a vicious cycle of economic hardship, rising poverty, and skyrocketing homicides. Just days earlier, Ecuadorians had voted yes in a referendum to boost the government’s ability to crack down on drug violence.

President Joe Biden on Wednesday signed a law that could see TikTok banned nationwide unless its Chinese parent company, ByteDance, sells the popular app within a year. The law was motivated by national security concerns.
TikTok promptly vowed to challenge the “unconstitutional” law in court, saying it would “silence” millions of Americans – setting the stage for a battle over whether the law violates First Amendment rights.
Expect delays. Eurasia Group’s US Director Clayton Allen is skeptical that such legal challenges will be successful, but they will still likely delay “any action well into 2025, putting the onus – potentially – on a second Trump administration.”
Though Donald Trump moved to ban TikTok while he was in office, the former president is now attacking Biden over the law and calling for “young people” to remember the move on Election Day.
Notably, Biden’s campaign says it plans to continue using TikTok to reach younger voters.
What will China do? China expects delays in the process but is likely to prohibit a sale if it comes to it, according to Eurasia Group, our parent company. Beijing is unlikely to respond with a tit-for-tat approach targeting American companies and will instead focus on building a fortress economy that’s insulated from US containment efforts.
Rep. Don Beyer, a 73-year-old car dealership owner-turned-politician, is not your typical grad student. A Democrat who served as Virginia’s lieutenant governor in the 1990s and an ambassador during the Obama administration before getting elected to Congress in 2015, Beyer decided to go back to school in 2022 to pursue a master’s degree in machine learning at George Mason University.
Since then, Beyer has served as vice chair of the Congressional Artificial Intelligence Caucus and introduced a bill to provide transparency into the development of so-called foundation models.
GZERO spoke with Beyer about his studies, his concerns and hopes for the technology, and whether the US will catch up to Europe in regulating AI.
GZERO: Was there a specific moment when you realized that you were unprepared for the challenge of artificial intelligence and wanted to learn more? Why did you feel you needed to take the step of actually enrolling in a master’s program to get the education you needed?
Beyer: I was interested in AI long before I knew what it was that I was interested in, and this goes back a long time, to the early 1980s. I had read and heard several compelling discussions of the topic and got interested in pattern recognition and using technology and deep learning to make sense of big data sets. Going back to school arose first from opportunity, having a good school nearby that offered the coursework to finally tackle something that had interested me for a long time. I wasn’t sure it would work, but I have no regrets at all. And then part way through my course of study, it suddenly became a much bigger topic for the country and the Congress.
How have your professors and classmates reacted to having a sitting congressman in class?
Many of my classmates are unaware, which is just fine with me. Those who know have been tolerant and kind. I am just another student.
What are you learning in your classes?
Mostly math and coding, so far.
Do you feel more prepared to legislate around AI because of this education?
Yes, much more so. Even though I’m not a fully trained computer scientist, I at least have more than a generalist’s understanding of neural networks, large databases, the predictive and generative uses of computer science, and so on.
What are you most concerned about with the rise of artificial intelligence? What are you most excited about?
The big concerns in the short run for me are deepfakes, misinformation, and economic disruptions from job displacement. But there are very exciting prospects in areas like health care, scientific research, management and workflow, productivity, and much more.
Europe just passed the AI Act. Are you optimistic that Congress can pass comprehensive AI regulations anytime soon?
Congress is more likely to take an incremental than a comprehensive approach, at least in the near term, to solve specific problems rather than attempting a large overarching regulation like what the EU did. But we are working on legislation right now with every intention to pass laws.
Anything else you want to leave us with?
Most people associate Congress with chaos, dysfunction, and partisanship, but those of us working on AI have a refreshingly cooperative and collaborative spirit. This is important to get right. Few things have greater potential to change all our lives and the lives of future generations.
Hard Numbers: Google’s spending spree, Going corporate, Let’s see a movie, Court-ordered AI ban, Energy demands
100 billion: AI is a priority for many of Silicon Valley’s top companies — and it’s a costly one. Google DeepMind chief Demis Hassabis said that the tech giant plans to spend more than $100 billion developing artificial intelligence. That’s the same amount that rival Microsoft is expected to spend in building an AI-powered supercomputer, nicknamed Stargate.
72.5: The free market is dominating the AI game: Of the foundation models released between 2019 and 2023, 72.5% originated from private industry, according to a new Stanford report. 108 models were released by companies, as opposed to 28 from academia, nine from an industry-academia collaboration, and four from government. None at all were released through a collaboration between government and industry.
5: The A24 film Civil War has drawn considerable controversy for its content, but its promotion is under scrutiny as well. Five posters for the film were created using artificial intelligence and depict scenes that never occur in the narrative. That’s kicked off a debate about the ethics of using AI in film marketing, as well as questions about whether this is false advertising for the movie itself.
1,000: A sex offender in the UK who was found to have created 1,000 indecent images of children was banned from using any “AI creating tools” for five years by a British court. It’s not clear whether he actually used AI to create the illegal images in question or whether the order is pre-emptive, but it could serve as a model for punishment in future UK cases. Meanwhile, on April 23, a group of AI companies including Google, Meta, and OpenAI pledged to better prevent their tools from creating sexualized images of children and other exploitative material.
4.5: Salesforce is calling on AI companies to disclose the energy efficiency and carbon footprint of their models, and it is asking legislators to pass new laws aimed at demanding transparency and reducing the total energy consumption of AI. Salesforce’s best estimates put global data centers at 1.5% of worldwide power generation demand, but it warns that the figure could rise to 4.5% in the coming years absent intervention.

If you use any Meta product — Facebook, Instagram, WhatsApp, or Messenger — buckle up for an onslaught of AI. The social media giant is rolling out AI-powered assistants across its apps in unavoidable ways.
Meta’s AI, quite simply, will be everywhere: in your searches, in conversations with friends, and chiming in on conversations in Facebook groups. It’s powered by the company’s LLaMA 3 model and is meant to help you answer questions or complete tasks — whatever you want, really. When GZERO searched for Thai food on Instagram, the search instantly opened a conversation with the Meta AI chatbot. (It gave five good options nearby.)
Meta has taken an open-source approach to developing artificial intelligence, releasing its powerful model for the world to use. That’s different from rivals like OpenAI, which charge consumers and companies to use their closed-source tech.
Now, it’s putting its models to use in a bid to ensure you spend as much time on its platforms as possible. Meta’s bread and butter, as an advertising giant, is attention. If you don’t need to leave Instagram to Google something, or write something with ChatGPT, that’ll quickly mean more money for Meta.
If users aren’t so horribly annoyed or creeped out that they disengage completely, that is. 404 Media reported that Meta’s AI told a parents group on Facebook that it had a disabled-yet-gifted child before the company received complaints and removed the comments. And for people who want to opt out entirely, it doesn’t help that there’s currently no real way to turn the AI off.

The World Health Organization recently released the Smart AI Resource Assistant for Health — or SARAH — an AI chatbot that can answer basic health questions in eight different languages. The organization says she’s a tool to fight misinformation about mental health, cancer, and COVID, among other things.
The WHO bills SARAH, which appears as a female avatar with a voice and facial expressions, as a digital health “promoter” — not a provider — and, though SARAH hasn’t taken the Hippocratic Oath, it’s meant to fill the gaps for people seeking answers to health questions without access to proper health care providers. (They’ll still need a broadband connection.) You can speak to SARAH through a microphone and she will respond, or you can type your questions to a similar effect.
But SARAH still struggles with plenty of basic queries, according to independent researchers who spoke to Bloomberg.
SARAH is trained on GPT-3.5, the model that powers OpenAI’s free version of ChatGPT, not the updated premium version (that’s GPT-4). Bloomberg found that SARAH repeatedly hallucinated — giving false or outdated medical information about drugs, medical advisories, and the WHO’s own data. It incorrectly said that an Alzheimer’s drug was not approved, couldn’t provide details on where to get a mammogram, and couldn’t even recount the WHO’s findings about hepatitis cases worldwide.
When GZERO tested SARAH, it didn’t make any noticeable mistakes, but it basically refused to answer any questions, including a query about whether COVID is still dangerous. It responded, “I’m here to encourage you to live a healthy lifestyle, so I can't respond to that. Is there anything else health-related you'd like to discuss or any other questions I can help answer for you today?”
So maybe don’t cancel that appointment with your doctor just yet.
The US Air Force and the Defense Advanced Research Projects Agency, aka DARPA, have been tinkering with the latest aerial weapons. On April 17, DARPA confirmed that in military exercises with the Air Force last year, an AI-controlled jet was pitted against a human pilot in an in-air dogfight simulation.
The Air Force installed its autonomous pilot system in a modified F-16, relabeled the X-62A, back in 2021. Humans were aboard the autonomous aircraft during the dogfight experiment, with the ability to take control if necessary. The military didn’t specify whether the autonomous X-62A or the human-piloted opponent, an F-16 jet, “won” the duel, which took place in September 2023, though it did say the test was a success.
“The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now,” Air Force Secretary Frank Kendall wrote in a statement. “This is a transformational moment.”
As we’ve written previously, militaries around the world are gearing up for autonomous warfare, with weapons systems able to identify and take out specific targets. The United Nations has meanwhile called the use of autonomous weapons on human targets a “moral line that we must not cross,” a signal that there will be a drumbeat of public criticism as the US and other militaries expand and deploy their AI-powered weapons.