Rep. Don Beyer goes back to school
Rep. Don Beyer, a 73-year-old car dealership owner-turned-politician, is not your typical grad student. A Democrat who served as Virginia’s lieutenant governor in the 1990s and an ambassador during the Obama administration before getting elected to Congress in 2015, Beyer decided to go back to school in 2022 to pursue a master’s degree in machine learning at George Mason University.
Since then, Beyer has served as vice chair of the Congressional Artificial Intelligence Caucus and introduced a bill to provide transparency into the development of so-called foundation models.
GZERO spoke with Beyer about his studies, his concerns and hopes for the technology, and whether the US will catch up to Europe in regulating AI.
GZERO: Was there a specific moment when you realized that you were unprepared for the challenge of artificial intelligence and wanted to learn more? Why did you feel you needed to take the step of actually enrolling in a master’s program to get the education you needed?
Beyer: I was interested in AI long before I knew what it was that I was interested in, and this goes back a long time, to the early 1980s. I had read and heard several compelling discussions of the topic and got interested in pattern recognition and using technology and deep learning to make sense of big data sets. Going back to school arose first from opportunity, having a good school nearby that offered the coursework to finally tackle something that had interested me for a long time. I wasn’t sure it would work, but I have no regrets at all. And then part way through my course of study, it suddenly became a much bigger topic for the country and the Congress.
How have your professors and classmates reacted to having a sitting congressman in class?
Many of my classmates are unaware, which is just fine with me. Those who know have been tolerant and kind. I am just another student.
What are you learning in your classes?
Mostly math and coding, so far.
Do you feel more prepared to legislate around AI because of this education?
Yes, much more so. Even though I’m not a fully trained computer scientist, I at least have more than a generalist’s understanding of neural networks, large databases, the predictive and generative uses of computer science, and so on.
What are you most concerned about with the rise of artificial intelligence? What are you most excited about?
The big concerns in the short run for me are deepfakes, misinformation, and economic disruptions from job displacement. But there are very exciting prospects in areas like health care, scientific research, management and workflow, productivity, and much more.
Europe just passed the AI Act. Are you optimistic that Congress can pass comprehensive AI regulations anytime soon?
Congress is more likely to take an incremental than a comprehensive approach, at least in the near term, to solve specific problems rather than attempting a large overarching regulation like what the EU did. But we are working on legislation right now with every intention to pass laws.
Anything else you want to leave us with?
Most people associate Congress with chaos, dysfunction, and partisanship, but those of us working on AI have a refreshingly cooperative and collaborative spirit. This is important to get right. Few things have greater potential to change all our lives and the lives of future generations.
Hard Numbers: Google’s spending spree, Going corporate, Let’s see a movie, Court-ordered AI ban, Energy demands
100 billion: AI is a priority for many of Silicon Valley’s top companies — and it’s a costly one. Google DeepMind chief Demis Hassabis said that the tech giant plans to spend more than $100 billion developing artificial intelligence. That’s the same amount that rival Microsoft is expected to spend in building an AI-powered supercomputer, nicknamed Stargate.
72.5: The free market is dominating the AI game: Of the foundation models released between 2019 and 2023, 72.5% originated from private industry, according to a new Stanford report. Companies released 108 models, as opposed to 28 from academia, nine from industry-academia collaborations, and four from government. None at all were released through a collaboration between government and industry.
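That headline percentage follows directly from the model counts cited above. A quick arithmetic check (counts taken from the text as stated; the report itself may categorize models differently):

```python
# Foundation model counts, 2019-2023, as cited in the text
industry = 108
academia = 28
collaboration = 9   # industry-academia collaborations
government = 4

total = industry + academia + collaboration + government
share = industry / total * 100
print(f"{total} models, {share:.1f}% from industry")  # 149 models, 72.5% from industry
```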
5: The A24 film Civil War has garnered considerable controversy for its content, but its promotion is under scrutiny as well. Five posters for the film were created using artificial intelligence and depict scenes that never occur in the narrative. That’s kicked off a debate about the ethics of using AI in film marketing as well as questions of whether this is false advertising for the movie itself.
1,000: A sex offender in the UK who was found to have created 1,000 indecent images of children was banned from using any “AI creating tools” for five years by a British court. It’s not clear whether he actually used AI to create the illegal images in question, or whether the order is preemptive, but it could serve as a model for punishment in future UK cases. Meanwhile, on April 23, a group of AI companies including Google, Meta, and OpenAI pledged to better prevent their tools from creating sexualized images of children and other exploitative material.
4.5: Salesforce is calling on AI companies to disclose the energy efficiency and carbon footprint of their models, and it is asking legislators to pass new laws demanding transparency and reducing the total energy consumption of AI. Salesforce’s best estimates put global data centers at 1.5% of worldwide power demand, but it warns that the figure could rise to 4.5% in the coming years absent intervention.
WHO can succeed at AI?
The World Health Organization recently released Smart AI Resource Assistant for Health — or SARAH — an AI chatbot that can answer basic health questions in eight languages. The organization says it’s a tool to fight misinformation about mental health, cancer, and COVID, among other things.
The WHO bills SARAH, which appears as a female avatar with a voice and facial expressions, as a digital health “promoter” — not a provider. Though SARAH hasn’t taken the Hippocratic Oath, it’s meant to fill the gaps for people with health questions who lack access to proper health care providers. (They’ll still need a broadband connection.) You can speak into a microphone and SARAH will respond, or you can type your questions to similar effect.
But SARAH still struggles with plenty of basic queries, according to independent researchers who spoke to Bloomberg.
SARAH is built on GPT-3.5, the model that powers the free version of OpenAI’s ChatGPT, not the newer premium model, GPT-4. Bloomberg found that SARAH repeatedly hallucinated, giving false and outdated medical information about drugs, medical advisories, and the WHO’s own data. It incorrectly said that an Alzheimer’s drug was not approved, couldn’t provide details on where to get a mammogram, and couldn’t even recount the WHO’s findings about hepatitis cases worldwide.
When GZERO tested SARAH, it didn’t make any noticeable mistakes, but it basically refused to answer any questions, including a query about whether COVID is still dangerous. It responded, “I’m here to encourage you to live a healthy lifestyle, so I can't respond to that. Is there anything else health-related you'd like to discuss or any other questions I can help answer for you today?”
So maybe don’t cancel that appointment with your doctor just yet.
The military jet that acts alone
The US Air Force and the Defense Advanced Research Projects Agency, aka DARPA, have been tinkering with the latest aerial weapons. On April 17, DARPA confirmed that in military exercises with the Air Force last year, an AI-controlled jet was pitted against a human pilot in an in-air dogfight simulation.
The Air Force installed its autonomous pilot system in a modified F-16, relabeled the X-62A, back in 2021. Humans were aboard the autonomous aircraft during the dogfight experiment, with the ability to take control if necessary. The military didn’t specify whether the autonomous X-62A or its human-piloted opponent, an F-16 jet, “won” the duel, which took place in September 2023, though it did say the test was a success.
“The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now,” Air Force Secretary Frank Kendall wrote in a statement. “This is a transformational moment.”
As we’ve written previously, militaries around the world are gearing up for autonomous warfare, with weapons systems able to identify and take out specific targets. The United Nations has meanwhile called the use of autonomous weapons on human targets a “moral line that we must not cross,” a signal that there will be a drumbeat of public criticism as the US and other militaries expand and deploy their AI-powered weapons.
Biden, Microsoft, and the United Arab Emirates
Microsoft has quickly become the most important investor in artificial intelligence technology, holding a $13 billion stake in ChatGPT-maker OpenAI. It’s a peculiar deal with a revenue-sharing agreement that’s raised eyebrows from global regulators. But its latest billion-dollar investment is perhaps even more of an eyebrow-raiser.
The US tech giant announced last week that it would invest $1.5 billion in G42, a leading artificial intelligence holding company based in Abu Dhabi. The deal was “largely orchestrated” by the Biden administration, according to the New York Times, an effort to beat back China and gain influence in the Persian Gulf.
“There’s no question the investment was made to try and box out Chinese investment” in artificial intelligence in the Middle East, said Alexis Serfaty, a director in Eurasia Group’s geo-technology practice.
Under the terms of the new deal, Microsoft will let G42 sell its generative AI services and, in exchange, G42 will use Microsoft’s Azure cloud services. G42 also agreed to stricter assurances with the US government to further cut ties with China and remove Chinese products and technology from its operations.
It’s not every day that the White House plays corporate dealmaker, but the administration hasn’t been shy about making AI — and the chips needed to power it — an economic and national security priority. Serfaty said the closest parallel he could think of was the proposed Trump administration deal to hand a stake in TikTok to the US software and cloud giant Oracle. (TikTok’s Chinese parent company ByteDance never sold a stake in its social media app to Oracle, but it did strike a deal to host its US user data on Oracle servers.) Plus, the US has recently given massive grants and favorable loans to global chip manufacturers — like TSMC and Samsung — for moving production to the US.
The Biden administration has imposed strict export controls on US-made chips going to China, especially powerful ones used to run artificial intelligence models. The goal: cut off China and hamper its ability to build powerful AI. Tech investments in the Persian Gulf have been something of a casualty of this Cold War over AI. G42 announced in December 2023 that it would cut ties with China in order to keep working with US industry.
“For better or worse, as a commercial company, we are in a position where we have to make a choice,” G42 CEO Peng Xiao told the Financial Times. “We cannot work with both sides. We can’t.”
Serfaty said that the deal signals that the US government is going to increasingly treat artificial intelligence like defense technology, and play a more hands-on role in its commercial affairs and investment.
“When it comes to emerging technology, you cannot be both in China’s camp and our camp,” Commerce Secretary Gina Raimondo told the Times.
AI policy formation must include voices from the global South
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs—China, the US, and Europe—would overlook crucial opportunities to address risks and concerns unique to the global South.
This is GZERO AI from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the big leading AI policy blocs like the EU, the UK, the US and Japan that all have AI offices for oversight.
But what I really want to focus on is the role of people in the Global South, and how they're underrepresented in discussions about both what AI means in their local context and how they participate in debates around policy, if they do at all. Because right now, our focus is way too much on the three big policy blocs: China, the US, and Europe.
Also because, of course, a lot of industry is here around the corner in Silicon Valley. But I've learned so much from listening to people who focus on the African continent, where there are no fewer than 2,000 languages. There are many questions about what AI will mean for those languages and for access, beyond just the exploitative and extractive model through which large language models are trained with cheap labor from people in developing countries, but also about how harms can be so different.
For example, disinformation there tends to spread via WhatsApp rather than public social media platforms, and voice plays an outsized role: synthetic voice generated by AI is one of the most effective ways to spread disinformation. That's not as prominently recognized here, where the focus is on text content and deepfake videos but not so much on audio. And then, of course, we talked about elections, because a record number of people are voting this year and disinformation around elections tends to pick up.
And AI is really a wild card in that. So my takeaway is that we just need to have many more conversations, not so much about AI in the Global South and tech policy there, but listening to people who are living in those communities, researching the impact of AI in the Global South, or pushing for fair treatment when their governments use the latest technologies for repression, for example.
So lots of fruitful thought. And, I was very grateful that people made it all the way over here to share their perspectives with us.
The UK is plotting to regulate AI
Policy officials in the Department for Science, Innovation and Technology have begun drafting legislation to rein in the most potent dangers from AI, sources told Bloomberg News this week. While Europe has set the standard by passing its comprehensive AI Act, Prime Minister Rishi Sunak has pledged to take a more hands-off approach to the technology. It’s unclear how far the forthcoming bill, which is still in its early stages, will go in setting up safeguards. Separately, the Department for Culture, Media and Sport has also proposed amending the country’s copyright law to allow content owners to “opt out” of having their work scraped by generative AI firms.
Everyone has a chief AI officer now
It seems like every company worth its salt has a Chief AI Officer, aka CAIO, these days – or, at least, every company that wants to buy into the hype.
The Financial Times reports that these positions aren’t just for tech-related companies but for any firm looking to integrate artificial intelligence into its work. And over the past five years, the number of companies with a top AI official has tripled, according to LinkedIn’s best count. Some CAIOs act as AI evangelists, while others are more integrated with the ethics, risk, compliance, and legal practices of their businesses.
The corporate model is salient, too, because the Biden administration recently announced that just about every federal agency and department needs to appoint a CAIO charged with implementing the new standards set by the White House Office of Management and Budget. That’s right, there will soon be AI czars across private industry and the entire US government.
While many employees are naturally concerned about AI taking their jobs, many are also starting to use ChatGPT and other applications at work (sometimes covertly). In other words, everyone’s still tinkering — employees, managers, executives, and government officials.