Hard Numbers: Hacks galore, Hollywood dreams, US on top, Pokémon Go scan the world
3 million: Evidently, there’s hot demand for AI-related scripts in Hollywood. A thriller about artificial intelligence from a relatively unknown screenwriter named Natan Dotan just sold for $1.25 million, a number that will rise to $3 million if it’s turned into a film. Despite Hollywood’s perennial discomfort with AI infiltrating the film industry, maybe all of the hubbub has got them thinking that audiences will turn out for a good old-fashioned AI thriller.
36: Stanford researchers analyzed the AI capabilities of 36 countries and determined that the US significantly leads in most of the 42 areas studied — including research, private investment, and notable machine learning models. China, which leads in patents and journal citations, came in second, followed by the UK, India, and the UAE.
10 million: Niantic, the developer of the augmented reality game Pokémon Go, announced that it’s building an AI model to make a 3D world map using location data submitted by the game’s users. The company said it already has 10 million scanned locations.
Biden will support a UN cybercrime treaty
The Biden administration is planning to support a controversial United Nations treaty on cybercrime, which would be the first legally binding agreement on cybersecurity.
The treaty would be an international agreement to crack down on child sexual abuse material, or CSAM, and so-called revenge porn. It would also increase information-sharing among parties to the treaty, boosting the flow of evidence that the United States, for one, has on cross-border cybercrime, and making it easier to extradite criminals.
But the treaty has faced severe pushback from advocacy groups and even Democratic lawmakers. On Oct. 29, six Democratic US senators, including Tim Kaine and Ed Markey, wrote a letter to the Biden administration saying they fear the treaty, called the UN Convention Against Cybercrime, could “legitimize efforts by authoritarian countries like Russia and China to censor and surveil internet users, furthering repression and human rights abuses around the world.” They said the treaty is a threat to “privacy, security, freedom of expression, and artificial intelligence safety.”
The senators wrote that the Convention doesn’t include a needed “good-faith exception for security research” or a “requirement for malicious or fraudulent intent for unauthorized access crimes.” This runs afoul of the Biden administration’s executive order on AI, which requires “red-teaming” efforts that could involve hacking or simulating attacks to troubleshoot problems with AI systems. The UN will vote on the Convention later this week, but even if the United States supports it, it would need a two-thirds majority in the US Senate — a difficult mark to achieve — to ratify it.
Chinese telecom hack sparks national security fears
A group of hackers with backing from the Chinese government broke past the security of multiple US telecom firms, including AT&T and Verizon, and potentially accessed data used by law enforcement officials. Specifically, the hackers appear to have targeted information about court-authorized wiretaps, which could be related to multiple ongoing US cases concerning Chinese government agents intimidating and harassing people in the country.
The hack was carried out by a group known as Salt Typhoon, one of many such units the Chinese government uses to infiltrate overseas networks. Microsoft and a Google subsidiary have been helping the FBI investigate the breach; the bureau’s cybersecurity agents are reportedly outnumbered by their Chinese counterparts 50:1.
Will the hack undermine US-China relations? Both sides have been trying to keep tensions under control — largely successfully — all year, but this incident may be too awkward to smooth over. China’s Embassy in Washington, DC, denied the hack and accused the US of “politicizing cybersecurity issues to smear China,” and the FBI and DOJ have not commented. We’re watching how the fallout might affect a notional Biden-Xi phone call the White House has reportedly been attempting to arrange.
Are US elections safe? Chris Krebs is optimistic
The debate around the US banning TikTok is a proxy for a larger question: How safe are democracies from high-tech threats, especially from places like China and Russia?
There are genuine concerns about the integrity of elections. What are the threats out there, and what can be done about them? No one understands this issue better than Chris Krebs, best known as the former director of the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).
In a high-profile showdown, Donald Trump fired Krebs in November 2020, after CISA publicly affirmed that the election was among the “most secure in history” and that the allegations of election corruption were flat-out wrong. Since then, Krebs has become the chief public policy officer at SentinelOne and co-chairs the Aspen Institute’s U.S. Cybersecurity Working Group, and he remains at the forefront of the cyber threat world.
GZERO Publisher Evan Solomon spoke to him this week about what we should expect in this volatile election year.
Solomon: How would you compare the cyber threat landscape now to the election four years ago? Have the rapid advances in AI made a material difference?
Chris Krebs: The general threat environment related to elections tracks against the broader cyber threat environment. The difference here is that beyond just pure technical attacks on election systems, election infrastructure, and on campaigns themselves, we have a parallel threat of information operations and influence operations — what we more broadly call disinformation.
This has picked up almost exponentially since 2016, when the Russians, as detailed in the Intelligence Community Assessment of January 2017, showed that you can get into the middle of domestic elections and pour kerosene on that conversation. That means it jumps into the real world, potentially even culminating in political violence like we saw on Jan. 6.
We saw the Iranians follow their lead in 2020. The intelligence community released another report in December that talked about how the Chinese attempted to influence the 2022 elections. The Russians are active too, through a group we track called Doppelganger, which specifically targets the debate around the border and immigration in the US.
Solomon: When you say Doppelganger is “active,” what exactly does that mean in real terms?
Krebs: They use synthetic personas or take over existing personas that have some element of credibility and jump into the online discourse. They also use pink-slime websites, which are basically fake media outlets; their content gets picked up through social media and moves over to traditional media. They are taking existing divides and amplifying the discontent.
Solomon: Does it have a material impact on, say, election results?
Krebs: I was at an event back in 2019, and a former governor came up to me as we were talking about prepping for the 2020 election and said: “Hey, everything you just talked about sounds like opposition research, typical electioneering, and hijinks.”
And you know what? That's not totally wrong. But there is a difference.
Rather than just being normal domestic politics, now we have a foreign security service that's inserting itself into driving discourse domestically. And that's where the intelligence services here in the US, as well as our allies in the West, have tools to go in and disrupt.
They can get onto foreign networks and say, “Hey, I know that account right there. I am able to determine that the account which is pushing this narrative is controlled by the Russian security services, and we can do something with that.”
But here is the key: Once you have a social media influencer here in the US that picks up that narrative and runs with it, well, now, it's effectively fair game. It's part of the conversation, First Amendment protected.
Solomon: Let's move to the other side. What do you do about it without violating the privacy and free speech civil liberties of citizens?
Krebs: This is really the political question of the day. In fact, just last week there was a Supreme Court hearing on Murthy v. Missouri that gets to this question of government and platforms working together. (Editor’s note: The case hinges on whether the government’s efforts to combat misinformation online around elections and COVID constitute a form of censorship). Based on my read, the Supreme Court was largely being dismissive of Missouri and Louisiana's arguments in that case. But we'll see what happens.
I think the bigger issue is that there is this broader conflict, particularly with China, and it is a hot cyber war. Cyber war from their military doctrine has a technical leg and there's a psychological leg. And as we see it, there are a number of different approaches.
For example, India has outlawed and banned hundreds of Chinese-origin apps, including WeChat and TikTok and a few others. The US has been much more discreet in combating Chinese technology. The recent actions in the US Congress are much more focused on getting the foreign-control piece out of the conversation and requiring divestitures.
Solomon: Chris, what’s the biggest cyber threat to the elections?
Krebs: Based on my conversations with law enforcement and the national security community, the number one request they're getting from election officials isn't on the cyber side. It isn't on the disinformation side. It's about physical threats to election workers. We're talking about doxing, we're talking about swatting, we're talking about people physically intimidating workers at the polls and at election offices. And this is resulting in election officials resigning, quitting, and not showing up.
How do we protect those real American heroes who are making sure that we get to follow through on our civic duty of voting and elections? If those election workers aren't there, it's going to be a lot harder for you and me to get out there and vote.
Solomon: What is your biggest concern about AI technology galloping ahead of regulations?
Krebs: Here in the United States, I'm not too worried about regulation getting in front of AI. When you look at the recent AI executive order out of the Biden administration, it's about transparency and even the threshold they set for compute power and operations is about four times higher than the most advanced publicly available generative AI. And even if you cross that threshold, the most you have to do is tell the government that you're building or training that model and show safety and red teaming results, which hardly seems onerous to me.
The Europeans are taking a different approach, more of a regulate-first, ask-questions-later posture, which I think is going to limit some of their ability to truly be at the bleeding edge of AI.
But I'll tell you this: We are using AI in cybersecurity to much greater effect and impact than the bad guys right now. The best they can do right now is use it for social engineering, for writing better phishing emails, for some research, and for functionality. We are not seeing credible reports of AI being used to write new, innovative malware. But in the meantime, we are giving AI-powered tools to the threat hunters, with really advanced capabilities to go find bad stuff, to improve configurations, and ultimately to take the security operations piece and supercharge it.
NATO’s virtual battlefield misses AI
The world’s most powerful military bloc held cyber defense exercises last week, simulating cyberattacks against power grids and critical infrastructure. NATO rightly insists these exercises are crucial because cyberattacks are standard tools of modern warfare. Russia regularly engages in such attacks, for example, to threaten Ukraine’s power supply, and the US and Israel recently issued a joint warning of Iranian-linked cyberattacks on US-based water systems.
A whopping 120 countries have been hit by cyberattacks in the past year alone — and nearly half of those involved NATO members. Looking forward, the advent of generative AI could make even the simplest cyberattacks more potent. “Cybercriminals and nation states are using AI to refine the language they use in phishing attacks or the imagery in influence operations,” says Microsoft security chief Tom Burt.
Yet, in its latest wargames, NATO's preparations for cyberattacks involving AI were nowhere to be found. The alliance says AI will be added to the training next year.
“The most acute change we will see in the cyber domain will be the use of AI both in attacking but also in defending our networks,” said David van Weel, NATO’s Assistant Secretary General for Emerging Security Challenges. He noted that the bloc will also update its 2021 AI strategy to include generative AI next year.
We can’t help but wonder whether these changes will be too little, too late.
Podcast: Can governments protect us from dangerous software bugs?
Listen: We've probably all felt the slight annoyance at prompts we receive to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a means to protect citizens. But would such regulation have the intended effect?
In season 2, episode 5 of Patching the System, we focus on the international system of bringing peace and security online. In this episode, we look at how software vulnerabilities are discovered and reported, what government regulators can and can't do, and the strength of a coordinated disclosure process, among other solutions.
Our participants are:
- Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro
- Serge Droz from the Forum of Incident Response and Security Teams (FIRST)
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
TRANSCRIPT: Can governments protect us from dangerous software bugs?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
DUSTIN CHILDS: The industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
SERGE DROZ: I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable.
ALI WYNE: If you've ever gotten a notification pop up on your phone or computer saying that an update is urgently needed, you've probably felt that twinge of inconvenience at having to wait for a download or restart your device. But what you might not always think about is that these software updates can also deliver patches to your system, a process that is in fact where this podcast series gets its name.
Today, we'll talk about vulnerabilities that we all face in a world of increasing interconnectedness.
Welcome to Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
And about those vulnerabilities that I mentioned before, we're talking specifically about the vulnerabilities in the wide range of IT products that we use, which can be entry points for malicious actors. And governments around the world are increasingly interested in knowing about these software vulnerabilities when they're discovered.
Since 2021, for example, China has required that anytime such software vulnerabilities are discovered, they first be reported to a government ministry, even before the company that makes the technology is alerted to the issue. In the European Union, less stringent but similar legislation is pending that would require companies that discover that a software vulnerability has been exploited to report the information to government agencies within 24 hours, and also to provide information on any mitigation used to correct the issue.
These policy trends have raised concerns from technology companies and incident responders that such policies could actually undermine security.
Joining us today to delve into these trends and explain why are Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, aka FIRST, a community of IT security teams that respond when there's a major cyber crisis. Dustin, Serge, welcome to you both.
DUSTIN CHILDS: Hello. Thanks for having me.
SERGE DROZ: Hi. Thanks for having me.
ALI WYNE: It's great to be talking with both of you today. Dustin, let me kick off the conversation with you. And I tried in my introductory remarks to give listeners a quick glimpse as to what it is that we're talking about here, but give us some more detail. What exactly do we mean by vulnerabilities in this context and where did they originate?
DUSTIN CHILDS: Well, vulnerability, really when you break it down, it's a flaw in software that could allow a threat actor to potentially compromise a target, and that's a fancy way of saying it's a bug. They originate in humans because humans are imperfect and they make imperfect code, so there's no software in the world that is completely bug free, at least none that we've been able to generate so far. So every product, every program given enough time and resources can be compromised because they all have bugs, they all have vulnerabilities in them. Now, vulnerability doesn't necessarily mean that it can be exploited, but a vulnerability is something within a piece of software that potentially can be exploited by a threat actor, a bad guy.
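(Editor's note: to make Childs' point concrete, here is a minimal, hypothetical sketch in Python of how an ordinary bug becomes a vulnerability. The function, directory, and filenames are invented for illustration and are not from any real product.)

```python
import os

BASE_DIR = "/var/app/public"

def read_user_file(filename: str) -> bytes:
    # BUG: user input is joined to BASE_DIR without validation. For
    # normal input ("report.pdf") this works correctly, so the flaw
    # can ship unnoticed -- a vulnerability, not yet an exploit.
    path = os.path.join(BASE_DIR, filename)
    with open(path, "rb") as f:
        return f.read()

# A threat actor weaponizes the bug with crafted input:
#   read_user_file("../../etc/passwd")  escapes BASE_DIR entirely.

def read_user_file_safe(filename: str) -> bytes:
    # The fix: canonicalize the path and confirm it stays inside
    # the public directory before opening it.
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(BASE_DIR + os.sep):
        raise PermissionError("path escapes the public directory")
    with open(path, "rb") as f:
        return f.read()
```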
ALI WYNE: And Serge, when we're talking about the stakes here, obviously vulnerabilities can create cracks in the foundation that lead to cybersecurity incidents or attacks. What does it take for a software vulnerability to become weaponized?
SERGE DROZ: Well, that really depends on the particular vulnerability. A couple of years ago, there was a vulnerability that was really super easy to exploit: log4j. It was something that everybody could do in an afternoon, and that of course, is a really big risk. If something like that gets public before it's fixed, we really have a big problem. Other vulnerabilities are much harder to exploit also because software vendors, in particular operating system vendors have invested a great deal in making it hard to exploit vulnerabilities on their systems. The easy ones are getting rarer, mostly because operating system companies are building countermeasures that makes it hard to exploit these. Others are a lot harder and need specialists, and that's why they fetch such a high price. So there is no general answer, but the trend is it's getting harder, which is a good thing.
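(Editor's note: the Log4j bug Droz refers to is Log4Shell, CVE-2021-44228, which was trivial to trigger because the logger expanded ${...} lookups found inside logged data. The Python sketch below is a loose, hypothetical analogue of that pattern, invented for illustration; the real bug went much further, fetching and executing attacker-supplied code via JNDI.)

```python
import re

# Trusted configuration values the logger is allowed to expand.
registry = {"app.version": "1.4.2"}

def naive_log(message: str) -> str:
    # BUG: ${...} lookups are expanded inside *logged data*, not just
    # inside the developer's own format string, so attacker-controlled
    # input is interpreted rather than recorded literally.
    def expand(match: re.Match) -> str:
        return str(registry.get(match.group(1), match.group(0)))
    return re.sub(r"\$\{([^}]+)\}", expand, message)

# An attacker only has to get a string like this into any logged field
# (a username, a User-Agent header) to trigger the expansion:
print(naive_log("user agent: ${app.version}"))  # -> user agent: 1.4.2
```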
ALI WYNE: And Dustin, let me come back to you then. So who might discover these vulnerabilities first and what kinds of phenomena make them more likely to become a major security risk? And give us a sense of the timeline between when a vulnerability is discovered and when a so-called bad actor can actually start exploiting it in a serious way.
DUSTIN CHILDS: The people who are discovering these are across the board. They're everyone from lone researchers just looking at things to nation states, really reverse engineering programs for their own purposes. So a lot of different people are looking at bugs, and it could be you just stumble across it too and it's like, "Oh, hey. Look, it's a bug. I should report this."
So there's a lot of different people who are finding bugs. Not all of them are monetizing their research. Some people just report it. Some people will find a bug and want to get paid in one way or another, and that's what I do, is I help them with that.
But then once it gets reported, depending on what industry you're in, it usually takes 120 days up to a year for it to get fixed by the vendor. But if a threat actor finds it and it can be weaponized, they can do that within 48 hours. So even if a patch is available and that patch is well-known, the bad guys can take that patch, reverse engineer it, and turn it into an exploit within 48 hours and start spreading. So within 30 days of a patch being made available, widespread exploitation is not uncommon if a bug can be exploited.
ALI WYNE: Wow. So 48 hours, that doesn't give folks much time to respond, but thank you, Dustin, for giving us that number. I think we now have at least some sense of the problem, the scale of the problem, and we'll talk about prevention and solutions in a bit. But first, Serge, I want to come back to you. I want to go into some more detail about the reporting process. What are the best practices in terms of reporting these vulnerabilities that we've been discussing today? I mean, suppose if I were to discover a software vulnerability for example, what should I do?
SERGE DROZ: This is a really good question, and there's still a lot of ongoing debate, even though the principles are actually quite clear. If you find a vulnerability, your first step should be to confidentially inform the vendor, whoever is responsible for the software product.
But that actually sounds easier than it is because quite often it's hard to talk to a vendor. There are still some companies out there that don't talk to ‘hackers,’ in inverted commas. That's really bad practice. In this case, I recommend that you contact a national agency that you trust that can mediate between you. And that's all fairly easy to do if it's just between you and another party, but then you have a lot of vulnerabilities in products for which no one is really responsible: take open source, or components that are used in all the other products.
So we're talking about supply chain issues, and then things really become messy. And in these cases, I really recommend that people start working together with someone who's experienced in doing coordinated vulnerability disclosure. Quite often what happens is that within the affected industry, organizations get together and form a working group that silently starts mitigating the issue. Best practice is that you give the vendor three months or more to actually be able to fix a bug, because sometimes it's not that easy. What you really should not be doing is leaking any kind of information. Even saying, "Hey, I have found a vulnerability in product X," may actually trigger someone to start looking at it. So it's really important that this remains a confidential process where very few people are involved.
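(Editor's note: a minimal sketch, in Python, of the embargo logic behind the coordinated disclosure process Droz describes. The 90-day window reflects his "three months or more" guideline; the dates are invented for illustration.)

```python
from datetime import date, timedelta

reported = date(2024, 1, 15)        # confidential report to the vendor
fix_window = timedelta(days=90)     # "three months or more" to fix
embargo_ends = reported + fix_window

def may_publish(today: date, patch_shipped: bool) -> bool:
    # Details stay confidential until a patch exists AND the agreed
    # window has elapsed (or the vendor consents to earlier release).
    return patch_shipped and today >= embargo_ends

print(embargo_ends)                         # 2024-04-14
print(may_publish(date(2024, 3, 1), True))  # False: window still open
print(may_publish(date(2024, 5, 1), True))  # True: safe to publish
```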
ALI WYNE: So one popular method of uncovering these vulnerabilities that we've been discussing, it involves, so-called bug bounty programs. What are bug bounty programs? Are they a good tool for catching and reporting these vulnerabilities, and then moving beyond bug bounty programs, are there other tools that work when it comes to reporting vulnerabilities?
SERGE DROZ: Bug bounty programs are just one of the tools we have in our tool chest to actually find vulnerabilities. The idea behind a bounty program is that you have a lot of researchers who poke at code just because they're interested, and as the company or producer of the software, you offer them a bounty, some money, if they report a vulnerability responsibly, usually depending on how severe or how dangerous the vulnerability is, and you encourage good behavior this way. I think it's a really great way because it actually creates a lot of diversity. Typically, bug bounty programs attract a lot of different types of researchers, so you have different ways of looking at your code, and that often uncovers vulnerabilities no one has ever thought of, because no one really had that way of thinking. So I think it's a really good thing.
It also rewards people who responsibly disclose and don't just sell to the highest bidder, because we do have companies out there that buy vulnerabilities that then end up in some strange gray market, which is exactly what we don't want. So I think that's a really good thing. Bug bounty programs are complementary to what we call penetration testing, where you hire a company that, for money, starts looking at your software. There's no guarantee that they find a bug, but they usually have a systematic way of going over it, and you have an agreement. As I said, I don't think there's a single silver bullet, a single way to do this, but I think this is a great way to actually reward responsible disclosure. And some of the bug bounty researchers make a lot of money. They actually make a living off it. If you're really good, you can make a decent amount of money.
DUSTIN CHILDS: Yeah, and let me just add on to that as someone who runs a bug bounty program. There are a couple of different types of bug bounty programs too, and the most common one is the vendor specific one. So Microsoft buys Microsoft bugs, Apple buys Apple bugs, Google buys Google bugs. Then there's the ones that are like us. We're vendor-agnostic. We buy Microsoft and Apple and Google and Dell and everything else pretty much in between.
And one of the biggest things that we do as a vendor-agnostic program is an individual researcher might not have a lot of sway when they contact a big vendor like a Microsoft or a Google, but if they come through a program like ours or other vendor-agnostic programs out there, they know that they have the weight of the Zero Day Initiative or that program behind it, so when the vendor receives that report, they know it's already been vetted by a program and it's already been looked at. So it's a little bit like giving them a big brother that they can take to the schoolyard and say, "Show me where the software hurt you," and then we can help step in for that.
ALI WYNE: And Dustin, you've told us what bug bounty programs are. Why would someone want to participate in that program?
DUSTIN CHILDS: Well, researchers have a lot of different motivations, whether it's curiosity or just trying to get stuff fixed, but it turns out money is a very big motivator pretty much across the spectrum. We all have bills to pay, and a bug bounty program is a way to get something fixed and earn potentially a large amount of money depending on the type of bug that you have. The bugs I deal with range anywhere between $150 on the very low end, up to $15 million for the most severe zero click iPhone exploits being purchased by government type of thing, so there's all points in between too. So it's potentially lucrative if you find the right types of bugs, and we do have people who are exclusively bug hunters throughout the year and they make a pretty good living at it.
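(Editor's note: bounty payouts typically scale with severity. Below is a purely hypothetical tiering sketch in Python, using the $150-to-$15 million range Childs cites as its endpoints; the intermediate tiers are invented, and real programs publish their own schedules.)

```python
# Illustrative payout tiers -- not any real program's price list.
BOUNTY_TIERS = {
    "low":      150,         # minor flaw (Childs' low end)
    "medium":   5_000,
    "high":     50_000,
    "critical": 15_000_000,  # zero-click phone exploit (Childs' high end)
}

def quote_bounty(severity: str) -> int:
    """Look up the illustrative payout for a bug of a given severity."""
    return BOUNTY_TIERS[severity.lower()]

print(quote_bounty("high"))  # -> 50000
```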
ALI WYNE: Duly noted. So maybe I'm playing a little bit of a devil's advocate here, but if vulnerabilities, these cyber vulnerabilities, if they usually arise from errors in code or other technology mistakes from companies, aren't they principally a matter of industry responsibility? And wouldn't the best prevention just be to regulate software development more tightly and avoid these mistakes from getting out into the world in the first place?
DUSTIN CHILDS: Oh, you used the R word. Regulation, that's a big word in this industry. So obviously it's less expensive to fix bugs in software before it ships than after it ships. So yes, obviously it's better to fix these bugs before they reach the public. However, that's not really realistic because, like I said, every piece of software has bugs, and you could spend a lifetime testing and testing and testing and never root them all out, and then never ship a product. So the industry right now is definitely looking to ship product. Can they do a better job? I certainly think they can. I spend a lot of money buying bugs, and with some of them I'm like, "Ooh, that's a silly bug that never should have shipped." So absolutely, the industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.
ALI WYNE: Obviously there isn't any silver bullet when it comes to managing these vulnerabilities, disclosing these vulnerabilities. So assuming that we probably can't eliminate all of them, how should organizations deal with fixing these issues when they're discovered? And is there some kind of coordinated vulnerability disclosure process that organizations should follow?
DUSTIN CHILDS: There is a coordinated disclosure process. I mean, I've been in this industry for 25 years and have been dealing with vulnerability disclosures since 2008 personally, so this is a well-known process. As an industry, if you're developing software, one of the most important things you can do is make sure you have a contact. If someone finds a bug in your program, who do they email? With the more established programs like Microsoft and Apple and Google, it's very clear who you're supposed to email if you find a bug and what you're supposed to do with it. One of the problems we have as a bug bounty program is that if we purchase a bug in a lesser-known piece of software, sometimes it's hard for us to hunt down who actually is responsible for maintaining and updating it.
We've even had to go on to Twitter and LinkedIn to try and hunt down some people to respond to an email saying, "Hey, we've got a bug in your program." So one of the biggest things you can do is just be aware that somebody could report a bug to you. And as a consumer of the product, you need a patch management program. You can't just rely on automatic updates. You can't just rely on things happening automatically or easily. You need to understand first what is in your environment, so you have to be ruthless in your asset discovery, and I do use the word ruthless there intentionally. You've got to know what is in your enterprise to be able to defend it, and then you've got to have a plan for managing it and patching it. That's a lot easier said than done, especially in a modern enterprise where not only do you have desktops and laptops, you've got IT devices, you've got IoT devices, you've got thermostats, you've got little screens everywhere that need updating, and they all have to be included in that patch management process.
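(Editor's note: Childs' advice reduces to two steps — discover every asset, then track whether each one is patched. Here is a minimal, hypothetical sketch in Python; the device names and version numbers are invented, and a real deployment would populate the inventory from network scans rather than a hard-coded list.)

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    installed: str  # currently installed firmware/software version

# Step 1: "ruthless asset discovery" -- everything on the network counts,
# including thermostats and wall screens, not just laptops.
inventory = [
    Asset("laptop-042", "10.3.1"),
    Asset("thermostat-lobby", "2.0.0"),
    Asset("conference-screen-3", "1.8.5"),
]

# Step 2: compare each asset against the latest known-good version.
latest = {
    "laptop-042": "10.3.2",
    "thermostat-lobby": "2.0.0",
    "conference-screen-3": "1.9.0",
}

for asset in inventory:
    target = latest.get(asset.name, asset.installed)
    status = "OK" if asset.installed == target else "NEEDS PATCH"
    print(f"{asset.name}: {asset.installed} -> {target} [{status}]")
```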
ALI WYNE: Serge, when it comes to triaging vulnerabilities, it doesn't sound like there's a large need for government participation. So what are some of the reasons, legitimate and maybe less than legitimate, why governments might increasingly want to be notified about vulnerabilities even before patches are available? What are their motivations?
SERGE DROZ: So I think there are several different motivations. Governments are getting increasingly fed up with these kinds of excuses that our industry, the software industry, makes about how hard it is to avoid software vulnerabilities, all the reasons and excuses we bring for not doing our jobs. And frankly, as Dustin said, we could be doing better. Governments just want to know so they can actually send the message that, "Hey, we're watching you and we want to make sure you do your job." Personally, I'm not really convinced this is going to work. So those would be mostly the legitimate reasons why governments want to know about vulnerabilities. I think it's fair that the government knows or learns about a vulnerability after the fact, just to get an idea of what the risk is for the entire industry. Personally, I feel only the parties that need to know should know during the responsible disclosure.
And then of course, there are governments that like vulnerabilities because they can abuse them themselves. I mean, governments are known to exploit vulnerabilities through their favorite three-letter agencies. That's actually quite legitimate for governments to do; it's not illegal for governments to do this type of work. But of course, as a consumer or as an end user, I don't like it. I don't want products that have vulnerabilities that are exploited. And personally, from a civil society point of view, there's just too much risk with this being out there. So my advice really is: the fewer people and organizations that know about a vulnerability, the better.
DUSTIN CHILDS: What we've been talking about a lot so far is what we call coordinated disclosure, where the researcher and the vendor coordinate a response. When you start talking about governments though, you start talking about non-disclosure, and that's when people hold onto these bugs and don't report them to the vendor at all, and the reason they do that is so that they can use them exclusively. So that is one reason why governments hold onto these bugs and want to be notified is so that they have a chance to use them against their adversaries or against their own population before anyone else can use them or even before it gets fixed.
ALI WYNE: So the Cybersecurity Tech Accord recently released a statement opposing the kinds of reporting requirements we've been discussing. From an industry perspective, what are the concerns when it comes to reporting vulnerabilities to governments?
DUSTIN CHILDS: Really the biggest concern is making sure that we all have an equitable chance to get it fixed before it gets used. If a single government starts using vulnerabilities to exploit for its own gain, that puts the rest of the world at a disadvantage, and that's the rest of the world, their allies as well as their opponents. So we want to do coordinated disclosure. We want to get the bugs fixed in a timely manner, and governments keeping bugs to themselves really discourages that. It discourages finding bugs, it discourages reporting bugs. It really discourages vendors from fixing bugs too, because if the vendors know that the governments are just going to be using these bugs, they might get a phone call from their friendly neighborhood three-letter agency saying, "You know what? Hold off on fixing that for a while." Again, it just puts us all at risk, and we saw this with Stuxnet.
Stuxnet was a tool that was developed by governments targeting another government. It was targeting Iranian nuclear facilities, and it did do damage to Iranian nuclear facilities, but it also did a lot of collateral damage throughout Europe as well, and that's what we're trying to avoid. It's like if it's a government on government thing, great, that's what governments do, but we're trying to minimize the collateral damage from everyone else who was hurt by this, and there really were a lot of other places that were impacted negatively from the Stuxnet virus.
ALI WYNE: And Serge, what would you say to someone who might respond to the concerns that Dustin has raised by saying, "Well, my government is advanced and capable enough to handle information about vulnerabilities responsibly and securely, so there's no issue or added risk in reporting to them." What would you say to that individual?
SERGE DROZ: The point is that there are certain things that you really only deal with on a need-to-know basis. That's something that governments actually understand well. When governments deal with confidential or critical information, it's always on a need-to-know basis. They don't tell it to every government employee, even though those employees are, of course, loyal. Sharing more widely makes the risk of a leak bigger, even if the government doesn't have any ill intent. So there's just no need, the same way there's no need for all the other hundred thousand security researchers to know about it. So I think as long as you cannot contribute constructively to mitigating a vulnerability, you should not be part of that process.
Having said that, though, there are some governments that have actually tried really hard to help researchers make contact with vendors. Some researchers are afraid to report vulnerabilities because they fear they'll come under pressure, or things like that. So if a government wants to take that role and can create enough trust that researchers trust it, I don't really have a problem. But it should not be mandatory. Trust needs to be earned. You cannot legislate this, and every time you have to legislate something, I mean, come on, you legislate it because people don't trust you.
ALI WYNE: We've spent some time talking about vulnerabilities and why they're a problem. We've discussed some effective and maybe some not-so-effective ways to prevent or manage them better. And I think governments have a legitimate interest in knowing that companies are acting responsibly, and that interest is the impetus behind at least some of the push for more regulation and reporting. But what does each of you see as other ways that governments could help ensure that companies are mitigating risks and protecting consumers as much as possible?
DUSTIN CHILDS: So one of the things that we're involved with here at the Zero Day Initiative is encouraging governments to allow safe harbor. What that really means is that researchers can report vulnerabilities to a vendor without the legal threat of being sued or having other action taken against them. As long as they are legitimate researchers trying to get something fixed, not trying to steal anything or violate laws, they're able to do that without facing legal consequences.
One of the biggest things that we do as a bug bounty program is just handle the communications between researchers and the vendors, and that is really where it can get very contentious. So to me, one of the things that governments can do to help is make sure that safe harbor is allowed so that the researchers know that, "I can report this vulnerability to this vendor without getting in touch with a lawyer first. I'm just here trying to get something fixed. Maybe I'm trying to get paid as well," so maybe there is some monetary value in it, but really they're just trying to get something fixed, and they're not trying to extort anyone. They're not trying to create havoc, they're just trying to get a bug fixed, and that safe harbor would be very valuable for them. That's one thing we're working on with our government contacts, and I think it's a very big thing for the industry to assume as well.
SERGE DROZ: Yes, I concur with Dustin. I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable. That also includes a regulatory framework that actually gets away from this blaming. I mean, writing software is hard; bugs appear. If you just constantly keep bashing people for not doing it right, or you threaten them with liabilities, they're not going to talk to you about these types of things. So I think the job of the government is to encourage responsible behavior and to create an environment for it. And maybe there are always going to be a couple of black sheep, and here maybe the role of the government is really to encourage them to play along and start offering vulnerability reporting programs. That's where I see the role of the government: creating good governance to actually enable responsible vulnerability disclosure.
ALI WYNE: Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan. And Serge Droz from the Forum of Incident Response and Security Teams, a community of IT security teams that respond when there is a major cyber crisis. Dustin, Serge, thanks very much for joining me today.
DUSTIN CHILDS: You're very welcome. Thank you for having me.
SERGE DROZ: Yes, same here. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. We have five episodes this season covering everything from cyber mercenaries to a cybercrime treaty. So follow Ian Bremmer's GZERO World feed anywhere you get your podcast to hear more. I'm Ali Wyne. Thanks very much for listening.
Podcast: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Listen: As the world of cybercrime continues to expand, it follows that more international legal standards will be needed. But while many governments around the globe see the need for a cybercrime treaty to set a standard, a current proposal on the table at the United Nations is raising concerns among private companies and nonprofit organizations alike. There are fears it covers too broad a scope of crime and could fail to protect free speech and other human rights across borders, all without actually having the intended effect of combating cybercrime.
In season 2, episode 4 of Patching the System, we focus on the international system of online peace and security. In this episode, we hear about provisions currently included in the proposed Russia-sponsored UN cybercrime treaty as deliberations continue - and why they might cause more problems than they solve.
Our participants are:
- Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord
- Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation
- Ali Wyne, Eurasia Group Senior Analyst (moderator)
GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.
TRANSCRIPT: Would the proposed UN Cybercrime Treaty hurt more than it helps?
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.
NICK ASHTON HART: We want to actually see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow cybercrime to go unpunished. That's in no one's interest.
KATITZA RODRIGUEZ: By allowing countries to set their own standards of what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose of tackling cybercrime without undermining human rights.
ALI WYNE: It's difficult to overstate the growing impact of international cybercrime. Many of us either have been victims of criminal activity online or know someone who has been.
Cybercrime is also a big business. It's one of the top 10 risks highlighted in the World Economic Forum's 2023 Global Risks Report, and it's estimated that it could cost the world more than $10 trillion by 2025. Global challenges require global cooperation, but negotiations of a new UN Cybercrime Treaty have been complicated by questions around power, free speech, and privacy online.
Welcome to Patching the System, a special podcast from the Global Stage series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.
In this episode, we'll explore the current draft of what would be the first United Nations Cybercrime Treaty, the tense negotiations behind the scenes, and the stakes that governments and private companies have in those talks.
Last season we spoke about the UN Cybercrime Treaty negotiations when they were still relatively early on in the process. While they had been kicked off by a Russia-sponsored resolution that passed in 2019, there had been delays due to COVID-19.
In 2022, there was no working draft and member states were simply making proposals about what should be included in a cybercrime treaty, what kinds of criminal activity it should address, and what kinds of cooperation it should enable.
Here's Amy Hogan-Burney of the Microsoft Digital Crimes Unit speaking back then:
AMY HOGAN-BURNEY: There is a greater need for international cooperation because as cyber crime escalates, it’s clearly borderless and it clearly requires both public sector and the private sector to work on the problem. Although I am just not certain that I think that a new treaty will actually increase that cooperation. And I’m a little concerned that it might do more harm than good. And so, yes, we want to be able to go after cyber criminals across jurisdiction. But at the same time, we want to make sure that we’re protecting fundamental freedoms, always respectful of privacy and other things. Also, we’re always mindful of authoritarian states that may be using these negotiations to criminalize content or freedom of expression.
Now a lot has happened since then as we've moved from the abstract to the concrete. The chair of the UN Negotiating Committee released a first draft of the potential new cybercrime treaty last June, providing the first glimpse into what could be new international law and highlighting exactly what's at stake. The final draft is expected in November with the diplomatic conference to finalize the text starting in late January 2024.
Joining me are Nick Ashton-Hart, head of delegation to the Cybercrime Convention Negotiations for the Cybersecurity Tech Accord and Katitza Rodriguez, policy director for global privacy at a civil society organization, the Electronic Frontier Foundation. Thanks so much for speaking with me today.
KATITZA RODRIGUEZ: Thank you for inviting us.
NICK ASHTON-HART: It's a pleasure to be here.
ALI WYNE: Let's dive right into the cybercrime treaty. Now, this process started as a UN resolution sponsored by Russia and it was met early on by a lot of opposition from Western democracies, but there were also a lot of member states who genuinely thought that it was necessary to address cybercrime. So give us the broad strokes as to why we might want a cybercrime treaty?
NICK ASHTON-HART: The continuous expansion of cybercrime at an explosive growth rate is clearly a problem and one that the private sector would like to see more effectively addressed because of course, we're on the front lines of addressing it as victims of it. At one level it sounds like an obvious candidate for international action.
In reality, of course, there is the Budapest Convention on cybercrime, which was agreed in 2001. It is not just a convention for European countries; any member state can join. If there hadn't been any international convention, then you could see how this would be an obvious thing to work on.
This was controversial from the beginning because there already is one, and it's widely implemented: I think 68 countries have joined, but 120 countries' laws have actually been impacted by the convention. There was also a question because of who was asking for it, which raised more questions than answers.
KATITZA RODRIGUEZ: For us in civil society, I don't think the treaty is necessary, because there are other international treaties. But I do understand why some states are pushing for this treaty: they feel that their system for law enforcement cooperation is just too slow or not reliable. And they have argued that they have not been able to set up effective mutual legal assistance treaties. But we think those reasons fall short, especially because a lot of the existing mechanisms include solid human rights safeguards, and when the existing mutual legal assistance treaties for international cooperation don't work well, we believe they can be improved and fixed.
And let's be real: there are times when not cooperating is actually the right thing to do, especially when criminal investigations could lead to the prosecution of individuals for their political beliefs, their sexual orientation or gender identity, or simply for speaking out, protesting peacefully, or insulting the president or the king.
On top of that, this treaty as it stands now might not even make the cybercrime cooperation process any faster. The negotiators are aiming for mandatory cooperation on almost all crimes on this planet, not just cybercrimes. This could end up bogging down the system even more.
ALI WYNE: Nick, let me just ask you, are there any specific aspects of a new global cybercrime treaty that you think could be genuinely helpful to citizens around the world?
NICK ASHTON-HART: Well, for one, it should focus only on cybercrime; that would be the most fundamental issue. The current trajectory would have this convention address all crimes of any kind, which is clearly an ocean-boiling exercise and creates many more problems than it solves. There are many developing countries who will say, as Katitza has noted, that they don't receive timely law enforcement cooperation through the present system, because if you are not a part of the Budapest Convention, honestly, you have to have a bilateral treaty relationship with every country that you want law enforcement cooperation with.
And clearly, every country negotiating a mutual legal assistance treaty with 193 others is not a recipe for an international system that's actually effective. That's where an instrument like this can come in and set a basic common set of standards so that all parties feel confident that the convention’s provisions will not be taken advantage of for unacceptable purposes.
ALI WYNE: Katitza, I want to bring you back into the conversation. On balance, what do you think of the draft of the treaty as it stands now as we approach the end of 2023?
KATITZA RODRIGUEZ: Honestly, I'm pretty worried. The last negotiation session in New York made it crystal clear that we're short of time and there is still a lot left undecided, especially on critical issues like defining the treaty scope and ensuring human rights are protected.
The treaty was supposed to tackle cybercrime, but it's morphing into something much broader: a general-purpose surveillance tool that could apply to any crime, tech involvement or not, as long as there is digital evidence. We're extremely far from the original goal and opening a can of worms. I agree with Nick when he said that a treaty with a tight focus on just actual cybercrimes, topped with solid human rights protections, could really make a difference. But sadly, what we are seeing right now is very far from that.
Many countries are pushing for sweeping surveillance powers, hoping to access real-time location data and communications for a wide array of crimes with minimal legal safeguards, the checks and balances that put limits on potential abuse of power. This is a big red flag for us.
On the international cooperation front, it's a bit of a free-for-all: the treaty leaves it up to individual countries to set their own standards for privacy and human rights when using these surveillance powers in cross-border investigations.
And we know that the standards of some countries fall very far below minimal standards, yet every country that signs the treaty is expected to implement these cross-border cooperation powers. And here's where it gets really tricky: this sets a precedent for international cooperation on investigations even into activities that might be considered criminal in one country but are actually forms of free expression. That includes laws against so-called fake news, peaceful protest, blasphemy, or expressing non-conforming sexual orientation or gender identity. These are matters of human rights.
ALI WYNE: Nick, from your perspective, what are the biggest concerns for industry right now with the text, with the negotiations as they're ongoing? What are the biggest concerns for industry and is there any major provision that you think is missing right now from the current text?
NICK ASHTON-HART: Firstly, I will say that industry actually agrees with everything you just heard from EFF. And that's one of the most striking things about this negotiation, is in more than 25 years of working in multilateral policy, I have never seen all NGOs saying the same thing to the extent that is the case in this negotiation. Across the board, we have the same concerns. We may emphasize some more than others or put a different level of emphasis on certain things, but we all agree comprehensively, I think, about the problems.
One thing that's very striking is this is a convention which is fundamentally about the sharing of personal information about real people between countries. There is no transparency at all at any point. In fact, the convention repeatedly says that all of these transfers of information should be kept secret.
This is the reality they are talking about agreeing to: a convention where countries globally share the personal information of citizens with no transparency at all. Ask yourself whether that is a situation that isn't likely to be abused, because I think we know the answer. It's the old joke that you find out who somebody really is when you put them in a room and turn the lights off. Well, the lights are off, and the light switch doesn't exist in this treaty.
And so, to us, it is simply invidious that in 2024 you would see that bearing the UN logo; it would be outrageous. And that's just the starting place. There are also provisions that would allow one country to ask another to seize the person of, say, a tech worker who is on holiday, or a government worker who is traveling and has access to passwords for secure systems, and demand that that person turn over those codes with no reference back to their employer.
As Katitza has said, it also allows for countries to ask others to provide the location data and communication metadata about where a person is in real time along with real time access to their computer information. This is clearly subject to abuse, and we brought this up with some delegations and they said, "Well, but countries do this already, so do we have to worry about it?"
I just found that an astonishing level of cynicism: the fact that people abuse international law isn't an argument against trying to limit their ability to do it in this context. We have a fundamental disconnect where we're being asked to trust all countries in the world to operate in the dark, in secret, forever, and to believe that that will work out well for human rights.
ALI WYNE: Katitza, let me bring you back into the conversation. You heard Nick's assessment. I'd like to ask you to react to that assessment and also to follow up with you, do you think that there are any critical provisions that need to be added to the current text of the draft treaty?
KATITZA RODRIGUEZ: Well, I agree with many of the points that Nick made. One, keeping a sharp focus on so-called cybercrimes is not only crucial for protecting human rights, from our point of view, but is also key to making this whole cooperation work. We have countries left and right pointing out the flaws in the current international cooperation mechanisms, saying they are too flawed, too complex. And yet here we are, heading toward a treaty that could cover a limitless list of crimes. That's not just missing the point; it's setting us up for even more complexity, when the goal should be working together better and more easily to tackle very serious crimes like the ransomware attacks we have seen everywhere lately.
There are a few things that are also very problematic in the details. One, which Nick mentioned, is the provision that could be used to coerce individual engineers, people with the knowledge needed to access systems, and compel them to bypass their own security measures or those of their employers, without the company knowing. That puts the engineer in trouble, because they won't be able to tell their employer that they are working on behalf of law enforcement. These provisions are really draconian, and they are also very bad for security, for encryption, and for keeping us safe.
But there's another provision that is also very problematic for us, one on international cooperation, where it says that states should share "items or data required for analysis of investigations." As phrased, it is very vague and leaves room for states to share entire databases or artificial intelligence training data. This could include biometric data, which is very sensitive; it's a human rights minefield. We have seen how biometric data and face and voice recognition can be used against protesters, minorities, journalists, and migrants in certain countries. This treaty shouldn't become a tool that facilitates such abuses on an international scale.
And we also know that Interpol, which is in the mix too, is developing a massive predictive analytics system fed by all sorts of data, including information provided by member states. The issue with predictive policing is that it's often pitched as unbiased because it's based on data rather than personal judgment, but we know that's far from the truth. It's bound to disproportionately affect Black and other over-policed communities. The data fed into these systems comes from a racially biased criminal punishment system, and arrests in Black neighborhoods are disproportionately high. Even without explicit racial information, the data is tainted.
One other point: the human rights safeguards in the treaty. As Nick says, everything happens in secret, with no transparency in the negotiation, and we fully agree on that. But the safeguards themselves are also very weak.
As it stands, the main human rights safeguards in the treaty don't even apply to the international cooperation chapter, which is a huge gap. The treaty defers to national law, whatever national law says, and as I said before, in some countries that law is good and in others it's bad, and that's really problematic.
ALI WYNE: Nick, in terms of the private sector and in terms of technology companies, what are the practical concerns when it comes to potential misuses or abuses of the treaty from the perspective specifically of the Cybersecurity Tech Accord?
NICK ASHTON-HART: At the present time, none of the criminal acts listed in the convention actually requires criminal intent. The criminal acts are only defined as "acts done intentionally without right." This opens the door to all kinds of abuses. For example, security researchers often attempt to break into systems in order to find defects that they can then report to the vendors so these can be fixed. This is a fundamentally important activity for the security of all systems globally. They are intentionally breaking into the system, but not for a negative purpose; for an entirely positive one.
But the convention does not recognize how important it is not to criminalize security researchers. The Budapest Convention, by contrast, actually does. It has very extensive notes on implementation, which are part of the ratification process, meaning countries should not only implement the exact text of the convention but should do so in a rule-of-law-based environment that, among other things, protects security researchers.
We have consistently said to the member states, "You need to make clear that criminal intent is the standard." The irony is that this is not complicated, because it rests on a fundamental concept of criminal law called mens rea: with the exception of certain strict liability offenses, for someone to be convicted, you have to find that they had criminal intent.
Without that, you have the security researchers' problem. You also have the issue of whistleblowers, who routinely provide information without authorization, for example to journalists or to government watchdog agencies. Those people would also fall foul of the convention as it's currently written, as would journalists' sources, depending on the legal environment in which it is implemented. Like civil society, we have consistently pointed out these glaring omissions, and yet no country, including the developed Western countries you would expect to seize on this, has moved to include protections for any of these situations.
I have to say one of the most disappointing things about this negotiation is that, so far, most of the Western democracies are not acting to prevent abuses of this convention. They are resisting the efforts of all of us in civil society and the private sector urging them to take action. There are two notable exceptions, New Zealand and Canada, but the rest, frankly, are not very helpful.
Another issue is conflict of law: a country asks a provider for cooperation, and the provider says, "Look, if we provide this information to you, it's coming from another jurisdiction, and handing it over would cause us to break the law in that jurisdiction." We have repeatedly said to the member states, "You need to provide for this situation, because it happens routinely today, and in such an instance it should be up to the two cooperating states to work out between themselves how that data can be provided in a way that does not require the provider to break the law."
If you want to see more effective cooperation and more expeditious cooperation, you would want more safeguards, as Katitza has mentioned. There's a direct connection between how quickly cooperation requests go through and the level of safeguards and comfort with the legal system of the requesting and requested states.
Where a request goes through quickly, it's because both states see that their legal systems are broadly compatible in terms of rights, the treatment of accused persons, appeals, and the like. They not only see that the crimes are the same, which is called dual criminality, but also that the accused will be treated in a way that's broadly compatible with the home jurisdiction. So there's a natural logic to saying, "Since we know this is the case, we should provide for it here and ensure robust safeguards, because that will produce the cooperation everyone wants." Unfortunately, the opposite is happening: the cooperation elements continue to be weakened by poor safeguards.
ALI WYNE: I think both of you have made clear that the stakes are very high: whether this treaty comes to pass, what the final text will be, what the final provisions will be. But just to put a fine point on it, are there concerns that this treaty could also set a precedent for future cybercrime legislation across jurisdictions? I can imagine it serving as a north star in places that don't already have cybercrime laws. Katitza, let me begin with you.
KATITZA RODRIGUEZ: Yes, those concerns are indeed very valid and very pressing. By setting a precedent where broad, intrusive surveillance tools are made available for an extensive range of crimes, we risk normalizing a global landscape where human rights are secondary to state surveillance and control. Law enforcement needs assured access to data, but checks and balances and safeguards are what ensure we can differentiate between the good cops and the bad cops. The treaty provides a framework that could empower states to use the guise of cybercrime prevention to clamp down on activities that are protected under human rights law.
And I think this broad approach not only diverts valuable resources and attention away from tackling genuine cybercrimes, but also offers, and here is the answer to your question, a template for future legislation that could facilitate these repressive state practices. It sends the message that it is acceptable to use invasive surveillance tools to gather evidence for any crime a particular country deems serious, irrespective of the human rights implications. And that's wrong.
By allowing countries to set their own standards for what constitutes a serious crime, the states are opening the door for authoritarian countries to misuse this treaty as a tool for persecution. The treaty needs to be critically examined and revised to ensure that it truly serves its purpose of tackling cybercrimes without undermining human rights. The stakes are high, and I know it's difficult, but we're talking about the UN and the UN Charter. The international community must work together to protect both security and fundamental rights.
NICK ASHTON-HART: I think Katitza has hit the nail on the head, and there's one particular element I'd like to add: something like 40% of the world's countries either do not have cybercrime legislation or are revising quite old cybercrime legislation. They have told us they are coming to this convention because they believe it can be the forcing mechanism, the template they can use to ensure they get the cooperation they're interested in.
So the normative impact of this convention would be far greater than in a different situation, for example, where there was already a substantial level of legislation globally and it had been in place in most countries for a long enough period for them to have a good baseline of experience in what actually works in prosecuting cybercrimes and what doesn't.
But we're not in that situation. We're pretty much in the opposite situation and so this convention will have a disproportionately high impact on legislation in many countries because with the technical assistance that will come with it, it'll be the template that is used. Knowing that that is the case, we should be even more conservative in what we ask this convention to do and even more careful to ensure that what we do will actually help prosecute real cybercrimes and not facilitate cooperation on other crimes.
This makes things even more concerning for the private sector. We want to see a result that improves the situation for real citizens, that actually protects victims of real crimes, and that doesn't allow, as is unfortunately the case today, even large-scale cybercrime to go unpunished. That's in no one's interest, but this convention will not actually help with that. At this point we have to see it as net harmful to that objective, which is supposed to be a core objective.
ALI WYNE: We've discussed quite extensively the need for international agreements when it comes to cybercrime. We've also mentioned some of the concerns about the current deal on the table. Nick, what would you need to see to mitigate some of the concerns that you have about the current deal on the table?
NICK ASHTON-HART: The convention should be limited to the offenses it contains; its provisions should not be available for any other criminal activity or cooperation. That would be the starting place. The second thing would be to inscribe crimes that are self-evidently criminal by providing for mens rea in all the articles, to avoid the problems with whistleblowers, journalists, and security researchers. There should be a separate commitment that the provisions of this convention do not apply to actors acting in good faith to secure systems, such as those I have described. There must also, we think, be transparency: there is no excuse for a user not to be notified once the offense for which their data was accessed has been adjudicated or the prosecution abandoned, and that should be explicitly provided.
People have a right to know what governments are doing with their personal information. We also think dual criminality should be much clearer. It should be straightforward that without dual criminality, no cooperation under the convention will take place; when it is clear that it is basically the same crime in all the cooperating jurisdictions, requests go through more quickly. I would say those are the most important.
ALI WYNE: Katitza, you get the last word. What would you need to see to mitigate some of the concerns that you've expressed in our conversation about the current draft text on the table?
KATITZA RODRIGUEZ: First of all, we need to rethink how we handle refusals of cross-border investigation requests. The treaty is just too narrow here, offering barely any room to say no, even when the request to cooperate violates or is inconsistent with human rights law. We need to make dual criminality a must for invoking the international cooperation powers, as Nick says. The dual criminality principle is a safeguard: if the conduct is not a crime in both countries involved, the treaty shouldn't allow any assistance. We also need clear, mandatory, robust human rights safeguards in all international cooperation, with notification, transparency, and oversight mechanisms. Countries need to actively think about potential human rights violations before using these powers.
It also helps if we allow cooperation only for genuine, core cybercrimes, and not just any crime involving a computer or generating electronic evidence; today even an electronic toaster can leave digital evidence.
I just want to conclude by saying that actual cybercrime investigations are often highly sophisticated, and there's a case to be made for an international effort focused on investigating those crimes. But including every crime under the sun in its scope is, sorry, really a big problem.
This treaty fails to create that focus. Second, it also fails to provide the safeguards for security researchers that Nick explained; we fully agree on that. Security researchers are the ones who make our systems safe. Criminalizing what they do without effective safeguards really contradicts the core aim of the treaty, which is to make us more secure and to fight cybercrime. So we need a treaty that is narrow in scope and protects human rights. As drafted, however, this is a cybercrime treaty that may well do more to undermine cybersecurity than to help it.
ALI WYNE: A really thought-provoking note on which to close. Nick Ashton-Hart, head of delegation to the cybercrime convention negotiations for the Cybersecurity Tech Accord, and Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation, a civil society organization. Nick, Katitza, thank you so much for speaking with me today.
NICK ASHTON-HART: Thanks very much. It's been a pleasure.
KATITZA RODRIGUEZ: Thanks for having me on. Thank you very much. It was a pleasure.
ALI WYNE: That's it for this episode of Patching the System. Catch all of the episodes from this season, exploring topics such as cyber mercenaries and foreign influence operations by following Ian Bremmer's GZERO World feed anywhere you get your podcasts. I'm Ali Wyne, thanks for listening.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Why privacy is priceless
If someone were to get a few pictures off your phone without your permission, what's the big deal, right? Don't be so blasé, says human rights attorney David Haigh, who was prominently targeted with the powerful Pegasus spyware in 2021.
"If someone breaches your private life, that is a gateway to very, very serious breaches of other human rights, like your right to life and right to all sorts of other things," he said. "That's why I think a lot of governments and public sector don't take things as seriously as they should."
Right now, he says, dictators can buy your privacy, "and with it, your life."
Haigh spoke with Eurasia Group Senior Analyst Ali Wyne as part of “Caught in the Digital Crosshairs,” a panel discussion on cybersecurity produced by GZERO in partnership with Microsoft and the CyberPeace Institute.
Watch the full Global Stage conversation: The devastating impact of cyberattacks and how to protect against them