ISF Podcast

By: Information Security Forum Podcast
  • Summary

  • The ISF Podcast brings you cutting-edge conversation, tailored to CISOs, CTOs, CROs, and other global security pros. In every episode of the ISF Podcast, Chief Executive Steve Durbin speaks with rule-breakers, collaborators, culture builders, and business creatives who manage their enterprise with vision, transparency, authenticity, and integrity. From the Information Security Forum, the leading authority on cyber, information security, and risk management.
Episodes
  • S25 Ep5: Boosting Business Success: Unleashing the potential of human and AI collaboration
    Apr 30 2024
    Today, Steve and producer Tavia Gilbert discuss the impact artificial intelligence is having on the threat landscape and how businesses can leverage this new technology and collaborate with it successfully.

    Key Takeaways:
    1. AI risk is best presented in business-friendly terms when seeking to engage executives at the board level.
    2. Steve Durbin takes the position that AI will not replace leadership roles, as human strengths like emotional intelligence and complex decision making are still essential.
    3. AI risk management must be aligned with business objectives while ethical considerations are integrated into AI development.
    4. Since AI regulation will be patchy, effective mitigation and security strategies must be built in from the start.


    Tune in to hear more about:
    1. AI’s impact on cybersecurity, including industrialized high-impact attacks and manipulation of data (0:00)
    2. AI collaboration with humans, focusing on benefits and risks (4:12)
    3. AI adoption in organizations, cybersecurity risks, and board involvement (11:09)
    4. AI governance, risk management, and ethics (15:42)


    Standout Quotes:

    1. “Cyber leaders have to present security issues in terms that board level executives can understand and act on, and that's certainly the case when it comes to AI. So that means reporting AI risk in financial, economic, operational terms, not just in technical terms. If you report in technical terms, you will lose the room exceptionally quickly. It also involves aligning AI risk management with business needs by, you know, identifying how AI risk management and resilience are going to help to meet business objectives. And if you can do that, as opposed to losing the room, you will certainly win the room.” -Steve Durbin

    2. “AI, of course, does provide some solution to that, in that if you can provide it with enough examples of what good looks like and what bad looks like in terms of data integrity, then the systems can, to an extent, differentiate between what is correct and what is incorrect. But the fact remains that data manipulation, changing data, whether that be in software code, whether it be in information that we're storing, all of those things remain a major concern.” -Steve Durbin

    3. “We can’t turn the clock back. So at the ISF, you know, our goal is to try to help organizations figure out how to use this technology wisely. So we're going to be talking about ways humans and AI complement each other, such as collaboration, automation, problem solving, monitoring, oversight, all of those sorts of areas. And I think for these to work, and for us to work effectively with AI, we need to start by recognizing the strengths both we as people and also AI models can bring to the table.” -Steve Durbin

    4. “I also think that boards really need to think through the impact of what they're doing with AI on the workforce, and indeed, on other stakeholders. And last, but certainly not least, what the governance implications of the use of AI might look like. And so therefore, what new policies and controls need to be implemented.” -Steve Durbin

    5. “We need to be paying specific attention to things like ethical risk assessment, working to detect and mitigate bias, ensuring that there is, of course, informed consent when somebody interacts with AI. And we do need, I think, to be particularly mindful about bias, you know? Bias detection, bias mitigation. Those are fundamental, because we could end up making all sorts of decisions, or having the machines make decisions, that we didn't really want. So there's always going to be in that area, I think, in particular, a role for human oversight of AI activities.” -Steve Durbin


    Mentioned in this episode:

    • ISF Analyst Insight Podcast

    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.
    23 mins
  • S25 Ep4: Brian Lord - AI, Mis- and Disinformation in Election Fraud and Education
    Apr 23 2024
    This is the second of a two-part conversation between Steve and Brian Lord, who is currently the Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as Deputy Director of a UK Government Agency, where he oversaw the organization's Cyber and Intelligence Operations. Today, Steve and Brian discuss the proliferation of mis- and disinformation online, the potential security threats posed by AI, and the need for educating children in cyber awareness from a young age.

    Key Takeaways:
    1. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.
    2. AI’s increasing ability to create fabricated images poses a particular threat to youth and other vulnerable users.

    Tune in to hear more about:
    1. Brian gives his assessment of cybersecurity threats during election years. (16:04)
    2. Exploitation of vulnerable users remains a major concern in the digital space, requiring awareness, innovative countermeasures, and regulation. (31:00)

    Standout Quotes:

    1. “I think when we look at AI, we need to recognize it is a potentially long term larger threat to our institutions, our critical mass and infrastructure, and we need to put in countermeasures to be able to do that. But we also need to recognize that the most immediate impact on that is around what we call high harms, if you like. And I think that was one of the reasons the UK — over a torturously long period of time — introduced the Online Harms Bill to be able to counter some of those issues. So we need to get AI in perspective. It is a threat. Of course it is a threat. But I see then when one looks at AI applied in the cybersecurity context, you know, automatic intelligence developing hacking techniques, bear in mind, AI is available to both sides. It's not just available to the attackers, it's available to the defenders. So what we are simply going to do is see that same kind of thing that we have in the more human-based countering of the cybersecurity threat in an AI space.” -Brian Lord

    2. “The problem we have now — now, one can counter that by the education of children, keeping them aware, and so on and so forth— the problem you have now is the ability, because of the availability of imagery online and AI's ability to create imagery, one can create an entirely fabricated image of a vulnerable target and say, this is you. Even though it isn’t … when you're looking at the most vulnerable in our society, that's a very, very difficult thing to counter, because it doesn't matter whether it's real to whoever sees it, or the fear from the most vulnerable people, people who see it, they will believe that it is real. And we've seen that.” -Brian Lord


    Mentioned in this episode:
    • ISF Analyst Insight Podcast

    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.
    23 mins
  • S25 Ep3: Brian Lord - Lost in Regulation: Bridging the cyber security gap for SMEs
    Apr 16 2024
    This episode is the first of two conversations between Steve and Brian Lord, who is currently the Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as Deputy Director of a UK Government Agency, where he oversaw the organization's Cyber and Intelligence Operations. He brings his knowledge of both the public and private sectors to bear in this wide-ranging conversation. Steve and Brian touch on the challenges small-midsize enterprises face in implementing cyber defenses, what effective cooperation between government and the private sector looks like, and the role insurance may play in cybersecurity.


    Key Takeaways:
    1. A widespread, societal approach involving both the public and private sectors is essential in order to address the increasingly complex risk landscape of cyber attacks.
    2. At the public or governmental levels, there is an increasing need to bring affordable cyber security services to small and mid-sized businesses, because failing to do so puts those businesses and major supply chains at risk.
    3. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.


    Tune in to hear more about:
    1. The National Cyber Security Centre (NCSC), part of GCHQ, serves to set regulatory standards and safeguards, communicate novel threats, and uphold national security measures in the digital space. (5:42)
    2. Steve and Brian discuss existing challenges of small organizations lacking knowledge and expertise to meet cybersecurity regulations, leading to high costs for external advice and testing. (7:40)



    Standout Quotes:

    1. “...If you buy an external expertise — because you have to do, because either you haven’t got the demand to employ your own, or if you did the cost of employment would be very hard — the cost of buying an external advisor becomes very high. And I think the only way that can be addressed without compromising the standards is of course, to make more people develop more skills and more knowledge. And that, in a challenging way, is a long, long term problem. That is the biggest problem we have in the UK at the moment. And actually, in a lot of countries. The cost of implementing cybersecurity can quite often outweigh, as it may be seen within a smaller business context, the benefit.” -Brian Lord

    2. “I think there probably needs to be a lot more tangible support, I think, for the small to medium enterprises. But that can only come out of collaboration with the cybersecurity industry and with government about, how do you make sure that some of the fees around that are capped?” -Brian Lord


    Mentioned in this episode:

    • ISF Analyst Insight Podcast

    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.
    17 mins
