Episodes

  • The 350 Million Problem: Securing the Businesses No One Else Will
    Mar 17 2026

    Show Description

    Joe Levy is the CEO of Sophos and a 30-year cybersecurity veteran who has held technical and executive roles across some of the industry's most recognizable brands. In this episode, we dig into a stat that should reframe how the entire industry thinks about its mission: out of roughly 359 million businesses worldwide, fewer than 32,000 have a CISO. That's less than one in 10,000 organizations with a security strategy leader — and it's a number Joe worked with Cybersecurity Ventures to quantify for the first time.

    We explore what that structural gap means for how vendors build products, why the cybersecurity market is a 40-year-old market failure where spending goes up every year but outcomes don't improve, and how Sophos is betting that agentic AI can deliver CISO-level intuition to the hundreds of millions of organizations that could never conceive of hiring one. Joe breaks down where AI is genuinely delivering in security operations — and where the industry is overselling — drawing from Sophos's experience running the world's largest MDR service with 36,000 customers.

    We also get into Sophos's Pacific Rim disclosure, a five-year engagement with a Chinese nation-state actor targeting their firewalls that Joe calls the highest form of threat intelligence sharing. He walks through the calculus of going public with that story, including the kernel-level monitoring they deployed on a handful of devices to stay one step ahead of the attacker. Plus, we discuss the SecureWorks acquisition, the CTO-to-CEO transition, competing with hyperscalers like Microsoft, and what the next chapter looks like for a billion-dollar PE-backed security company approaching maturity with Thoma Bravo.


    Show Notes

    • The cybersecurity poverty line quantified: out of 359 million businesses worldwide, fewer than 32,000 have a CISO — less than one in 10,000 — and this leadership gap compounds with the skills shortage and what Joe calls an "AI-enhanced market for lemons" where information asymmetry between buyers and vendors is getting worse
    • The real problem isn't missing technology — most organizations already have endpoints and firewalls — it's misconfigurations, ignored alerts, undeployed agents, and no SOC to respond, which is why secure-by-default design and hybrid product-service models like MDR create more predictable outcomes than tools alone
    • AI in the SOC is overhyped but not mere hype: Sophos runs MDR for 36,000 customers and says the vast majority of Tier 1 (triage, false positive management) and Tier 2 (investigation, response) work can now be performed by agents — but the industry lacks standard vocabulary for metrics like MTTR, letting vendors be "intentionally opaque" about what "response" actually means (a toy illustration of that ambiguity follows this list)
    • Joe introduces the concept of "humans as the accountability API" in an agentic world — AI can approximate analyst intuition, but someone still needs to be held accountable for remediation decisions, and a fully autonomous SOC may just be "a protection product with a very long data pipeline"
    • The Pacific Rim story: Sophos spent five years engaged with a Chinese nation-state actor targeting their firewalls, deployed a kernel implant on fewer than a handful of attacker-controlled devices to observe exploit development in real time, and concealed targeted fixes among 150 other patches to avoid tipping off the adversary
    • Sophos's CISO Advantage program aims to deliver the intuitions of a skilled security leader to the hundreds of millions of organizations that could never hire one — Joe calls it fixing a 40-year-old market failure and says they're shipping it this year
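
    To see why the MTTR vocabulary gap matters, here is a toy sketch (not from the episode; all timestamps are invented) showing how one incident yields three very different "response" numbers depending on which endpoint a vendor picks:

        from datetime import datetime

        # Hypothetical timestamps for a single incident. Each definition below
        # is a defensible reading of "MTTR", yet they differ by orders of magnitude.
        alert_fired   = datetime(2026, 3, 1, 2, 14)
        triage_acked  = datetime(2026, 3, 1, 2, 31)   # Tier 1 picks up the alert
        host_isolated = datetime(2026, 3, 1, 4, 5)    # containment action taken
        fully_closed  = datetime(2026, 3, 3, 16, 40)  # remediation verified

        definitions = {
            "MTTR as time-to-acknowledge": triage_acked - alert_fired,
            "MTTR as time-to-contain":     host_isolated - alert_fired,
            "MTTR as time-to-remediate":   fully_closed - alert_fired,
        }

        for name, delta in definitions.items():
            print(f"{name}: {delta}")
        # A vendor can truthfully report any of these as "response time" unless
        # the metric's endpoints are stated explicitly.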


    45 m
  • Before the Breach: The Zero Day Clock and the Race Against Exploitation
    Mar 11 2026


    Show Description

    The Zero Day Clock is ticking — and the numbers should make every security leader uncomfortable. In this episode, I sit down with Sergej Epp, CISO at a leading security firm, who built the Zero Day Clock after a weekend experiment using AI to discover vulnerabilities firsthand. What he found shocked him: with no professional vulnerability research background and just a few hours of work, he was successfully finding zero days across major security projects using AI models and basic scaffolding.

    Sergej breaks down his concept of the "Verifier's Law" — the idea that offense has the cheapest verifier in cybersecurity because feedback is binary and instant (you either popped a shell or you didn't), while defense operates in a space where validation is expensive, ambiguous, and slow. We dig into what this asymmetry means for the industry, why 20 years of warnings from Ross Anderson, Bruce Schneier, Halvar Flake, and others have gone unheeded, and whether coordinated disclosure models are broken now that AI can reverse engineer a patch into a working exploit in minutes.

    We also discuss the tension between regulation and deregulation playing out in the U.S. and EU, why the answer might be outcome-based accountability rather than prescriptive compliance, and what a realistic defensible posture actually looks like when the mean time to exploit for actively exploited vulnerabilities is under two days — while most organizations are still operating on 30-day patch cycles.


    Show Notes

    • Sergej shares how a weekend AI experiment led him to discover multiple zero days across major security projects with no professional vulnerability research experience — and why that should alarm the entire industry
    • The "Verifier's Law" explained: offense has cheap, deterministic validators (pop a shell, exfiltrate data, trigger an XSS) while defense faces expensive, ambiguous validation (parsing SIM alerts, measuring security posture), giving AI-accelerated offense a structural advantage
    • The Zero Day Clock synthesizes 3,500+ CVE-exploit pairs and shows the mean time to exploit for actively exploited vulnerabilities is now under two days — while organizations still operate on 14-to-30-day patch cycles (the arithmetic is sketched after this list)
    • 20 years of ignored warnings: from Ross Anderson's 2001 economics paper through Bruce Schneier, Halvar Flake's "the patch is the advisory" insight, and DARPA's Cyber Grand Challenge — the industry has consistently failed to act on clear signals
    • AI can now reverse engineer patches to identify underlying flaws and generate working exploits in minutes, potentially breaking coordinated disclosure models and compressing the window between patch release and active exploitation to near zero
    • The regulation paradox: the EU risks overregulating AI in ways that hamper defenders while attackers face no such constraints, while the U.S. is pushing deregulation that may remove the only forcing function for vendor accountability — Sergej and Chris discuss outcome-based regulation as a potential middle path
    • Defenders have a data advantage: by understanding their own environments, infrastructure, and processes, security teams can detect AI-driven attacks through behavioral anomalies like hallucinated API calls, non-existent user accounts, and other artifacts of AI-generated attack playbooks
    • The Zero Day Clock's real power is as a board-level communication tool — a single slide that translates the patching gap into a number executives and policymakers can't ignore, shifting the conversation from "are we compliant?" to "are we fast enough?"
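
    The arithmetic behind the clock is simple enough to sketch. Assuming records of (advisory published, first in-the-wild exploitation observed) pairs (the three below are invented, not the clock's actual dataset), the headline metric is just the average of the deltas:

        from datetime import date
        from statistics import mean, median

        # Hypothetical (patch published, first exploitation observed) pairs;
        # the real Zero Day Clock synthesizes 3,500+ of these.
        pairs = [
            (date(2025, 11, 3), date(2025, 11, 4)),
            (date(2025, 12, 10), date(2025, 12, 11)),
            (date(2026, 1, 20), date(2026, 1, 23)),
        ]

        days_to_exploit = [(exploited - published).days for published, exploited in pairs]

        print(f"mean time to exploit:   {mean(days_to_exploit):.1f} days")
        print(f"median time to exploit: {median(days_to_exploit)} days")

        # Comparing against a typical patch cycle gives the exposure gap the
        # clock is meant to dramatize for boards and policymakers.
        patch_cycle_days = 30
        print(f"days exploited-but-unpatched on a {patch_cycle_days}-day cycle: "
              f"{patch_cycle_days - mean(days_to_exploit):.1f}")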
    5 m
  • Securing the Future with Autonomous Defense
    Feb 23 2026

    Summary:

    In this conversation, Chris Hughes and Stanislav Fort discuss the transformative role of AI in cybersecurity, particularly in vulnerability management. Stanislav shares insights on how AI can discover zero-day vulnerabilities in widely used codebases, the challenges of balancing AI-driven discoveries with quality assurance, and the importance of proactive security measures. They also explore the economic sustainability of AI in cybersecurity, the burden on maintainers, and the ongoing arms race between defenders and attackers. The discussion emphasizes the potential for AI to significantly enhance software security and the aspiration towards achieving zero vulnerabilities in critical infrastructure.


    Takeaways:

    AI is revolutionizing vulnerability management in cybersecurity.
    The ability to find long-hidden vulnerabilities is unprecedented.
    AI can enhance both offensive and defensive security measures.
    Proactive security integration into development pipelines is essential.
    The quality of vulnerability reports is declining due to AI-generated noise.
    Maintainers face increasing burdens from rapid AI-driven discoveries.
    AI can help secure open source projects effectively.
    Sustainability in AI cybersecurity requires financial backing.
    The arms race between attackers and defenders is intensifying with AI.
    Achieving zero vulnerabilities is an aspirational yet achievable goal.


    Chapters

    00:00 Introduction to AI in Cybersecurity
    02:52 The Evolution of AI and Vulnerability Discovery
    05:45 AI's Impact on Software Development
    08:59 Discovering Zero-Day Vulnerabilities
    11:48 The Great Bifurcation in Security Research
    14:52 Balancing AI-Driven Discoveries and Quality
    17:59 Proactive Security Measures in Software Development
    20:53 The Role of AI in Securing Open Source Projects
    23:54 Sustainability of AI in Cybersecurity
    27:07 Addressing the Burden on Maintainers
    30:09 The Tension Between Autonomy and Security
    33:03 The Arms Race Between Defenders and Attackers
    36:12 Aiming for Zero Vulnerabilities
    38:58 Conclusion and Future Outlook

    41 m
  • Selling Cyber: Deal Flow and Market Signals with Momentum Cyber
    Feb 18 2026

    In this episode of Resilient Cyber, I catch up with Momentum Cyber's Founder & CEO, Eric McAlpine.

    We unpack 2025's M&A and capital market activity using Momentum Cyber's 2025 Cybersecurity Almanac Report, and dig into some of the overlooked and untold details under the hood of cyber M&A, building world-class teams, and more.

    42 m
  • Exploiting AI IDEs
    Feb 17 2026

    In this episode of Resilient Cyber, I sit down with Ari Marzuk, the researcher who published "IDEsaster," a novel vulnerability class in AI IDEs.

    We discuss the rise of AI-driven development and modern AI coding assistants, tools, and agents; how Ari discovered 30+ vulnerabilities impacting some of the most widely used AI coding tools; and the broader risks of AI-assisted coding.

    • Ari's background in offensive security — Ari has spent the past decade in offensive security, including time with Israeli military intelligence, NSO Group, Salesforce, and currently Microsoft, with a focus on AI security for the last two to three years.
    • IDEsaster: a new vulnerability class — Ari's research uncovered 30+ vulnerabilities and 24 CVEs across AI-powered IDEs, revealing not just individual bugs but an entirely new vulnerability class rooted in the shared base IDE layer that tools like Cursor, Copilot, and others are built on.
    • "Secure for AI" as a design principle — Ari argues that legacy IDEs were never built with autonomous AI agents in mind, and that the same gap likely exists across CI/CD pipelines, cloud environments, and collaboration tools as organizations race to bolt on AI capabilities.
    • Low barrier to exploitation — The vulnerabilities Ari found don't require nation-state sophistication to exploit; techniques like remote JSON schema exfiltration can be carried out with relatively simple prompt engineering and publicly known attack vectors.
    • Human-in-the-loop is losing its effectiveness — Even with diff preview and approval controls enabled, exfiltration attacks still triggered in Ari's testing, and approval fatigue from hundreds of agent-generated actions is pushing developers toward YOLO mode.
    • Least privilege and the capability vs. security trade-off — The same unrestricted access that makes AI coding agents so productive is what makes them vulnerable, and history suggests organizations will continue to optimize for utility over security without strong guardrails.
    • Top defensive recommendations — Ari emphasized isolation (containers, VMs) as the single most important control, followed by enforcing secure defaults that can't be easily overridden, and applying enterprise-level monitoring and governance to AI agent usage (a minimal monitoring sketch follows this list).
    • What's next — Ari is turning his attention to newer AI tools and attack surfaces but isn't naming targets yet. You can follow his work on LinkedIn, X, and his blog at makarita.com.
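
    As one concrete instance of that monitoring recommendation, here is a minimal sketch (not Ari's actual tooling) that flags workspace settings pointing JSON schema validation at unexpected remote hosts. It assumes a VS Code-style workspace where "json.schemas" entries with a remote "url" cause the editor to fetch that URL, one shape a schema-based exfiltration primitive can take:

        import json
        from pathlib import Path
        from urllib.parse import urlparse

        ALLOWED_HOSTS = {"json.schemastore.org"}  # assumption: substitute your real allowlist

        def suspicious_schema_urls(settings_path: Path) -> list[str]:
            """Return remote schema URLs that point outside the allowlist."""
            try:
                settings = json.loads(settings_path.read_text())
            except (OSError, json.JSONDecodeError):
                return []
            flagged = []
            for entry in settings.get("json.schemas", []):
                if not isinstance(entry, dict):
                    continue
                url = entry.get("url")
                if not isinstance(url, str):
                    continue
                host = urlparse(url).hostname
                if host and host not in ALLOWED_HOSTS:
                    flagged.append(url)
            return flagged

        # Scan every workspace under the current directory.
        for path in Path(".").rglob(".vscode/settings.json"):
            for url in suspicious_schema_urls(path):
                print(f"{path}: remote schema fetch to non-allowlisted host: {url}")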
    25 m
  • AI is Ready for Production - Security, Risk and Compliance Isn't
    Feb 10 2026

    In this episode of Resilient Cyber, I sit down with James Rice, VP of Product Marketing and Strategy at Protegrity.

    We discuss why traditional approaches to security aren't solving the AI security challenge, the importance of data-centric approaches to secure AI implementation, and how to address issues such as AI data leakage.

    James and I dove into a lot of great topics, including:

    • Why traditional perimeter-based and infrastructure-centric security models are failing in the era of AI, and why organizations need to fundamentally rethink their approach to securing AI workloads.
    • The concept of data-centric security — protecting the data itself rather than the systems surrounding it — and why this shift is critical as data flows across cloud platforms, AI models, and agentic workflows.
    • The growing risk of AI data leakage and how sensitive information (PII, PHI, PCI, intellectual property) can inadvertently be exposed through AI training data, model outputs, prompt injection, and RAG pipelines.
    • Why many organizations find themselves stuck in an "AI circularity" — wanting to leverage AI but unable to do so because of the complexity of securing critical business data throughout the AI lifecycle.
    • The importance of embedding security controls inline within the AI pipeline — from data ingestion and model training to orchestration and output — rather than bolting security on after the fact.
    • How data protection techniques such as tokenization, anonymization, dynamic masking, and format-preserving encryption can enable organizations to use realistic, context-rich data for AI while maintaining compliance and reducing risk (a toy masking sketch follows this list).
    • The challenge of securing agentic AI workflows, where autonomous agents continuously interact with enterprise data, making traditional access control models insufficient.
    • How organizations can balance the need for AI innovation and data utility with regulatory compliance requirements across frameworks like GDPR, HIPAA, PCI DSS, and emerging AI-specific regulations.
    • James's perspective on how security, risk, and compliance functions need to evolve to keep pace with the rapid productionization of AI across the enterprise.
    • The role of semantic guardrails in governing AI inputs and outputs, ensuring that protection is applied contextually based on how data is being used — not just where it resides.
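
    To make the masking idea concrete, here is a toy sketch. It is an illustration only (not Protegrity's API, and not true format-preserving encryption, which would use a vetted mode such as NIST FF1), but it shows the shape of the idea: a keyed, deterministic surrogate that keeps the data's format so AI and analytics pipelines still receive realistic-looking input:

        import hashlib
        import hmac

        SECRET_KEY = b"demo-key-rotate-me"  # assumption: held in a key vault in practice

        def mask_card_number(pan: str) -> str:
            """Replace all but the last four digits with digits derived from a
            keyed hash. Deterministic, so joins and aggregations on the
            tokenized column still work; the real PAN never leaves the boundary."""
            digits = [c for c in pan if c.isdigit()]
            digest = hmac.new(SECRET_KEY, "".join(digits).encode(), hashlib.sha256).hexdigest()
            surrogate = [str(int(digest[i], 16) % 10) for i in range(len(digits) - 4)]
            replacement = iter(surrogate + digits[-4:])
            # Re-insert surrogate digits into the original layout so spaces and
            # dashes (the "format") are preserved.
            return "".join(next(replacement) if c.isdigit() else c for c in pan)

        print(mask_card_number("4111 1111 1111 1234"))
        # Same length and grouping, last four digits intact, leading digits replaced.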


    • About the Guest
      James Rice is VP of Product Marketing and Strategy at Protegrity, a global leader in data-centric security. He brings over 20 years of experience in security, risk, and compliance, having provided solution engineering, value engineering, and implementation services to Fortune 1000 organizations across industries. Prior to Protegrity, James held leadership roles at Pathlock (formerly Greenlight Technologies), Accenture, and PricewaterhouseCoopers.

    • About Protegrity
      Protegrity is a data-centric security platform that protects sensitive data across hybrid, multi-cloud, and AI environments. Their approach embeds security directly into the data itself — enabling enterprises to unlock insights, accelerate innovation, and meet global compliance with confidence. Protegrity's solutions include data discovery and classification, tokenization, anonymization, dynamic masking, and semantic guardrails for AI and analytics workflows.
      Learn more at protegrity.com
    26 m
  • Hacking the OpenClaw Hype
    Feb 7 2026

    In this episode of Resilient Cyber, I sit down with Jamieson O'Reilly, Security Researcher and Founder @ Dvuln.

    Jamieson recently went viral for his hacking work demonstrating vulnerabilities in and exploitation of OpenClaw (previously ClawdBot and Moltbot), from exposed servers and backdoored skills to potential account takeovers.

    Jamieson is now helping secure the OpenClaw project.

    We walk through his findings, the implications of the rise of Personal AI Assistants (PAI), and the potential risks and security ramifications of insecure adoption and usage.

    35 m
  • Switching to Cyber - Navigating Cybersecurity Careers
    Feb 6 2026

    In this episode of Resilient Cyber, I sit down with longtime Cyber practitioners and leaders Helen Patton and Josiah Dykstra to dive into their latest book, "Switching to Cyber: The Mid-Career Guide to Launching a Cybersecurity Career".

    The book aims to help mid-career professionals pivot into the cyber career field and navigate finding their cyber niche, bridging skill gaps, and conquering tech intimidation, among other topics.

    33 m