Episodes

  • Why You Fall For Scams
    Jan 7 2026
    Why do smart, capable people fall for scams even when the warning signs seem obvious in hindsight? In this episode, Dan Ariely joins us to examine how intuition often leads us in the wrong direction, especially under stress, uncertainty, or emotional pressure. A renowned behavioral economist, longtime professor of psychology and behavioral economics at Duke University, and bestselling author of Predictably Irrational, The Upside of Irrationality, and Misbelief, Dan has spent decades studying why rational people consistently make choices that don't serve them. We talk about the deeply human forces that shape how we decide who to trust, and how easily those instincts can be exploited in high-stakes situations involving fraud, financial loss, and digital deception.

    Dan shares a deeply personal story about surviving severe burns and the long process of self-acceptance that followed, using his own experience to show how hiding, blending in, and social pressure quietly influence behavior in ways most of us never stop to question. We also explore why stress pushes people to search for patterns, stories, and a sense of control, even when those explanations aren't accurate.

    Dan explains how our minds operate like a "vintage Swiss Army knife": well suited for small, predictable communities but poorly equipped for modern risks like scams, cybersecurity threats, and low-probability, high-impact events. Topics include why near-misses teach the wrong lessons, why authority and urgency are so effective in manipulation, and why expecting people to be perfectly rational is a losing strategy. We also discuss practical ways to slow decisions down and bring in outside perspectives to help design safeguards that work with human nature.
    Show Notes:
    • [01:52] Dan Ariely joins the episode to examine how human decision-making actually works under pressure.
    • [03:41] How intuition can point us in the wrong direction during moments of stress and uncertainty.
    • [05:26] Trust, authority, and urgency as core levers used in fraud and manipulation.
    • [07:12] When decisions feel overwhelming, the brain's tendency to rely on shortcuts.
    • [08:58] Dan explains why rational thinking often breaks down faster than we expect.
    • [10:34] Near-misses and how they quietly reinforce false confidence instead of caution.
    • [12:09] Why repeated exposure to risk doesn't necessarily make people better decision-makers.
    • [13:55] Stress-driven pattern seeking and the human need for explanation and control.
    • [15:32] Superstition, conspiracy thinking, and what they reveal about uncertainty tolerance.
    • [17:18] Why modern threats like scams and cybercrime confuse brains built for simpler environments.
    • [18:56] The "vintage Swiss Army knife" analogy and what it says about human cognition.
    • [20:41] Authority cues and why skepticism often disappears in the presence of perceived expertise.
    • [22:27] Slowing decisions down as one of the most reliable defenses against manipulation.
    • [24:13] Dan reflects on how behavioral economics challenged traditional models of rational choice.
    • [25:59] A personal story about surviving severe burns and the long path to self-acceptance.
    • [27:44] How hiding and blending in can quietly shape behavior and self-perception.
    • [29:31] Social pressure and its role in everyday compliance and risk-taking.
    • [31:16] Why vulnerability doesn't look the way people expect it to.
    • [33:02] Expecting perfect rationality and why that assumption consistently fails.
    • [34:47] Designing systems that account for human limits instead of ignoring them.
    • [36:33] The value of outside perspective when decisions carry real consequences.
    • [38:19] Practical ways individuals can reduce risk by changing how they decide.
    • [40:05] When slowing down matters more than having more information.
    • [41:52] Applying behavioral insights to fraud prevention and digital safety.
    • [43:38] Why better tools help, but mindset still plays a critical role.
    • [45:24] Final thoughts on working with human nature rather than fighting it.
    • [48:02] What listeners can take away about decision-making, risk, and self-awareness.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • Dan Ariely
    • Dan Ariely - LinkedIn
    • Books by Dan Ariely
    • Dan Ariely - YouTube
    52 m
  • Mobile Device Threats
    Dec 31 2025
    In a world where we're told to carry our entire lives in our pockets, we've reached a strange tipping point: the very devices meant to connect us have become windows into our private lives for those who wish us harm. It's no longer a matter of avoiding the "shady" corners of the internet; today, the threats come from nation-state actors, advanced AI, and even the people we think we're hiring. We are living in an era where the most sophisticated hackers aren't just trying to break into your phone; they're trying to move into your business by pretending to be your best employee.

    Joining the conversation today is Jared Shepard, an innovative industry leader and the CEO of Hypori. A U.S. Army veteran with over 20 years of experience, Jared's journey is far from typical: he went from being a high school dropout to serving as a sniper and eventually becoming the lead technical planner for the Army's Third Corps. He is also the founder of Intelligent Waves and the chair of the nonprofit Warriors Ethos, bringing a perspective shaped by years of advising technologists in active war zones.

    We're going to dive deep into why Jared believes everything you own should be considered already compromised, and why that realization is the first step toward true security. From the terrifying reality of his own 401k being stolen via identity theft to the future of "dumb terminals" that protect your privacy by storing nothing at all, this discussion challenges the status quo. We'll explore how to navigate a future where AI can fake your identity in real time and why the ultimate battle in cybersecurity isn't against a specific country, but against our own human tendency toward laziness.
    Show Notes:
    • [02:12] Jared Shepard of Hypori is here to discuss how modern cyber threats actually play out in real life.
    • [04:48] How modern attacks unfold slowly instead of triggering obvious alarms.
    • [05:55] Why many victims don't realize anything is wrong until secondary systems start failing.
    • [07:56] What identity theft looks like when accounts are targeted methodically over time.
    • [08:48] How attackers prioritize persistence and access over immediate financial gain.
    • [10:32] A real attempt to take over long-term financial accounts and how it surfaced.
    • [13:07] Why financial institutions often respond late even when fraud is already underway.
    • [15:44] The limits of traditional identity verification in an AI-driven threat environment.
    • [16:52] Why layered authentication still fails when underlying identity data is compromised.
    • [18:21] Deepfakes, voice cloning, and why video calls no longer prove much.
    • [20:57] How laptop farms are used to bypass hiring controls and internal access checks.
    • [22:18] Why insider-style access is increasingly coming from outside the organization.
    • [23:33] Why some companies are quietly bringing back in-person steps for sensitive roles.
    • [26:09] SIM farms, mobile identity abuse, and how scale changes detection.
    • [28:47] The growing tension between personal privacy and corporate device control.
    • [31:22] Why assuming device compromise changes everything downstream.
    • [33:58] Isolating data from endpoints instead of trying to secure the device itself.
    • [35:12] How moving compute and data off the endpoint reduces exposure without requiring device monitoring.
    • [36:35] How pixel-only access limits data exposure even on compromised hardware.
    • [39:11] Why AI training data introduces new security and poisoning risks.
    • [41:46] Why recovery planning is often overlooked until it's too late.
    • [44:18] The problem with victim-blaming and how it distorts security responses.
    • [46:52] Why layered defenses matter more than any single tool or platform.
    • [47:58] What practical preparation looks like for individuals, not just enterprises.
    • [49:12] Rethinking privacy as controlled access rather than total lock-down.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • Jared Shepard - Hypori
    • Jared Shepard - LinkedIn
    • Warriors Ethos - Jared Shepard
    50 m
  • Past, Present, and Future of AI agents
    Dec 24 2025
    The intersection of AI and cybersecurity is changing faster than anyone expected, and that pace is creating both incredible innovation and brand-new risks we're only beginning to understand. From deepfake ads that fool even seasoned security professionals to autonomous agents capable of acting on our behalf, the threat landscape looks very different than it did even a year ago.

    To explore what this evolution means for everyday people and for enterprises trying to keep up, I'm joined by Chris Kirschke, Field CISO at Tuskira and a security leader with more than two decades of experience navigating complex cyber environments. Chris talks about his unconventional path into the industry, how much harder it is for new professionals to enter cybersecurity today, and the surprising story of how he recently fell for a fake Facebook ad that showcased just how convincing AI-powered scams have become.

    He breaks down the four major waves of InfoSec, from the rise of the web, through mobile and cloud, to the sudden, uncontrollable arrival of generative AI, and explains why this fourth wave caught companies completely off guard. GenAI wasn't something organizations adopted thoughtfully; it appeared overnight, with thousands of employees using it long before security teams understood its impact. That forced long-ignored issues like data classification, permissions cleanup, and internal hygiene to the forefront.

    We also dive into the world of agentic AI, AI that doesn't just analyze but actually acts, and the incredible opportunities and dangers that come with it. Chris shares how low-code orchestration, continuous penetration testing, context engineering, and security "mesh" architectures are reshaping modern InfoSec. He also spends a lot of time on the human side of all this: why guardrails matter, how easy it is to over-automate, and the simple truth that AI still struggles with the soft skills security teams rely on every day.
    He also shares what companies should think about before diving into AI, starting with understanding their data, looping in legal and privacy teams early, and giving themselves room to experiment without turning everything over to an agent on day one.

    Show Notes:
    • [00:00] Chris Kirschke, Field CISO at Tuskira, is here to explore how AI is reshaping cybersecurity and why modern threats look so different today.
    • [03:05] Chris shares his unexpected path from bartending into IT in the late '90s, reflecting on how difficult it has become for newcomers to enter cybersecurity today.
    • [06:18] A convincing Facebook scam slips past his defenses, illustrating how AI-enhanced fraud makes traditional red flags far harder to spot.
    • [09:32] GenAI's sudden arrival in the workplace creates chaos as employees adopt tools faster than security teams can assess risk.
    • [12:08] The conversation shifts to AI-driven penetration testing and how continuous, automated testing is replacing traditional annual reports.
    • [15:23] Agentic AI enters the picture as Chris explains how low-code orchestration and autonomous agents are transforming security workflows.
    • [18:24] He discusses when consumers can safely rely on AI agents and why human-in-the-loop oversight remains essential for anything involving transactions or access.
    • [21:48] AI's dependence on context becomes clear as organizations move toward context lakes to support more intelligent, adaptive security models.
    • [25:46] He highlights early experiments where AI agents automatically fix vulnerabilities in code, along with the dangers of developers becoming over-reliant on automation.
    • [29:50] AI emerges as a support tool rather than a replacement, with Chris emphasizing that communication, trust, and human judgment remain central to the security profession.
    • [33:35] A mock deposition experience reveals how AI might help individuals prepare for high-stress legal or compliance scenarios.
    • [37:13] Chris outlines practical guardrails for adopting AI, starting with data understanding, legal partnerships, and clear architectural patterns.
    • [40:21] Chatbot failures remind everyone that AI can invent policies or explanations when it lacks guidance, underscoring the need for strong oversight.
    • [41:32] Closing thoughts include where to find more of Chris's work and continue learning about Tuskira's approach to AI security.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • Tuskira
    • Chris Kirschke - LinkedIn
    43 m
  • You Are Traceable with OSINT
    Dec 17 2025
    Publicly available data can paint a much clearer picture of our lives than most of us realize, and this episode takes a deeper look at how tiny digital breadcrumbs, like photos, records, searches, and even the background of a Zoom call, can be pieced together to reveal far more than we ever intended. To help break this down, I'm joined by Cynthia Hetherington, Founder and CEO of The Hetherington Group, a longtime leader in open-source intelligence. She also founded Osmosis, the global association and conference for OSINT professionals, and she oversees OSINT Academy, where her team trains investigators, analysts, and practitioners of all experience levels.

    Cynthia shares how she started her career as a librarian who loved solving information puzzles and eventually became one of the earliest people applying internet research to real investigative work. She talks about the first wave of cybercrime in the 1990s, how she supported law enforcement before the web was even mainstream, and why publicly accessible data today is more powerful and more revealing than ever. We get into how OSINT actually works in practice, from identifying a location based on a sweatshirt logo to examining background objects in video calls. She also explains why the U.S. has fewer privacy protections than many assume, and how property records, social media posts, and online datasets combine to expose surprising amounts of personal information.

    We also explore the growing role of AI in intelligence work. Cynthia breaks down how tools like ChatGPT can accelerate analysis but also produce hallucinations that investigators must rigorously verify, especially when the stakes are legal or security-related. She walks through common vulnerabilities people overlook, the low-hanging fruit you can remove online, and why your online exposure often comes from the people living in your home.
    Cynthia closes by offering practical advice to protect your digital footprint and resources for anyone curious about learning OSINT themselves. This is a fascinating look at how much of your life is already visible, and what you can do to safeguard the parts you'd rather keep private.

    Show Notes:
    • [01:17] Cynthia Hetherington, Founder & CEO of The Hetherington Group, is here to discuss OSINT, or Open-Source Intelligence.
    • [02:40] Early cyber investigators began turning to her for help long before online research tools became mainstream.
    • [03:39] Founding The Hetherington Group marks her transition from librarian to private investigator.
    • [04:22] Digital vulnerability takes center stage as online data becomes widely accessible and increasingly revealing.
    • [05:22] We get a clear breakdown of what OSINT actually is and what counts as "publicly available information."
    • [06:40] A simple trash bin in a photo becomes a lesson in how quickly locations can be narrowed down.
    • [08:03] Cynthia shares the sweatshirt example to show how a tiny image detail can identify a school and possibly a city.
    • [09:32] Background clues seen during COVID video calls demonstrate how unintentional information leaks became routine.
    • [11:12] A news segment with visible passwords highlights how everyday desk clutter can expose sensitive data.
    • [12:14] She describes old threat-assessment techniques that relied on family photos and subtle personal cues.
    • [13:32] Cynthia analyzes the balance and lighting of a Zoom backdrop, pointing out what investigators look for.
    • [15:12] Virtual and real backgrounds each reveal different signals about a person's environment.
    • [16:02] Reflections on screens become unexpected sources of intelligence as she notices objects outside the camera frame.
    • [16:37] Concerns grow around how easily someone can be profiled using only public information.
    • [17:13] Google emerges as the fastest tool for building a quick, surface-level profile of almost anyone.
    • [18:32] Social media takes priority in search results and becomes a major driver of self-exposed data.
    • [19:40] Cynthia compares AI tools to the early internet, describing how transformative they feel for investigators.
    • [20:58] A poisoning case from the early '90s demonstrates how online expert communities solved problems before search engines existed.
    • [22:40] She recalls using early listservs to reach forensic experts long before modern digital research tools were available.
    • [23:44] Smarter prompts become essential as AI changes how OSINT professionals gather reliable information.
    • [24:55] Cynthia introduces her C.R.A.W.L. method and explains how it mirrors the traditional intelligence lifecycle.
    • [26:12] Hallucinations from AI responses reinforce the need for human review and verification.
    • [27:48] We learn why repeatable processes are crucial for building trustworthy intelligence outputs.
    • [29:05] Elegant-sounding AI answers illustrate the danger of unverified assumptions.
    • [30:40] An outdated email-header technique becomes a reminder of how quickly OSINT methods evolve.
    • [32:12] Managed attribution—hiding your digital identity—is explained along with when...
    56 m
  • Anyone Could Walk In
    Dec 10 2025

    Sometimes we forget how much trust we place in the little things around us like a lock on a door or a badge on someone's shirt. We see those symbols and assume everything behind them is safe, but it doesn't always work that way. A person with enough confidence, or the right story, can slip through places we think are locked down tight, and most of us never notice it's happening.

    My guest today is Deviant Ollam, and he's one of the rare people who gets invited to break into buildings on purpose. He talks about how he fell into this unusual line of work, the odd moments that shaped his career, and why understanding human behavior matters just as much as understanding locks or alarms. Listening to him describe these situations, where he's walking through offices, popping doors, or blending in with repair crews, makes you realize how blind we can be to our own surroundings.

    We also get into the practical side of things: the mistakes companies make, the small fixes that go a long way, and why teaching employees to slow down and ask a few extra questions can make all the difference. It's an eye-opening conversation, especially if you've ever assumed your workplace is more secure than it really is.

    Show Notes:
    • [03:24] Deviant shares how early adventures, abandoned buildings, and curiosity about locks pulled him toward physical security.
    • [06:20] A story about a law firm reveals how an office "secure" door was bypassed instantly, exposing major hardware flaws.
    • [09:16] Discussion shifts to how the locksmith and safe technician community reacted to his public teaching and how that's changed over time.
    • [13:28] The topic turns to security theater and the gap between feeling safe and actually being protected.
    • [16:18] An explanation of symbolic locks versus real security products highlights how easily people mix up the two.
    • [19:11] Conversation moves into the lack of clear U.S. lock standards and why European systems make things easier for consumers.
    • [21:51] Layered security comes into focus, emphasizing that the goal is to delay and deter rather than stop every possible attack.
    • [24:35] Monitoring tools, overlooked windows, and forgotten blind spots show how attackers often choose the easiest entry point.
    • [27:38] We look at the politics of penetration tests and why coordinating with building management is essential.
    • [31:28] Escalation testing illustrates how long suspicious behavior can go unnoticed inside an organization.
    • [34:34] The need for simple, obvious reporting channels becomes clear when employees aren't sure who to alert.
    • [37:00] A breakdown of common cover stories shows why attackers lean on confidence and industry jargon.
    • [39:50] Urgency and pressure tactics surface as key components of social engineering and why "polite paranoia" helps.
    • [41:14] A viral prank underscores how easily an unverified person can be escorted into restricted areas.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • Deviant Ollam
    • Deviant Ollam - YouTube
    • Deviant Ollam - Instagram
    • Practical Lock Picking: A Physical Penetration Tester's Training Guide
    43 m
  • The Scam You Never See Coming
    Dec 3 2025
    Fraud today doesn't feel anything like it used to. It's not just about somebody skimming a credit card at a gas pump or stealing a check out of the mail. It has gotten personal, messy, emotional. Scammers are building relationships, earning trust, and studying the little details of our lives so they can strike when we're tired, distracted, or dealing with something big. And honestly, most people have no idea how far it's gone.

    My guest, Ian Mitchell, has spent more than 25 years fighting fraud around the world and leading teams in the financial sector. He's the founder of The Knoble, a nonprofit bringing banks and industry leaders together to protect vulnerable people from scams, human trafficking, and exploitation. Ian has seen the evolution of fraud firsthand, from the old-school days of stolen cards to the organized global crime networks using technology, AI, and human manipulation to scale at a pace we've never experienced before.

    What stood out to me is Ian's belief that the strongest defense doesn't start with fancy tools or tighter security. It starts at home. Real conversations with our kids about safety online. Checking in on aging parents. Talking openly with people we trust so scammers can't isolate us and break us down. It's serious work, but Ian is hopeful. He believes there are far more good people than bad, and when we look out for each other, we're a lot harder to exploit.
    Show Notes:
    • [00:58] Ian unexpectedly shifted from music and modeling into the world of fraud prevention.
    • [01:19] Founding The Knoble and building a global network to fight human crimes and protect vulnerable populations.
    • [01:49] A look at Follow the Money, the documentary project raising awareness about exploitation and financial crime.
    • [02:19] Why Ian believes crimes of exploitation have moved directly into our homes and daily lives.
    • [03:08] The early moment when Ian uncovered a major fraud ring while working at an internet company.
    • [06:44] How canceling $300,000 in fraudulent orders changed the direction of his career.
    • [08:11] Reflections on the "wild west" early days of online fraud and security.
    • [11:01] How fraud evolved from stolen cards into emotional manipulation and trust-based scams.
    • [12:49] The post-COVID surge in scams and the shift toward targeting individuals instead of systems.
    • [14:03] Why fighting fraud today requires global coordination and an army of trained professionals.
    • [16:38] Scammers coaching victims to distrust banks, friends, and even family members.
    • [17:05] The longest romance-style scam Ian has seen: an eight-year manipulation before money was ever requested.
    • [18:25] Discussion on timing, trust, and why even smart people can be caught off guard.
    • [22:05] Ian shares his own experience dealing with identity theft and the complexity of proving it wasn't him.
    • [23:22] AI and big data transforming broad scam attempts into precise, personalized attacks.
    • [25:31] The alarming rise of sextortion schemes targeting kids ages 13–16 and why awareness is critical.
    • [26:40] The urgent need for uncomfortable safety conversations within families.
    • [28:09] Why Ian believes the first line of defense isn't technology; it's communication at home.
    • [29:30] The emotional impact on scam victims: shame, isolation, and loss of confidence in judgment.
    • [31:13] How AI can be used for good and why the industry must move quickly to fight back.
    • [40:40] Three essential conversations families should start having right now.
    • [41:21] Protecting children through parental controls, boundaries, and digital safety.
    • [42:42] Encouraging open dialogue with aging parents about financial protection and autonomy.
    • [44:19] Finding balance: staying vigilant without living in fear.
    • [47:57] A hopeful reminder that there are far more good people than bad, and that collective action matters.
    • [48:30] Where to find Ian, learn more about The Knoble, and connect with his work.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • The Knoble
    • Ian Mitchell - LinkedIn
    49 m
  • Hacking AI
    Nov 26 2025

    AI has brought incredible new capabilities into everyday technology, but it's also creating security challenges that most people haven't fully wrapped their heads around yet. As these systems become more capable and more deeply connected to the tools and data we rely on, the risks become harder to predict and much more complicated to manage.

    My guest today is Rich Smith, who leads offensive research at Mindgard and has spent more than twenty years working on the front lines of cybersecurity. Rich has held leadership roles at organizations like Crash Override, Gemini, Duo Security, Cisco, and Etsy, and he's spent most of his career trying to understand how real attackers think and where systems break under pressure.

    We talk about how AI is changing the way attacks happen, why the old methods of testing security don't translate well anymore, and what happens when models behave in ways no one expected. Rich also explains why psychology now plays a surprising role in hacking AI systems, where companies are accidentally creating new openings for exploitation, and what everyday users should keep in mind when trusting AI with personal information. It's a fascinating look behind the curtain at what's really going on in AI security right now.

    Show Notes:
    • [01:00] Rich describes getting into hacking as a kid and bypassing his brother's disk password.
    • [03:38] He talks about discovering Linux and teaching himself through early online systems.
    • [05:07] Rich explains how offensive security became his career and passion.
    • [08:00] Discussion of curiosity, challenge, and the appeal of breaking systems others built.
    • [09:45] Rich shares surprising real-world vulnerabilities found in large organizations.
    • [11:20] Story about discovering a major security flaw in a banking platform.
    • [12:50] Example of a bot attack against an online game that used his own open-source tool.
    • [16:26] Common security gaps caused by debugging code and staging environments.
    • [17:43] Rich explains how AI has fundamentally changed offensive cybersecurity.
    • [19:30] Why binary vulnerability testing no longer applies to generative AI.
    • [21:00] The role of statistics and repeated prompts in evaluating AI risk and failure.
    • [23:45] Base64 encoding used to bypass filters and trick models.
    • [27:07] Differentiating between model safety and full system security.
    • [30:41] Risks created when AI models are connected to external tools and infrastructure.
    • [32:55] The difficulty of securing Python execution environments used by AI systems.
    • [35:56] How social engineering and psychology are becoming new attack surfaces.
    • [38:00] Building psychological profiles of models to manipulate behavior.
    • [42:14] Ethical considerations and moral questions around AI exploitation.
    • [44:05] Rich discusses consumer fears and hype around AI's future.
    • [45:54] Advice on privacy and cautious adoption of emerging technology.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • Mindgard
    • Rich.Smith@Mindgard.ai
    48 m
  • The Ransomware War
    Nov 19 2025
    Ransomware isn't a lone hacker in a hoodie. It's an entire criminal industry complete with developers, brokers, and money launderers working together like a dark tech startup. And while these groups constantly evolve, so do the tools and partnerships aimed at stopping them before they strike.

    My guest today is Cynthia Kaiser, former Deputy Assistant Director of the FBI's Cyber Division and now the Head of the Ransomware Research Center at Halcyon. After two decades investigating global cyber threats and briefing top government leaders, she's now focused on prevention and building collaborations across government and industry to disrupt ransomware actors at their source.

    We talk about how ransomware groups operate, why paying a ransom rarely solves the problem, and what layered defense really means for organizations and individuals. Cynthia also shares how AI is reshaping both sides of the cyber arms race and why she believes hope, not fear, is the most powerful tool for defenders.

    Show Notes:
    • [01:04] Cynthia Kaiser had a 20-year FBI career and has now transitioned from investigation to prevention at Halcyon.
    • [03:58] The true scale of cyber threats is far larger than most people realize, even within the government.
    • [04:19] Nation-state and criminal activity now overlap, making attribution increasingly difficult.
    • [06:45] Cynthia outlines how ransomware spreads through phishing, credential theft, and unpatched systems.
    • [08:08] Ransomware is an ecosystem of specialists including developers, access brokers, money launderers, and infrastructure providers.
    • [09:55] Discussion of how many ransomware groups exist and the estimated cost of attacks worldwide.
    • [11:37] Ransom payments dropped in 2023, but total business recovery costs remain enormous.
    • [12:24] Paying a ransom can mark a company as an easy target and doesn't guarantee full decryption.
    • [13:11] Example of a decryptor that failed completely and how Halcyon helped a victim recover.
    • [14:35] The so-called "criminal code of ethics" among ransomware gangs has largely disappeared.
    • [16:48] Hospitals continue to be targeted despite claims of moral restraint among attackers.
    • [18:44] Prevention basics still matter, including strong passwords, multi-factor authentication, and timely patching.
    • [19:18] Cynthia explains the value of layered defense and incident-response practice drills.
    • [21:22] Even individuals need cyber hygiene like unique passwords, MFA, and updated antivirus protection.
    • [23:32] Deepfakes are becoming a major threat vector, blurring trust in voice and video communications.
    • [25:17] Always verify using a separate communication channel when asked to send money or change payment info.
    • [27:40] Real-world example: a credential-stuffing attack against MLB highlights the need for two-factor authentication.
    • [29:55] What to do once ransomware hits, including containment, external counsel, and calling trusted law-enforcement contacts.
    • [32:44] Cynthia recounts being impersonated online and how she responded to protect others from fraud.
    • [34:28] Many victims, especially older adults, feel ashamed to report cybercrime.
    • [36:45] Scams often succeed because they align with real-life timing or emotional triggers.
    • [38:32] Children and everyday users are also at risk from deceptive links and push-fatigue attacks.
    • [39:26] Overview of Halcyon's Ransomware Research Center and its educational, collaborative goals.
    • [42:15] The importance of public-private partnerships in defending hospitals and critical infrastructure.
    • [43:38] How AI-driven behavioral detection gives defenders a new advantage.
    • [44:48] Cynthia shares optimism that technology can reduce ransomware's impact.
    • [45:43] Closing advice includes practicing backups, building layered defenses, and staying hopeful.

    Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.

    Links and Resources:
    • Podcast Web Page
    • Facebook Page
    • whatismyipaddress.com
    • Easy Prey on Instagram
    • Easy Prey on Twitter
    • Easy Prey on LinkedIn
    • Easy Prey on YouTube
    • Easy Prey on Pinterest
    • Halcyon
    • Cynthia Kaiser - LinkedIn
    47 m