Tech Talks Daily

By: Neil C. Hughes

If every company is now a tech company and digital transformation is a journey rather than a destination, how do you keep up with the relentless pace of technological change? Every day, Tech Talks Daily brings you insights from the brightest minds in tech, business, and innovation, breaking down complex ideas into clear, actionable takeaways.

Hosted by Neil C. Hughes, Tech Talks Daily explores how emerging technologies such as AI, cybersecurity, cloud computing, fintech, quantum computing, Web3, and more are shaping industries and solving real-world challenges in modern businesses. Through candid conversations with industry leaders, CEOs, Fortune 500 executives, startup founders, and even the occasional celebrity, Tech Talks Daily uncovers the trends driving digital transformation and the strategies behind successful tech adoption.

But this isn't just about buzzwords. We go beyond the hype to demystify the biggest tech trends and determine their real-world impact. From cybersecurity and blockchain to AI sovereignty, robotics, and post-quantum cryptography, we explore the measurable difference these innovations can make. Whether improving security, enhancing customer experiences, or driving business growth, we also investigate the ROI of cutting-edge tech projects, asking the tough questions about what works, what doesn't, and how businesses can maximize their investments.

Whether you're a business leader, IT professional, or simply curious about technology's role in our lives, you'll find engaging discussions that challenge perspectives, share diverse viewpoints, and spark new ideas. New episodes are released daily, 365 days a year.

Neil C. Hughes - Tech Talks Daily 2015
Episodes
  • AI Psychosis Explained With Dr. Ragy Girgis From Columbia University
    Apr 10 2026

    How do we talk about artificial intelligence without ignoring the very human consequences it can have on our mental health?

    In this episode, I sit down with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University, to unpack a topic that has quietly moved from the fringes of academic discussion into mainstream headlines. You have probably seen the term "AI psychosis" appearing more frequently, often surrounded by speculation, fear, or misunderstanding. But what does it actually mean, and how should we be thinking about it as these technologies become part of everyday life?

    Ragy brings a clinical and deeply considered perspective to the conversation. He explains that what we are seeing is not AI creating entirely new delusions out of thin air, but something more subtle and arguably more concerning. Large language models can reflect and reinforce ideas that already exist within a person's mind. For someone already vulnerable, that reinforcement can push a belief from uncertainty into absolute conviction. That shift, even if small, can have life-altering consequences. It raises uncomfortable questions about how persuasive technology interacts with fragile mental states.

    We also explore the comparison many people make with older internet rabbit holes, and why this new generation of AI tools feels different. Conversational systems mimic human interaction so convincingly that they can blur the line between reflection and validation. Ragy introduces a powerful analogy rooted in the story of Narcissus, which reframes the issue in a way that feels both timeless and unsettling. It is not about an external voice planting ideas, but about a mirror that becomes impossible to look away from.

    But this conversation is not about fear. It is about responsibility and awareness. We discuss practical steps that could help reduce risk, from how AI systems communicate their limitations, to the role of families and clinicians, and even the responsibility of tech companies to invest in research around early warning signs. There is a sense that we are only at the beginning of understanding this phenomenon, and that the decisions made now will shape how safely these tools evolve.

    So as AI continues to move closer to us, speaking in our language and responding in real time, how do we make sure it supports human wellbeing rather than quietly amplifying our most vulnerable moments?

    Useful Links

    • Connect with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University
    • Time Magazine Article

    Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

    25 mins
  • Flexera: Why 2026 Is AI's 'Back to Basics' Moment
    Apr 9 2026

    Why are so many AI projects failing to deliver real business value, despite the hype and investment? In this episode, I sit down with Jay Litkey, SVP of Cloud & FinOps at Flexera, to explore the growing gap between AI ambition and measurable results.

    We discuss why findings from PwC reveal that only a small percentage of CEOs are seeing both revenue growth and cost savings from AI, and why the issue often comes down to a lack of clear outcomes, financial discipline, and governance rather than the technology itself. Jay shares what organizations are getting wrong, why many are stuck in experimentation mode, and what it really means to go back to basics in 2026.

    The conversation also reframes FinOps for the AI era, moving beyond cost control to a model that connects AI usage directly to business value, aligns finance with engineering, and introduces the guardrails needed to scale responsibly. If you are investing in AI or planning your next move, this episode offers a clear lens on how to turn potential into performance.

    Useful Links

    • Connect with Jay Litkey from Flexera
    • Learn More About Flexera

    Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

    19 mins
  • The Lucid Software Playbook For Aligning People, Process, And AI
    Apr 8 2026

    How do you bring people together to do better work when everything around them feels increasingly complex, distributed, and uncertain?

    In today's episode, I sit down with Jessica Guistolise from Lucid Software, and what struck me straight away was her belief that work has always been a group project, even if many organizations still behave as though it is not.

    Jessica shared how much of the friction we experience at work comes from misalignment, unclear expectations, and a lack of shared understanding. When teams are spread across time zones, systems, and now AI-powered workflows, those gaps only widen. Her perspective is simple but powerful. When people can actually see the work, rather than interpret it through documents, meetings, or assumptions, something shifts. Conversations become clearer, decisions become faster, and collaboration starts to feel human again.

    We also explored how visual collaboration platforms like those from Lucid Software are helping teams move away from scattered tools and disconnected workflows toward a more unified way of working. Jessica described it as having everything on one workbench, where teams can brainstorm, plan, and execute without constantly switching context.

    What really stayed with me was her focus on inclusivity in collaboration. Not everyone contributes in the same way, and visual environments can create space for different thinking styles, whether someone is outspoken, reflective, or somewhere in between. That idea of creating a shared language across teams, roles, and even personalities feels increasingly relevant in a world where communication often breaks down.

    Of course, no conversation right now would be complete without talking about AI. Jessica offered a refreshingly honest view. There is uncertainty, and there should be. But rather than avoiding it, she believes leaders need to make AI visible, map how it is used, define where human judgment matters, and encourage teams to experiment openly.

    One of the most interesting ideas she shared was reframing mistakes as early learnings. When teams feel safe to test, fail, and share what they discover, progress accelerates. When fear or blame enters the picture, everything slows down.

    We also touched on AI literacy and what it really means in practice. For Jessica, it comes down to clarity. Clear workflows, clear guardrails, and clear expectations about accountability. AI might assist, but humans remain responsible for outcomes. That mindset, combined with leadership that actively participates in experimentation, creates an environment where people feel confident stepping forward rather than holding back.

    This conversation left me thinking about how many organizations are still trying to layer AI onto unclear processes and expecting better results. Jessica's message is that clarity comes first, then technology can amplify it.

    So if work really is a group project, are we giving our teams the visibility and confidence they need to succeed, or are we still asking them to figure it out in the dark?

    31 mins

Featured Article: The Best Tech Podcasts for Industry Pros and Enthusiasts Alike
With global developments in the tech world breaking nearly every day, it can feel impossible to keep up with the latest news. These podcasts—just a few of the best tech podcasts streaming now—are vital tools in a rapidly shifting technological environment. Covering everything from breaking news and new developments to the future of the industry, these listens will keep you ahead of the curve.
