
Artificial Intelligence Act - EU AI Act

By: Inception Point Ai

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2025 Inception Point Ai
Economics, Politics & Government
Episodes
  • Europe's AI Reckoning: Six Months to Compliance as Brussels Tightens the Screws
    Feb 23 2026
    Imagine this: it's February 23, 2026, and I'm huddled in a Brussels café, steam rising from my espresso as I scroll through the latest dispatches on the EU AI Act. The regulation, formally Regulation (EU) 2024/1689, entered into force on August 1, 2024, but now, with high-risk obligations looming just six months away on August 2, 2026, the tension is electric. Prohibited practices like real-time facial recognition in public spaces have been banned since February 2025, and general-purpose AI models faced their transparency mandates last August. Yet, as Hamza Jadoon warned in his February 19 analysis, non-compliance could slap businesses with fines up to 35 million euros or 7% of global turnover—existential stakes for any tech outfit deploying AI in hiring, lending, or healthcare.

    Across town at the European Parliament, co-rapporteurs are pushing to ratify the Council of Europe's Framework Convention on AI, Human Rights, Democracy, and the Rule of Law. This binding treaty, born from talks starting in 2019, dovetails perfectly with the AI Act's risk-based framework, mandating Fundamental Rights Impact Assessments for high-risk public deployments. It insists on iterative risk management, human oversight—even for emerging agentic AIs—and the right to know when you're chatting with a bot. The Parliament's A10-0007/2026 report hails it as Europe's chance to export trustworthy AI, countering hybrid threats and power concentration while nurturing innovation in creative sectors hammered by generative AI.

    But here's the rub: the proposed AI Omnibus, floated by the European Commission in November 2025, signals a pivot from rigid rules to pragmatic deployment. According to 150sec's coverage, it delays high-risk deadlines by up to 18 months because technical standards lag—think incomplete guidelines on robustness and cybersecurity. Real Instituto Elcano critiques this as carving enforcement gaps, potentially letting malicious AI slip through, like persuasive systems fueling disinformation. Meanwhile, the Commission's first draft Code of Practice on AI transparency, per Kirkland & Ellis, maps "high-level" rules for watermarking AI-generated content by August 2026, with a final version eyed for June.

    Even copyright's in the fray. The European Parliament's January 2026 compromise amendments demand licensing regimes for GenAI training on protected works, threatening to bar non-compliant providers from the EU market. French President Emmanuel Macron echoed this resolve at India's AI Summit last week, vowing Europe as a "safe space" for innovation while prohibiting unacceptable risks.

    Listeners, as August 2026 barrels toward us, the AI Act isn't just law—it's a litmus test. Will it harmonize rights and tech, or fracture under delays? Businesses, dust off that 180-day compliance playbook: inventory systems, classify risks, bake in human oversight. Europe leads, but the world watches—will we build AI that amplifies humanity, or amplifies peril?

    Thank you for tuning in, and please subscribe for more. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership and with the help of Artificial Intelligence (AI)
    4 min
  • EU AI Act Enforcement Looms: August 2026 Deadline Forces Global Compliance Reckoning
    Feb 21 2026
    Imagine this: it's February 21, 2026, and I'm huddled in my Berlin apartment, laptop glowing as the latest EU AI Act ripples hit my feed. Just ten days ago, on February 11, the European Commission dropped a bombshell report—leaked to MLex—outlining 2026 implementation priorities. High-stakes stuff for general-purpose AI models and high-risk systems like those powering hiring algorithms or medical diagnostics. They're fast-tracking transparency rules for GPAI while sidelining politically thorny measures, like full-blown cybersecurity mandates. Providers, wake up: August 2026 is when the hammer drops, with full enforceability kicking in.

    But here's the techie twist that's keeping me up at night—the Commission's already missed a key deadline on Article 6 guidance, that crucial clause classifying high-risk AI. Simmons & Simmons reports it was due early February, yet we're staring down a potential March or April release, tangled in the proposed Digital Omnibus package. This could delay high-risk obligations by up to 18 months, sparking fury from rights groups and uncertainty for innovators. Picture Italy, leading the charge: their Artificial Intelligence Act, Law No. 132, effective since October 2025, now mandates oversight committees in the Ministry of Labour for workplace AI. Fines up to €1,500 per employee for non-compliance? That's no sandbox—it's a compliance gauntlet for recruiters using biased CV scanners.

    Across the Channel, Ireland's gearing up with the General Scheme of the Regulation of Artificial Intelligence Bill 2026, birthing Oifig IS na hÉireann, a national AI office to wrangle enforcement. And don't get me started on the Council of Europe's Framework Convention on AI, Human Rights, Democracy, and the Rule of Law—ratified amid trilogues, it anchors the AI Act globally, demanding lifecycle safeguards from Brussels to beyond. Letslaw nails it: we're in a 2025-2026 transition, where providers must prove continuous risk management, Fundamental Rights Impact Assessments, and GDPR sync before market entry.

    This isn't just red tape; it's a paradigm shift. Agentic AI—those autonomous agents—loom large, demanding human oversight to avert hybrid threats or electoral meddling. Financial firms, per Fenergo's Mark Kettles, face explainability mandates: audit your black-box models now, or face penalties. Luxembourg's CNPD pushes Europrivacy certifications, blending AI Act with data strategy for trust anchors. Yet, Real Instituto Elcano warns of gaps—the Digital Omnibus might dilute malicious AI protections, undermining the Act's extraterritorial punch.

    Listeners, as we hurtle toward scalable AI, ponder this: will Europe's risk-based rigor foster innovation or stifle it? The EU's betting on trustworthy tech, but delays breed chaos. Proactive governance isn't optional—it's the new OS for AI survival.

    Thank you for tuning in, and please subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

    4 min
  • EU AI Act: A Tectonic Shift Shaping Europe's AI Landscape
    Feb 19 2026
    Imagine this: it's February 19, 2026, and I'm huddled in my Berlin startup office, staring at my laptop as the EU AI Act's shadow looms larger than ever. Prohibited practices kicked in last year on February 2, 2025, banning manipulative subliminal techniques and exploitative social scoring systems outright, as outlined by the European Commission. But now, with August 2, 2026, just months away, high-risk AI systems—like those in hiring at companies such as Siemens or credit scoring at Deutsche Bank—face full obligations: risk management frameworks, ironclad data governance, CE marking, and EU database registration.

    I remember the buzz last week when LegalNodes dropped their updated compliance guide, warning that obligations hit all high-risk operators even for pre-2026 deployments. Fines? Up to 35 million euros or 7% of global turnover—steeper than GDPR—enforced by national authorities or the European Commission. Italy's Law No. 132/2025, effective October 2025, amps it up with criminal penalties for deepfake dissemination, up to five years in prison. As a deployer of our emotion recognition tool for HR, we're scrambling: must log events automatically, ensure human oversight, and label AI interactions transparently per Article 50.

    Then came the bombshell from Nemko Digital last Tuesday: the European Commission missed its February 2 deadline for Article 6 guidance on classifying high-risk systems. CEN and CENELEC standards are delayed to late 2026, leaving us without harmonized benchmarks for conformity assessments. Perta Partners' timeline confirms GPAI models—like those powering ChatGPT—had to comply by August 2, 2025, with systemic risk evals for behemoths over 10^25 FLOPs. VerifyWise calls it a "cascading series," urging AI literacy training we rolled out in January.

    This isn't just red tape; it's a tectonic shift. Europe's risk-based model—prohibited, high-risk, limited, minimal—prioritizes rights over unchecked innovation. Deepfakes must be machine-readable, biometric categorization disclosed. Yet delays breed uncertainty: will the proposed Digital Omnibus push high-risk deadlines 16 months? As EDPS Wojciech Wiewiórowski blogged on February 18, implementation stumbles risk eroding trust. For innovators like me, it's a call to build resilient governance now—data lineage, audits, ISO 27001 alignment—turning constraint into edge against US laissez-faire.

    Listeners, the Act forces us to ask: Is AI a tool or tyrant? Will it stifle Europe's 11.75% text-mining adoption or forge trustworthy tech leadership? Proactive compliance isn't optional; it's survival.

    Thank you for tuning in, and please subscribe for more. This has been a Quiet Please production, for more check out quietplease.ai.

    4 min
It's now possible to dress up any text with appropriate pauses and intonation. This, however, is just plain text narrated by artificial intelligence.

An artificial voice, without pauses, etc.
