
Artificial Intelligence Act - EU AI Act

By: Inception Point Ai

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2025 Inception Point Ai
Economics, Politics & Government
Episodes
  • EU Tightens AI Act Rules: High-Risk Systems Get 16-Month Extension, Nudifier Apps Banned Outright
    Mar 19 2026
    Imagine this: it's March 19, 2026, and I'm huddled in a Brussels café, laptop glowing amid the clatter of espresso machines, dissecting the latest twists in the EU AI Act. Just yesterday, on March 18, the European Parliament's Internal Market and Civil Liberties committees—IMCO and LIBE—voted overwhelmingly, 101 to 9, to back amendments in the Digital Omnibus package. Co-rapporteur Arba Kokalari from Sweden's EPP group called it a push for predictable rules that cut overlaps with sectoral laws like medical devices or toy safety, urging Europe to boost AI investment without punishing innovators.

    The heat is on high-risk systems—think biometrics in critical infrastructure, employment screening, or border management under Annex III. Original deadline? August 2, 2026. But MEPs, eyeing unfinished harmonized standards from bodies like CEN and CENELEC, propose pushing it to December 2, 2027. Annex I systems, those safety components in regulated products, get until August 2, 2028. Watermarking for AI-generated audio, images, or text? Extended to November 2, 2026, shorter than the Commission's February 2027 ask, per the Europarl press release.
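Taken together, the proposed dates form a simple timetable. Here is a minimal sketch, assuming the amendment deadlines above are adopted as drafted; the names and structure are illustrative, not any official schema:

```python
from datetime import date

# Illustrative mapping of obligation categories to the compliance
# deadlines proposed in the March 2026 amendments (assumption: these
# dates survive the plenary vote and trilogue unchanged).
PROPOSED_DEADLINES = {
    "annex_iii_high_risk": date(2027, 12, 2),  # biometrics, employment, border management
    "annex_i_high_risk": date(2028, 8, 2),     # safety components in regulated products
    "watermarking": date(2026, 11, 2),         # labeling AI-generated audio/images/text
}

def days_remaining(category: str, today: date) -> int:
    """Days left before the proposed deadline; negative once it has passed."""
    return (PROPOSED_DEADLINES[category] - today).days
```

From the café on March 19, 2026, the Annex III clock would still show well over 600 days, while watermarking obligations would land in under eight months.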

    And here's the provocative punch: an outright ban on nudifier apps—those creepy AI tools morphing clothed images into explicit ones without consent. No safety measures? Straight to prohibited status, joining social scoring and real-time public biometrics on the unacceptable risk list. ITIF's March 13 report warns these data rules could stifle publicly available training data, tilting the field against EU firms versus U.S. giants like OpenAI.

    Compliance clock ticks loud. Penalties hit 7% of global turnover since August 2025, enforced via national market surveillance authorities and the centralized AI Office, now eyeing oversight of general-purpose models in VLOPs under the Digital Services Act. Legal Nodes' roadmap screams urgency: audit your HRIS chatbots, map risks, document everything from model training to ISO 42001 certs. Outsail notes HR leaders should prep for August anyway—12 months minimum to nail risk management, human oversight, and conformity assessments.
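That 7% figure is easy to make concrete. A back-of-envelope sketch, purely illustrative; the Act also sets fixed euro maximums that are not modeled here:

```python
# Worst-case fine exposure under the AI Act's top penalty tier:
# 7% of global annual turnover. Illustrative only; the Act also
# specifies fixed euro caps (whichever is higher), not modeled here.
PENALTY_RATE = 0.07

def max_fine_exposure(global_turnover_eur: float) -> float:
    """Upper-bound fine for the most serious violations."""
    return global_turnover_eur * PENALTY_RATE
```

For a firm with EUR 2 billion in turnover, exposure tops out around EUR 140 million, which is why compliance engineering is framed as the cheaper path.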

    Transatlantic divide sharpens, as Control Risks highlights: EU's risk-based iron fist versus lighter U.S. touches. Will this foster trustworthy AI or kneecap competitiveness? As plenary vote looms March 26, then trilogue with Council, one thing's clear—innovation demands clarity, not chaos. Providers outside EU, beware extraterritorial reach; appoint reps or face the fines.

    Listeners, thanks for tuning in—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence.
    3 min
  • EU's AI Act Faces Make-or-Break Week: Will Business Pressure Defeat Deepfake Bans and Worker Protections?
    Mar 16 2026
    The European Union's artificial intelligence regulation is entering a critical inflection point, and what happens in the next seventy-two hours could reshape how the world's largest trading bloc governs machine learning. On Friday, March 13th, the European Council locked in a position that streamlines the AI Act through something called Omnibus VII, a legislative package designed to simplify the EU's digital framework while harmonizing AI rules across member states. But here's where it gets philosophically interesting: simplification, it turns out, is deeply political.

    The core debate centers on timing and risk tolerance. The original AI Act promised comprehensive protection by August 2026, with high-risk systems like facial recognition and hiring algorithms falling under strict rules. Now, requirements for systems listed in Annex III would apply from December 2027, while Annex I systems won't face enforcement until August 2028. The European Commission framed this as necessary breathing room for AI developers, but critics argue the postponement fundamentally undermines the law's credibility months before it takes effect.

    What's genuinely compelling is the fight over what gets banned versus what gets delayed. The Council's proposal explicitly prohibits generating non-consensual sexual content, a direct response to the Grok scandal where X's artificial intelligence tool allowed users to create deepfakes of real people, including children. The European Union launched investigations into X's practices and is now considering sweeping restrictions on any AI system that generates sexualized videos, images, or audio without consent. Over one hundred organizations including Amnesty International and Interpol have called for urgent action.

    Yet here's the tension: while the EU moves decisively on deepfakes and child safety, it's simultaneously pushing back deadlines for systems that determine whether someone gets hired, denied a loan, or flagged by law enforcement. The Information Technology Industry Council warned that shortening the grace period for generative AI transparency requirements to three months creates legal uncertainty, while forty-eight EU-based trade associations pressed for even broader rollbacks, arguing the regulations will entrench advantages for dominant players and disadvantage European competitors.

    The political agreement reached by European Parliament lawmakers on March 11th now heads to committee vote on March 18th. What emerges from Brussels over the next five days will signal whether the EU's "rights-driven" approach to artificial intelligence can genuinely balance innovation with fundamental protections, or whether business pressure will hollow out the law before it even begins.

    Thank you for tuning in to this analysis of artificial intelligence regulation at the inflection point. Please subscribe for more exploration of how technology and governance collide. This has been a Quiet Please production, for more check out quietplease.ai

    3 min
  • Five Months to AI Compliance: How August 2026 Could Cost Your Organization 7% of Global Revenue
    Mar 14 2026
    Five months. That's what separates your AI infrastructure from legal exposure that could cost your organization seven percent of global turnover. The EU AI Act's full high-risk enforcement arrives August second, twenty twenty-six, and according to recent analysis from the International Association of Privacy Professionals, most organizations still haven't completed basic AI inventory work.

    Here's what's actually happening right now. Two enforcement waves already passed. Prohibited practices like social scoring systems, manipulative AI designed to exploit psychological vulnerabilities, and real-time biometric surveillance in public spaces have been illegal since February twenty twenty-five. That's over a year of potential compliance violations for companies that haven't formally documented these restrictions. The second wave hit last August when foundation model rules activated. Now comes the third wave, and it's the one that fundamentally reshapes how enterprises deploy AI.

    The mechanics are getting tense because the European Parliament just reached a preliminary political agreement on the Digital Omnibus—essentially a last-minute rewrite proposal from the European Commission intended to ease compliance burdens. According to IAPP reporting from March eleventh, the compromise contains extensions that would push high-risk requirements to December twenty twenty-seven instead of August twenty twenty-six. But here's the tension point: that proposal is still under negotiation. Multiple law firms and PwC are advising organizations to treat August second as the binding deadline because nothing is certain until formal harmonized standards exist, and those won't arrive until Q four twenty twenty-six at the earliest.

    The scope is wider than most technology leaders realize. Article nine mandates continuous risk management covering both intended use and reasonably foreseeable misuse. Article ten requires data governance with specific attention to bias detection in sensitive populations. Article fourteen requires autonomous agents in high-risk contexts to support immediate interruption with full logging of reasoning steps. Most agentic AI architectures deployed today don't have these constraints built in.

    What's intellectually compelling here is that this regulation didn't emerge from thin air. It represents a deliberate choice that AI innovation should remain human-centered. The EU's framework classifies every system by risk level, assigns compliance obligations accordingly, and structures penalties that make compliance engineering cheaper than avoiding it. That's the regulatory architecture: make doing it right the economically rational choice.
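The tiered architecture described here can be caricatured in a few lines. A toy sketch, assuming simplified category names and examples; this is not a legal mapping of the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified sketch of the AI Act's risk-based tiers."""
    UNACCEPTABLE = "prohibited outright"      # e.g. social scoring
    HIGH = "strict conformity obligations"    # e.g. hiring, biometrics
    LIMITED = "transparency duties"           # e.g. chatbots, deepfake labels
    MINIMAL = "no specific obligations"       # e.g. spam filters

# Toy classification of example use cases; the keys here are
# illustrative assumptions, not drawn from the Act's annexes.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}
```

The point of the design is visible even in the toy version: obligations scale with the tier, so the cost of compliance tracks the potential for harm.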

    The parallel obligations already active under Article four require documented AI literacy training for everyone operating AI systems. Very few organizations have formal programs. Even fewer have documentation ready for enforcement actions.

    The real lesson for your organization isn't the August deadline. It's that regulatory compliance is now an engineering decision, not a legal afterthought. Thank you for tuning in, and please do subscribe. This has been a Quiet Please production. For more, check out quietplease dot ai.

    4 min
It's now possible to polish up almost any text and insert appropriate pauses and intonation. Here it is just plain text narrated by artificial intelligence.

An artificial voice, without pauses, etc.
