Artificial Intelligence Act - EU AI Act

By: Inception Point Ai

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2025 Inception Point Ai
Economy, Politics & Government
Episodes
  • EU's AI Act Faces Make-or-Break Week: Will Business Pressure Defeat Deepfake Bans and Worker Protections?
    Mar 16 2026
    The European Union's artificial intelligence regulation is entering a critical inflection point, and what happens in the next seventy-two hours could reshape how the world's largest trading bloc governs machine learning. On Friday, March 13th, the European Council locked in a position that streamlines the AI Act through something called Omnibus VII, a legislative package designed to simplify the EU's digital framework while harmonizing AI rules across member states. But here's where it gets philosophically interesting: simplification, it turns out, is deeply political.

    The core debate centers on timing and risk tolerance. The original AI Act promised comprehensive protection by August 2026, with high-risk systems like facial recognition and hiring algorithms falling under strict rules. Now, requirements for systems listed in Annex III would apply from December 2027, while Annex I systems won't face enforcement until August 2028. The European Commission framed this as necessary breathing room for AI developers, but critics argue the postponement fundamentally undermines the law's credibility months before it takes effect.

    What's genuinely compelling is the fight over what gets banned versus what gets delayed. The Council's proposal explicitly prohibits generating non-consensual sexual content, a direct response to the Grok scandal where X's artificial intelligence tool allowed users to create deepfakes of real people, including children. The European Union launched investigations into X's practices and is now considering sweeping restrictions on any AI system that generates sexualized videos, images, or audio without consent. Over one hundred organizations including Amnesty International and Interpol have called for urgent action.

    Yet here's the tension: while the EU moves decisively on deepfakes and child safety, it's simultaneously pushing back deadlines for systems that determine whether someone gets hired, denied a loan, or flagged by law enforcement. The Information Technology Industry Council warned that shortening the grace period for generative AI transparency requirements to three months creates legal uncertainty, while forty-eight EU-based trade associations pressed for even broader rollbacks, arguing the regulations will entrench advantages for dominant players and disadvantage European competitors.

    The political agreement reached by European Parliament lawmakers on March 11th now heads to committee vote on March 18th. What emerges from Brussels over the next five days will signal whether the EU's "rights-driven" approach to artificial intelligence can genuinely balance innovation with fundamental protections, or whether business pressure will hollow out the law before it even begins.

    Thank you for tuning in to this analysis of artificial intelligence regulation at the inflection point. Please subscribe for more exploration of how technology and governance collide. This has been a Quiet Please production. For more, check out quietplease.ai

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI)
    3 m
  • Five Months to AI Compliance: How August 2026 Could Cost Your Organization 7% of Global Revenue
    Mar 14 2026
    Five months. That's what separates your AI infrastructure from legal exposure that could cost your organization seven percent of global turnover. The EU AI Act's full high-risk enforcement arrives August second, twenty twenty-six, and according to recent analysis from the International Association of Privacy Professionals, most organizations still haven't completed basic AI inventory work.

    Here's what's actually happening right now. Two enforcement waves already passed. Prohibited practices like social scoring systems, manipulative AI designed to exploit psychological vulnerabilities, and real-time biometric surveillance in public spaces have been illegal since February twenty twenty-five. That's over a year of potential compliance violations for companies that haven't formally documented these restrictions. The second wave hit last August when foundation model rules activated. Now comes the third wave, and it's the one that fundamentally reshapes how enterprises deploy AI.

    The mechanics are getting tense because the European Parliament just reached a preliminary political agreement on the Digital Omnibus—essentially a last-minute rewrite proposal from the European Commission intended to ease compliance burdens. According to IAPP reporting from March eleventh, the compromise contains extensions that would push high-risk requirements to December twenty twenty-seven instead of August twenty twenty-six. But here's the tension point: that proposal is still under negotiation. Multiple law firms and PwC are advising organizations to treat August second as the binding deadline because nothing is certain until formal harmonized standards exist, and those won't arrive until Q four twenty twenty-six at the earliest.

    The scope is wider than most technology leaders realize. Article nine mandates continuous risk management covering both intended use and reasonably foreseeable misuse. Article ten requires data governance with specific attention to bias detection in sensitive populations. Article fourteen requires autonomous agents in high-risk contexts to support immediate interruption with full logging of reasoning steps. Most agentic AI architectures deployed today don't have these constraints built in.
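    The Article 14 requirement described above—immediate interruption plus full step logging—can be sketched as a simple oversight pattern. This is an illustrative sketch only; the class and method names are invented for this example, not drawn from any real compliance library, and real deployments would need durable audit storage.

    ```python
    import threading
    import time


    class OversightAgent:
        """Minimal sketch of an agent loop supporting immediate human
        interruption and step-by-step logging, in the spirit of the
        Article 14 requirements discussed above. Names are illustrative."""

        def __init__(self):
            self.stop_event = threading.Event()  # human-triggered kill switch
            self.audit_log = []                  # record of reasoning steps

        def interrupt(self):
            """Called by a human overseer to halt the agent immediately."""
            self.stop_event.set()

        def run(self, steps):
            for i, step in enumerate(steps):
                if self.stop_event.is_set():
                    self.audit_log.append({"step": i, "action": "HALTED_BY_OPERATOR"})
                    return "interrupted"
                # Log the reasoning step *before* acting, so the trail is
                # complete even if the process dies mid-step.
                self.audit_log.append({"step": i, "action": step, "ts": time.time()})
            return "completed"


    agent = OversightAgent()
    result = agent.run(["parse CV", "score candidate", "draft decision"])
    print(result, len(agent.audit_log))  # completed 3
    ```

    The point of the sketch is architectural: the interrupt check and the logging live inside the agent's main loop, not bolted on afterward—which is why retrofitting existing agentic architectures is hard.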

    What's intellectually compelling here is that this regulation didn't emerge from thin air. It represents a deliberate choice that AI innovation should remain human-centered. The EU's framework classifies every system by risk level, assigns compliance obligations accordingly, and structures penalties that make compliance engineering cheaper than avoiding it. That's the regulatory architecture: make doing it right the economically rational choice.
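    The risk-tier architecture described above—classify every system by risk level, then attach obligations to the tier—can be expressed as a lookup. A rough sketch follows; the tier names track the Act, but the obligation lists and use-case mappings are simplified summaries invented for illustration, not legal advice.

    ```python
    # Sketch of the AI Act's tiered structure: risk tier -> obligations.
    RISK_TIERS = {
        "prohibited": ["may not be placed on the EU market"],
        "high": ["risk management (Art. 9)", "data governance (Art. 10)",
                 "human oversight (Art. 14)", "CE marking"],
        "limited": ["transparency / labeling duties (Art. 50)"],
        "minimal": ["no mandatory obligations; voluntary codes"],
    }

    # Hypothetical examples of how use cases map onto tiers.
    USE_CASE_TIER = {
        "social scoring": "prohibited",
        "hiring algorithm": "high",
        "credit scoring": "high",
        "chatbot": "limited",
        "spam filter": "minimal",
    }


    def obligations_for(use_case: str) -> list:
        """Look up the (simplified) obligations attached to a use case's tier."""
        tier = USE_CASE_TIER.get(use_case, "minimal")
        return RISK_TIERS[tier]


    print(obligations_for("hiring algorithm"))
    ```

    The design choice is that obligations attach to the tier, not the individual system—which is what makes the classification step (is this Annex III or not?) the decisive compliance question.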

    The parallel obligations already active under Article four require documented AI literacy training for everyone operating AI systems. Very few organizations have formal programs. Even fewer have documentation ready for enforcement actions.

    The real lesson for your organization isn't the August deadline. It's that regulatory compliance is now an engineering decision, not a legal afterthought. Thank you for tuning in, and please do subscribe. This has been a Quiet Please production. For more, check out quietplease dot ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI)
    4 m
  • EU AI Act Crunch Time: Compliance Deadlines Loom as Europe Tightens the Screws on Big Tech
    Mar 12 2026
    Imagine this: it's early March 2026, and I'm huddled in a Brussels café, steam rising from my espresso as I scroll through the latest on the EU AI Act. The air buzzes with urgency—deadlines loom like storm clouds over the tech horizon. Just days ago, on March 5, the European Commission dropped the second draft of its voluntary Code of Practice for labeling AI-generated content, straight out of Article 50's transparency playbook. This isn't some dusty guideline; it's a streamlined blueprint for developers and deployers, blending secured metadata with digital watermarking, even floating a standardized EU icon to flag deepfakes and synth-text before they flood our feeds.
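    The two mechanisms the draft Code of Practice pairs—machine-readable provenance metadata and tamper-evident marking—can be sketched roughly as below. The field names ("eu_ai_label", "generator") are invented for illustration; the real schema is still in draft, and production systems would use signed credentials rather than a bare hash.

    ```python
    import hashlib
    import json


    def label_ai_content(text: str, generator: str) -> dict:
        """Attach provenance metadata to AI-generated text, plus a digest
        that binds the metadata to the content (sketch only)."""
        record = {
            "content": text,
            "metadata": {
                "eu_ai_label": True,   # flags content as AI-generated
                "generator": generator,
            },
        }
        # The digest makes silent stripping of the label detectable:
        # any change to content or metadata invalidates it.
        payload = json.dumps(record, sort_keys=True).encode()
        record["digest"] = hashlib.sha256(payload).hexdigest()
        return record


    def verify_label(record: dict) -> bool:
        """Recompute the digest over everything except the digest itself."""
        unsigned = {k: v for k, v in record.items() if k != "digest"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest() == record["digest"]


    r = label_ai_content("A synthetic news summary.", "demo-model")
    print(verify_label(r))  # True
    ```

    A bare hash only detects tampering; it doesn't prove who made the claim. That is why the draft leans on secured metadata and watermarking together rather than either alone.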

    Think about it, listeners. Prohibited AI practices—think manipulative social scoring or emotion recognition in workplaces—have been banned since February 2025, with fines up to 7% of global turnover. Article 4's AI literacy training? Enforceable then too, yet Ajith P.'s analysis reveals most US enterprises, even those piping AI into Europe via Article 2's extraterritorial hooks, haven't documented a single session. We're five months from August 2, 2026, when high-risk obligations hit—Annex III's risk management, data governance, CE marking for systems in recruitment, credit scoring, biometrics—and panic is setting in. Banks in Virginia profiling customers? Automatically high-risk, no exceptions, per the appliedAI Institute's study of 106 enterprise systems.

    Yet paradoxes abound. Bruegel warns the Commission risks enforcement bias amid US trade tensions, while EY notes the Digital Omnibus might stretch high-risk timelines to December 2027 if standards from CEN/CENELEC land in Q4 2026. Finland's already enforcing via full powers since December 2025; Germany's Bundesnetzagentur gears up. Meanwhile, the European Parliament just greenlit the EU's signature on the Council of Europe's Framework Convention on AI—co-led by José Cepeda and Paulo Cunha—cementing global baselines for human rights, democracy, and auditability that dovetail with the AI Act's phased rollout.

    Euronews reports Parliament pushing a registry for copyrighted works in AI training, clashing with CCIA's cries of a creativity-killing tax. As a techie pondering this, I wonder: will watermarking tame the chaos of generative AI, or stifle innovation? The Act, Regulation 2024/1689 since August 2024, aims to balance it all, setting a benchmark experts at the World Economic Forum hail as world-first. But with GPAI models under EU AI Office scrutiny since August 2025, one thing's clear—compliance isn't optional; it's the new OS upgrade.

    Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production. For more, check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI)
    3 m
It's now possible to process almost any text and add appropriate pauses and intonation to it. Here, though, it is just plain text narrated by artificial intelligence.

An artificial voice, without pauses, etc.
