Artificial Intelligence Act - EU AI Act Podcast, by Inception Point Ai

By: Inception Point Ai

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2025 Inception Point Ai
Economics, Politics & Government
Episodes
  • EU Delays High-Risk AI Rules Until 2027, Bans Non-Consensual Deepfake Nudifiers
    Mar 30 2026
    Imagine this: it's March 30, 2026, and I'm huddled in my Berlin apartment, screens glowing with the latest from Brussels, where the European Parliament just dropped a bombshell on the EU AI Act. Last Thursday, MEPs voted 569 to 45 to adopt their position on the Digital Omnibus proposal, delaying high-risk AI rules and slapping a ban on those creepy nudifier apps. Picture it—systems that strip clothes off real people's images without consent? Gone, unless they've got ironclad safeguards, as Parliament and the Council of the EU both pushed in their March positions.

    I scroll through Europarl's press release, heart racing. High-risk systems—like biometrics in border management at places like Frankfurt Airport, or AI hiring tools used in employment—now get pushed to December 2, 2027. That's for Annex III stuff: critical infrastructure, education, law enforcement. Annex I systems, embedded in regulated products like medical devices under EU safety laws, slide to August 2, 2028. Why? Guidance and standards won't be ready by the original August 2, 2026 deadline. The European Commission proposed this in November 2025, citing industry pleas, and now Parliament's on board, setting fixed dates for legal certainty.

    But here's the techie twist that keeps me up at night: watermarking for AI-generated audio, images, videos, or text? Providers have until November 2, 2026—a grace period shortened from six months, per Parliament's amendments. Meanwhile, General-Purpose AI (GPAI) models, like those covered by the European AI Office's Code of Practice released July 10, 2025, face full enforcement audits come August 2, 2026. Legacy models get until 2027. EY's quick guide nails it: no more grace periods; fines loom if you're not documenting, mitigating biases, or ensuring human oversight.

    Trilogues kick off soon between Parliament, Council—who aligned on reinstating provider registration in the EU database—and Commission. IMCO and LIBE committees paved the way March 18, with plenary vibes still echoing from the expected March 26 vote. SMEs and now small mid-caps get extended support, easing literacy mandates amid workplace AI risks that IndustriALL Europe flags as needing dedicated laws.

    This isn't just bureaucracy; it's a reckoning. Delays buy time for ethical AI in justice systems or employment, but CIOs like those Jason Hookey advises at Info-Tech Research Group warn of limbo—rush compliance sans guidance, or risk liabilities? Brian Levine of FormerGov cuts deep: enterprises own the risk now, regulations or not. As enforcement hybridizes—national authorities plus the AI Office, Board, and Scientific Panel—will uneven rollout fracture Europe's edge? Or spark innovation, watermarking deepfakes before they erode trust?

    Listeners, the EU AI Act's evolution forces us to ponder: can we balance innovation with safeguards, or will haste breed shadows? Thanks for tuning in—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI)
    4 min
  • EU Delays AI Crackdown While Banning Deepfake Nudes
    Mar 28 2026
    Imagine this: it's late March 2026, and I'm huddled in a Brussels café, laptop glowing amid the clatter of espresso machines, as the European Parliament drops a bombshell on the EU AI Act. Just yesterday, on March 26th, MEPs in plenary session voted overwhelmingly—569 in favor, only 45 against—to amend the Digital Omnibus package, delaying key high-risk AI rules and slapping a ban on those creepy "nudifier" apps that strip clothes from photos without consent. According to the European Parliament's press release, this omnibus tweak pushes compliance for listed high-risk systems—like biometrics in law enforcement or AI in employment screening—to December 2, 2027, while systems under sectoral safety laws, think medical devices, get until August 2, 2028.

    Why the shift? Picture the chaos: the original August 2, 2026 deadline loomed like a digital guillotine, but standards and guidance from the EU AI Office weren't ready. As CIO.com reports, this leaves chief information officers in a planning pickle—rush without blueprints or bet on the delay? The Council's negotiating mandate from March 13 aligned closely, setting up trilogues with the Commission. Yet, transparency hits sooner: providers must watermark AI-generated audio, images, videos, or text by November 2, 2026, per the Parliament's stance. And Article 12 record-keeping? Still locked for August 2, 2026—no limbo there.

    Zoom out to the big picture. The EU AI Act, forged in 2024 and live since August 1 that year, is the world's first AI rulebook, risk-tiered from prohibited manipulative biometrics (already banned February 2025) to general-purpose models like those powering ChatGPT, governed since August 2025. Only eight of 27 member states have named their national authorities, warns AIActo.eu, exposing enforcement gaps. Cybersecurity expert Brian Levine of FormerGov nails it: enterprises own the risk now, delays or not—fines up to 7% of global turnover await slip-ups.

    This isn't just bureaucracy; it's a philosophical pivot. Does delaying high-risk mandates stifle innovation in sandboxes, now pushed to December 2027, or give startups breathing room? In Berlin's tech hubs or Paris's AI labs, teams scramble: audit logs today mean market edge tomorrow, as Supra-Wall advises. Thought-provoking, right? The Act extraterritorially ropes in non-EU firms if they touch Europe—hello, Silicon Valley. As the EU AI Office ramps up in March 2026 guidance, per their enforcement notes, it's clear: AI's promise of efficiency clashes with perils of bias in justice systems or critical infrastructure. Will trilogues seal this by summer, or revert to 2026 crunch time? One thing's certain—the Act's teeth are sharpening, forcing us to code responsibly.

    Thanks for tuning in, listeners—subscribe for more deep dives into tomorrow's tech today. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI)
    4 min
  • EU Delays AI Act's Strictest Rules Until 2027, Giving Tech Giants and SMEs Crucial Breathing Room
    Mar 26 2026
    Imagine this: it's March 26, 2026, and I'm huddled in my Berlin apartment, laptop glowing like a digital hearth, as the EU AI Act's latest drama unfolds. Just days ago, on March 19, the European Sting reported that MEPs, with rapporteurs Arba Kokalari and Michael McNamara leading the charge, voted 101 to 9 to back postponing key high-risk AI rules. Why? Harmonized standards, common specifications, and national competent authorities aren't ready by the original August 2, 2026 deadline. This Digital Omnibus proposal, from the European Parliament's A10-0073/2026 report, shifts high-risk obligations for systems under Article 6(2) and Annex III to December 2, 2027, and those under Article 6(1) and Annex I to August 2, 2028. No more fixed-date panic; it's now tied to readiness, as Nemko's digital analysis highlights, easing the scramble for conformity assessments in medical devices and beyond.

    Think about it, listeners: the AI Act, Regulation (EU) 2024/1689, kicked off August 1, 2024, banning prohibited practices like social scoring by February 2025 and hitting general-purpose AI models—think OpenAI's GPTs—by August 2025. Providers like those behind foundation models now face the AI Office's sharpened claws, empowered under Article 75 to slap fines up to 3% of global turnover, per Trusaic's March 25 breakdown by Robert Sheen. But this Omnibus tweak clarifies the AI Office's role, excluding Annex I products while looping in same-provider general-purpose systems, and cuts the generative AI marking grace period from six to three months post-August 2026.

    As a tech ethicist tweaking my own high-risk hiring algorithm, I feel the ripple. Businesses in healthcare, finance, and law enforcement—deployers in 27 member states—gain breathing room, but the clock ticks. Aurora Trust warns SMEs need 3-6 months for compliance audits, EU database registration, and human oversight training. Push Annex I references to Annex B, and suddenly embedded AI in regulated products dodges dual bureaucracy, slashing costs without skimping on safety.

    This isn't delay for delay's sake; it's pragmatic evolution. The Council echoes Parliament, reinstating provider registrations and pushing AI sandboxes to December 2027. Extraterritorial bite means U.S. giants like Google must comply if outputs touch EU soil. Provocative question: Does this flexibility turbocharge EU innovation, or just let risky AI linger? In a world where GPAI blurs creator and deployer, the AI Office's implementing acts under Regulation 2019/1020 could redefine enforcement.

    The Act's genius is risk-tiering—unacceptable risks banned, high-risk scrutinized—but implementation snags expose the human in the machine. As Quantamix notes, full enforcement looms by 2027, urging us to build trustworthy AI now.

    Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI)
    4 min
It's now possible to dress up any text a little and add appropriate pauses and intonation to it. This is just plain text narrated by artificial intelligence.

Artificial voice, without pauses, etc.
