Artificial Intelligence Act - EU AI Act

By: Inception Point Ai
Listen for free

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2025 Inception Point Ai
Economics, Politics & Government
Episodes
  • EU's AI Act Sprint: Grace Periods and Loopholes as August Deadline Looms
    Feb 28 2026
    Imagine this: it's late February 2026, and I'm hunched over my desk in Berlin, the glow of my triple-monitor setup casting shadows on stacks of legal briefs. The EU AI Act, that monumental Regulation 2024/1689 adopted back in June 2024 by the European Parliament and Council, is barreling toward its full enforcement on August 2nd, just months away. As a tech policy analyst who's tracked this beast from its cradle, I can't shake the electric tension in the air—excitement laced with dread.

    Just this week, Euractiv dropped a bombshell: the European Commission has delayed its high-risk AI guidelines yet again, missing the February 2nd target and pushing back what was already a revised timeline. Reports from the CADE project warn that several member states haven't even named their national supervisory authorities. It's chaos in the implementation sprint, listeners, with CEN-CENELEC scrambling to finalize standards by late 2026 for that presumption of conformity.

    Enter the AI Omnibus proposal from the Commission in November 2025, as Pinsent Masons reports—a frantic bid to lighten the load before August. They're floating grace periods: six months extra for retrofitting transparency in generative AI already out there, up to February 2027. Small and mid-cap firms get concessions on registration if self-assessments show low real-world risk. AI literacy? Shifted from companies to the Commission and states. And get this: EU-level regulatory sandboxes for SMEs, expanding those national testing grounds to fend off fragmentation.

    But peel back the layers, and it's thought-provoking unease. AGPLaw lays out the risk tiers crystal clear—banned manipulative systems exploiting vulnerabilities, and high-risk mandates for healthcare, law enforcement, and education under Annex III, covering uses like critical infrastructure management or biometric categorization inferring sensitive traits. Providers must nail risk management, data governance, and technical documentation. Reed Smith clocks it alongside the Cyber Resilience Act and the Data Act in the same breath.

    Yet Cambridge Analytica's ghost haunts us, per their deep dive. The Act bans overt political profiling but greenlights behavioral inference in "low-risk" realms—marketing, ads, content recs. Think OCEAN personality models from Facebook likes, now powering Meta's $500 billion ad empire or Pymetrics' hiring games. It's surveillance capitalism rebranded as personalization: lenders profiling from app data, recommenders exploiting psych vulnerabilities. High-risk gets oversight; commerce gets a wink. Does this prevent another CA? No—it segments the infrastructure, preserving profitability while democracies breathe easier.

    As August looms, businesses in Brussels boardrooms and Canadian SMEs eyeing EU clients via Onley Law are stress-testing compliance. The Act's extraterritorial bite means global ripple effects. Will it foster ethical innovation or stifle it with bureaucracy? One thing's sure: AI's genie is out, and Europe's rewriting the bottle.

    Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI).
    4 m
  • EU AI Act 2026: Europe's High-Stakes Reckoning With Regulated Intelligence
    Feb 26 2026
    Imagine this: it's February 26, 2026, and I'm huddled in my Berlin apartment, staring at my laptop as the EU AI Act's gears grind louder than ever. The Act, formally adopted by the European Council on May 21, 2024, and entering force last August, isn't some distant dream anymore—it's reshaping how we code, deploy, and dream with artificial intelligence right here in the heart of Europe.

    Just days ago, on February 24, Crowell & Moring's client alert hit my feed, spotlighting 2026 as the reckoning for HR teams across the continent. High-risk AI systems—like those automating candidate selection at firms in Brussels or performance evals in Paris—are now demanding mandatory human oversight, transparency blasts to employee reps, and rigorous risk assessments. Picture this: your AI predicts turnover at a Munich startup, but under the Act, it needs trained overseers ready to override, or face fines up to 7% of global turnover. The Digital Omnibus package, unveiled by the European Commission on November 19, 2025, offers a lifeline—pushing some deadlines to December 2027 if harmonized standards lag, but companies like those in Belgium, bound by Collective Bargaining Agreement No. 39, can't wait; they must consult works councils now.

    Euractiv broke the news last week: the Commission delayed its high-risk AI guidance again, missing the original February 2 deadline in order to sift through stakeholder feedback. High-risk means stricter rules for everything from education tools in Amsterdam schools to recruitment bots at OpenAI deployers in Dublin. Meanwhile, Future Prep warns that EU AI governance flips to execution mode this year—boards in London-adjacent firms scrambling for evidence-backed controls and risk classifications.

    But here's the intellectual gut-punch: as the Council of Europe Framework Convention on AI, Human Rights, Democracy, and the Rule of Law gains traction—endorsed in recent European Parliament reports by co-rapporteurs—the Act bridges to global baselines. It bans manipulative AI, emotion recognition in workplaces, and social scoring, echoing prohibitions that tech giants like OpenAI have griped slow innovation. Silicon Canals reported back in February 2025 that startups weren't ready for the first enforcement wave; now, with phased rollouts hitting August 2026, the scramble intensifies. Copyright shadows loom too—Axel Voss's February 25 European Parliament report on generative AI demands licensing clarity under the CDSM Directive, barring non-compliant GenAI from EU markets to protect creators in Rome's studios.

    This isn't just red tape; it's a philosophical pivot. Does mandating FRIA—Fundamental Rights Impact Assessments—for public AI deployments foster trustworthy tech, or stifle the agentic AI revolution? As an engineer tweaking models in my flat, I wonder: will Europe's human-centric firewall export to Brazil or U.S. states like California, or fracture into a patchwork? The Act forces us to code with conscience, blending robustness, cybersecurity, and post-market monitoring. Yet delays signal the tension—innovation versus safety—in our silicon rush.

    Listeners, the EU AI Act isn't regulating AI; it's redefining our digital soul. Thank you for tuning in—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI).
    4 m
  • Europe's AI Reckoning: Six Months to Compliance as Brussels Tightens the Screws
    Feb 23 2026
    Imagine this: it's February 23, 2026, and I'm huddled in a Brussels café, steam rising from my espresso as I scroll through the latest dispatches on the EU AI Act. The regulation, formally Regulation (EU) 2024/1689, kicked off on August 2, 2024, but now, with high-risk obligations looming just six months away on August 2, 2026, the tension is electric. Prohibited practices like real-time facial recognition in public spaces have been banned since February 2025, and general-purpose AI models faced their transparency mandates last August. Yet, as Hamza Jadoon warned in his February 19 analysis, non-compliance could slap businesses with fines up to 35 million euros or 7% of global turnover—existential stakes for any tech outfit deploying AI in hiring, lending, or healthcare.

    Across town at the European Parliament, co-rapporteurs are pushing to ratify the Council of Europe's Framework Convention on AI, Human Rights, Democracy, and the Rule of Law. This binding treaty, born from talks starting in 2019, dovetails perfectly with the AI Act's risk-based framework, mandating Fundamental Rights Impact Assessments for high-risk public deployments. It insists on iterative risk management, human oversight—even for emerging agentic AIs—and the right to know when you're chatting with a bot. The Parliament's A10-0007/2026 report hails it as Europe's chance to export trustworthy AI, countering hybrid threats and power concentration while nurturing innovation in creative sectors hammered by generative AI.

    But here's the rub: the proposed AI Omnibus, floated by the European Commission in November 2025, signals a pivot from rigid rules to pragmatic deployment. According to 150sec's coverage, it delays high-risk deadlines by up to 18 months because technical standards lag—think incomplete guidelines on robustness and cybersecurity. Real Instituto Elcano critiques this as carving enforcement gaps, potentially letting malicious AI slip through, like persuasive systems fueling disinformation. Meanwhile, the Commission's first draft Code of Practice on AI transparency, per Kirkland & Ellis, maps "high-level" rules for watermarking AI-generated content by August 2026, with a final version eyed for June.

    Even copyright's in the fray. The European Parliament's January 2026 compromise amendments demand licensing regimes for GenAI training on protected works, threatening to bar non-compliant providers from the EU market. French President Emmanuel Macron echoed this resolve at India's AI Summit last week, vowing Europe as a "safe space" for innovation while prohibiting unacceptable risks.

    Listeners, as August 2026 barrels toward us, the AI Act isn't just law—it's a litmus test. Will it harmonize rights and tech, or fracture under delays? Businesses, dust off that 180-day compliance playbook: inventory systems, classify risks, bake in human oversight. Europe leads, but the world watches—will we build AI that amplifies humanity, or amplifies peril?

    Thank you for tuning in, and please subscribe for more. This has been a Quiet Please production, for more check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI).
    4 m
It's now possible to polish any text a little and add appropriate pauses and intonation to it. This, though, is just plain text narrated by artificial intelligence.

An artificial voice, without pauses, etc.
