Artificial Intelligence Act - EU AI Act Podcast, by Inception Point Ai


Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2025 Inception Point Ai
Economics, Politics & Government
Episodes
  • EU AI Act Transforms from Theory to Operational Reality, Shaping Global Tech Landscape
    Dec 13 2025
    Let me take you straight into Brussels, into a building where fluorescent lights hum over stacks of regulatory drafts, and where, over the past few days, the EU AI Act has quietly shifted from abstract principle to operational code running in the background of global tech.

    Here’s the pivot: as of this year, bans on so‑called “unacceptable risk” AI are no longer theory. According to the European Commission and recent analysis from Truyo and Electronic Specifier, systems for social scoring, manipulative nudging, and certain real‑time biometric surveillance are now flat‑out illegal in the European Union. That’s not ethics talk; that’s market shutdown talk.

    Then, in August 2025, the spotlight swung to general‑purpose AI models. King & Spalding and ISACA both point out that rules for these GPAI systems are now live: transparency, documentation, and risk management are no longer “nice to have” – they’re compliance surfaces. If you’re OpenAI, Anthropic, Google DeepMind, or a scrappy European lab in Berlin or Paris, the model card just turned into a quasi‑legal artifact. And yes, the EU backed this with a Code of Practice that many companies are treating as the de facto baseline.

    But here’s the twist from the last few weeks: the Digital Omnibus package. The European Commission’s own digital‑strategy site confirms that on 19 November 2025, Brussels proposed targeted amendments to the AI Act. Translation: the EU just admitted the standards ecosystem and guidance aren’t fully ready, so it wants to delay some of the heaviest “high‑risk” obligations. Reporting from King & Spalding and DigWatch frames this as a pressure‑release valve for banks, hospitals, and critical‑infrastructure players that were staring down impossible timelines.

    So now we’re in this weird liminal space. Prohibitions are in force. GPAI transparency rules are in force. But many of the most demanding high‑risk requirements might slide toward 2027 and 2028, with longstop dates the Commission can’t move further. Businesses get breathing room, but also more uncertainty: compliance roadmaps have become living documents, not Gantt charts.

    Meanwhile, the European AI Office in Brussels is quietly becoming an institutional supernode. The Commission’s materials and the recent EU & UK AI Round‑up describe how that office will directly supervise some general‑purpose models and even AI embedded in very large online platforms. That’s not just about Europe; that’s about setting de facto global norms, the way GDPR did for privacy.

    And looming over all of this are the penalties. MetricStream notes fines that can reach 35 million euros or 7 percent of global annual turnover. That’s not a governance nudge; that’s an existential risk line item on a CFO’s spreadsheet.

    So the question I’d leave you with is this: when innovation teams in San Francisco, Bengaluru, and Tel Aviv sketch their next model architecture, are they really designing for performance first, or for the EU’s risk taxonomy and its sliding but very real deadlines?

    Thanks for tuning in, and make sure you subscribe so you don’t miss the next deep dive into how law rewires technology. This has been a Quiet Please production; for more, check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI).
    4 m
  • EU Builds Gigantic AI Operating System, Quietly Patches It
    Dec 11 2025
    Picture this: Europe has built a gigantic operating system for AI, and over the past few days Brussels has been quietly patching it.

    The EU Artificial Intelligence Act formally entered into force back in August 2024, but only now is the real story starting to bite. The European Commission, under President Ursula von der Leyen, is scrambling to make the law usable in practice. According to the Commission’s own digital strategy site, they have rolled out an “AI Continent Action Plan,” an “Apply AI Strategy,” and even an “AI Act Service Desk” to keep everyone from startups in Tallinn to medtech giants in Munich from drowning in paperwork.

    But here is the twist listeners should care about this week. On November nineteenth, the Commission dropped what lawyers are calling the Digital Omnibus, a kind of mega‑patch for EU tech rules. Inside it sits an AI Omnibus, which, as firms like Sidley Austin and MLex report, quietly proposes to delay some of the toughest obligations for so‑called high‑risk AI systems: think law‑enforcement facial recognition, medical diagnostics, and critical infrastructure controls. Instead of hard dates, compliance for many of these use cases would now be tied to when Brussels actually finishes the technical standards and guidance it has been promising.

    That sounds like a reprieve, but it is really a new kind of uncertainty. Compliance Week notes that companies are now asking whether they should invest heavily in documentation, auditing, and model governance now, or wait for yet another “clarification” from the European AI Office. Meanwhile, unacceptable‑risk systems, like manipulative social scoring, are already banned, and rules for general‑purpose AI models begin phasing in next year, backed by a Commission‑endorsed Code of Practice highlighted by ISACA. In other words, if you are building or deploying foundation models in Europe, the grace period is almost over.

    So the EU AI Act is becoming two things at once. For policymakers in Brussels and capitals like Paris and Berlin, it is a sovereignty play: a chance to make Europe the “AI continent,” complete with AI factories, gigafactories, and billions in InvestAI funding. For engineers and CISOs in London, San Francisco, or Bangalore whose systems touch EU users, it is starting to look more like a living API contract: continuous updates, version drift, and a non‑negotiable requirement to log, explain, and sometimes throttle what your models are allowed to do.

    The real question for listeners is whether this evolving rulebook nudges AI toward being more trustworthy, or just more bureaucratic. When deadlines slip but documentation expectations rise, the only safe bet is that AI governance is no longer optional; it is infrastructure.

    Thanks for tuning in, and don’t forget to subscribe. This has been a Quiet Please production; for more, check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI).
    3 m
  • EU AI Act Transforms Into Live Operating System Upgrade for AI Builders
    Dec 8 2025
    Let’s talk about the week the EU AI Act stopped being an abstract Brussels bedtime story and turned into a live operating system upgrade for everyone building serious AI.

    The European Union’s Artificial Intelligence Act has been in force since August 2024, but the big compliance crunch was supposed to hit in August 2026. Then, out of nowhere on November 19, the European Commission dropped the so‑called Digital Omnibus package. According to the Commission’s own announcement, this bundle quietly rewires the timelines and the plumbing of the AI Act, tying it to cybersecurity, data rules, and even a new Data Union Strategy designed to feed high‑quality data into European AI models.

    Here’s the twist: instead of forcing high‑risk AI systems into full compliance by August 2026, the Commission now proposes a readiness‑based model. Compliance & Risks explains that high‑risk obligations would only really bite once harmonised standards, common specifications, and detailed guidance exist, with a long‑stop of December 2027 for the most sensitive use cases like law enforcement and education. Law firm analyses from Crowell & Moring and JD Supra underline the same point: Brussels is effectively admitting that you cannot regulate what you haven’t technically specified yet.

    So on paper it’s a delay. In practice, it’s a stress test. Raconteur notes that companies trading into the EU still face phased obligations starting back in February 2025: bans on “unacceptable risk” systems like untargeted biometric scraping, obligations for general‑purpose and foundation models from August 2025, and full governance, monitoring, and incident‑reporting architectures for high‑risk systems once the switch flips. You get more time, but you have fewer excuses.

    Inside the institutions, the AI Board just held its sixth meeting, where the Commission laid out how it will use interim guidelines to plug the gap while standardisation bodies scramble to finish technical norms. That means a growing stack of soft law: guidance, Q&As, sandboxes. DLA Piper points to a planned EU‑level regulatory sandbox, with priority access for smaller players, but don’t confuse that with a safe zone; it is more like a monitored lab environment.

    The politics are brutal. Commentators like Eurasia Review already talk about “backsliding” on AI rules, especially for neighbours such as Switzerland, which now must track moving targets in EU law while competing on speed. Meanwhile, UK firms, as Raconteur stresses, risk fines of up to 7 percent of global turnover if they sell into the EU and ignore the Act.

    So where does that leave you, as a listener building or deploying AI? The era of “move fast and break things” in Europe is over. The new game is “move deliberately and log everything.” System inventories, model cards, training‑data summaries, risk registers, human‑oversight protocols, post‑market monitoring: these are no longer nice‑to‑haves, they are the API for legal permission to innovate.

    The EU AI Act isn’t just a law; it’s Europe’s attempt to encode a philosophy of AI into binding technical requirements. If you want to play on the EU grid, your models will have to speak that language.

    Thanks for tuning in, and don’t forget to subscribe. This has been a Quiet Please production; for more, check out quietplease.ai.

    Some great Deals https://amzn.to/49SJ3Qs

    For more check out http://www.quietplease.ai

    This content was created in partnership with, and with the help of, artificial intelligence (AI).
    4 m
It’s now possible to touch up any text and give it appropriate pauses and intonation, but this is just plain text narrated by artificial intelligence.

An artificial voice, without pauses, etc.
