Humans of Martech Podcast by Phil Gamache

Humans of Martech


By: Phil Gamache

Future-proofing the humans behind the tech. Follow Phil Gamache and Darrell Alfonso on their mission to help people build successful careers in the constantly expanding universe of martech. ©2025 Humans of Martech Inc.
Categories: Economics, Career Success, Marketing, Marketing & Sales
Episodes
  • 200: Matthew Castino: How Canva measures marketing
    Dec 16 2025
    What’s up everyone, today we have the pleasure of sitting down with Matthew Castino, Marketing Measurement Science Lead @ Canva.
    (00:00) - Intro
    (01:10) - In This Episode
    (03:50) - Canva’s Prioritization System for Marketing Experiments
    (11:26) - What Happened When Canva Turned Off Branded Search
    (18:48) - Structuring Global Measurement Teams for Local Decision Making
    (24:32) - How Canva Integrates Marketing Measurement Into Company Forecasting
    (31:58) - Using MMM Scenario Tools To Align Finance And Marketing
    (37:05) - Why Multi-Touch Attribution Still Matters at Canva
    (42:42) - How Canva Builds Feedback Loops Between MMM and Experiments
    (46:44) - Canva’s AI Workflow Automation for Geo Experiments
    (51:31) - Why Strong Coworker Relationships Improve Career Satisfaction
    Summary: Canva operates at a scale where every marketing decision carries huge weight, and Matt leads the measurement function that keeps those decisions grounded in science. He leans on experiments to challenge assumptions that models inflate. As the company grew, he reshaped measurement so centralized models stayed steady while embedded data scientists guided decisions locally, and he built one forecasting engine that finance and marketing can trust together. He keeps multi-touch attribution in play because user behavior exposes patterns MMM misses, and he treats disagreements between methods as signals worth examining. AI removes the bottlenecks around geo tests, data questions, and creative tagging, giving his team space to focus on evidence instead of logistics.
    About Matthew
    Matthew Castino blends psychology, statistics, and marketing intuition in a way that feels almost unfair. With a PhD in Psychology and a career spent building measurement systems that actually work, he’s now the Marketing Measurement Science Lead at Canva, where he turns sprawling datasets and ambitious growth questions into evidence that teams can trust.
    His path winds through academia, health research, and the high-tempo world of sports trading. At UNSW, Matt taught psychology and statistics while contributing to research at CHETRE. At Tabcorp, he moved through roles in customer profiling, risk systems, and US/domestic sports trading: spaces where every model, every assumption, and every decision meets real consequences fast. Those years sharpened his sense for what signal looks like in a messy environment. Matt lives in Australia and remains endlessly curious about how people think, how markets behave, and why measurement keeps getting harder, and more fun.
    Canva’s Prioritization System for Marketing Experiments
    Canva’s marketing experiments run in conditions that rarely resemble the clean, product-controlled environment that most tech companies love to romanticize. Matthew works in markets filled with messy signals, country-level quirks, channel-specific behaviors, and creative that behaves differently depending on the audience. Canva built a world-class experimentation platform for product, but none of that machinery helps when teams need to run geo tests or channel experiments across markets that function on completely different rhythms. Marketing had to build its own tooling, and Matthew treats that reality with a mix of respect and practicality.
    His team relies on a prioritization system grounded in two concrete variables:
    • Spend
    • Uncertainty
    Large budgets demand measurement rigor because wasted dollars compound across millions of impressions. Matthew cares about placing the most reliable experiments behind the markets and channels with the biggest financial commitments. He pairs that with a very sober evaluation of uncertainty. His team pulls signals from MMM models, platform lift tests, creative engagement, and confidence intervals. They pay special attention to MMM intervals that expand beyond comfortable ranges, especially when historical spend has not varied enough for the model to learn. He reads weak creative engagement as a warning sign because poor engagement usually drags efficiency down even before the attribution questions show up.
    “We try to figure out where the most money is spent in the most uncertain way.”
    The next challenge sits in the structure of the team. Matthew ran experimentation globally from a centralized group for years, and that model made sense when the company footprint was narrower. Canva now operates in regions where creative norms differ sharply, and local teams want more authority to respond to market dynamics in real time. Matthew sees that centralization slows everything once the company reaches global scale. He pushes for embedded data scientists who sit inside each region, work directly with marketers, and build market-specific experimentation roadmaps that reflect local context. That way experimentation becomes a partner to strategy instead of a bottleneck.
    Matthew avoids building a tower of approvals because heavy process often suffocates marketing momentum. He prefers a model where teams follow shared principles, ...
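Matthew’s “most money spent in the most uncertain way” rule can be sketched as a simple scoring pass. This is a minimal illustration with made-up numbers; the field names and the choice of MMM ROI-interval width as the uncertainty proxy are assumptions for the sketch, not Canva’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ChannelEstimate:
    """Hypothetical MMM output for one channel/market."""
    name: str
    spend: float      # budget committed to this channel
    roi_low: float    # lower bound of the MMM ROI interval
    roi_high: float   # upper bound of the MMM ROI interval

def priority_score(ch: ChannelEstimate) -> float:
    # Weight spend by interval width: a wide ROI interval means the
    # model has not learned much, so big budgets there get tested first.
    return ch.spend * (ch.roi_high - ch.roi_low)

channels = [
    ChannelEstimate("paid_search_us", 5_000_000, 1.8, 2.2),
    ChannelEstimate("social_br", 1_200_000, 0.5, 3.5),
    ChannelEstimate("display_de", 400_000, 0.2, 4.0),
]

# Experiments go to the highest-scoring channels first.
ranked = sorted(channels, key=priority_score, reverse=True)
for ch in ranked:
    print(ch.name, round(priority_score(ch)))
```

Note how a mid-sized budget with a very wide interval can outrank the biggest budget: the scoring targets uncertainty-weighted dollars, not raw spend.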
    56 m
  • 199: Anna Aubuchon: Moving BI workloads into LLMs and using AI to build what you used to buy
    Dec 9 2025
    What’s up everyone, today we have the pleasure of sitting down with Anna Aubuchon, VP of Operations at Civic Technologies.
    (00:00) - Intro
    (01:15) - In This Episode
    (04:15) - How AI Flipped the Build Versus Buy Decision
    (07:13) - Redrawing What “Complex” Means
    (12:20) - Why In-House AI Provides Better Economics And Control
    (15:33) - How to Treat AI as an Insourcing Engine
    (21:02) - Moving BI Workloads Out of Dashboards and Into LLMs
    (31:37) - Guardrails That Keep AI Querying Accurate
    (38:18) - Using Role-Based AI Guardrails Across MCP Servers
    (44:43) - Ops People are Creators of Systems Rather Than Maintainers of Them
    (48:12) - Why Natural Language AI Lowers the Barrier for First-Time Builders
    (52:31) - Technical Literacy Requirements for Next-Generation Operators
    (56:46) - Why Creative Practice Strengthens Operational Leadership
    Summary: AI has reshaped how operators work, and Anna lays out that shift with the clarity of someone who has rebuilt real systems under pressure. She breaks down how old build-versus-buy habits hold teams back, how yearly AI contracts quietly drain momentum, and how modern integrations let operators assemble powerful workflows without engineering bottlenecks. She contrasts scattered one-off AI tools with the speed that comes from shared patterns that spread across teams. Her biggest story lands hard: Civic replaced slow dashboards and long queues with orchestration that pulls every system into one conversational layer, letting people get answers in minutes instead of mornings. That speed created nerves around sensitive identity data, but tight guardrails kept the team safe without slowing anything down. Anna ends by pushing operators to think like system designers, not tool babysitters, and to build with the same clarity her daughter uses when she describes exactly what she wants and watches the system take shape.
    About Anna
    Anna Aubuchon is an operations executive with 15+ years building and scaling teams across fintech, blockchain, and AI. As VP of Operations at Civic Technologies, she oversees support, sales, business operations, product operations, and analytics, anchoring the company’s growth and performance systems.
    She has led blockchain operations since 2014 and built cross-functional programs that moved companies from early-stage complexity into stable, scalable execution. Her earlier roles at Gyft and Thomson Reuters focused on commercial operations, enterprise migrations, and global team leadership, supporting revenue retention and major process modernization efforts.
    How AI Flipped the Build Versus Buy Decision
    AI tooling has shifted so quickly that many teams are still making decisions with a playbook written for a different era. Anna explains that the build-versus-buy framework people lean on carries assumptions that no longer match the tool landscape. She sees operators buying AI products out of habit, even when internal builds have become faster, cheaper, and easier to maintain. She connects that hesitation to outdated mental models rather than actual technical blockers.
    AI platforms keep rolling out features that shrink the amount of engineering needed to assemble sophisticated workflows. Anna names the layers that changed this dynamic. System integrations through MCP act as glue for data movement. Tools like n8n and Lindy give ops teams workflow automation without needing to file tickets. Then ChatGPT Agents and Claude Skills launched with prebuilt capabilities that behave like Lego pieces for internal systems. Direct LLM access removed the fear around infrastructure that used to intimidate nontechnical teams. She describes the overall effect as a compression of technical overhead that once justified buying expensive tools.
    She uses Civic’s analytics stack to illustrate how she thinks about the decision. Analytics drives the company’s ability to answer questions quickly, and modern integrations kept the build path light. Her team built the system because it reinforced a core competency. She compares that with an AI support bot that would need to handle very different audiences with changing expectations across multiple channels. She describes that work as high domain complexity that demands constant tuning, and the build cost would outweigh the value. Her team bought that piece. She grounds everything in two filters that guide her decisions: core competency and domain complexity.
    Anna also calls out a cultural pattern that slows AI adoption. Teams buy AI tools individually and create isolated pockets of automation. She wants teams to treat AI workflows as shared assets. She sees momentum building when one group experiments with a workflow and others borrow, extend, or remix it. She believes this turns AI adoption into a group habit rather than scattered personal experiments. She highlights the value of shared patterns because they create a repeatable way for teams to test ideas without rebuilding from scratch.
    She closes by urging operators to update their ...
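In spirit, the kind of guardrail that keeps conversational BI querying safe around sensitive data could look like the following hypothetical check (not Civic’s actual implementation): LLM-generated SQL is rejected unless it is a single read-only SELECT, so the conversational layer can never mutate data or schema.

```python
import re

# Hypothetical allow-list guardrail for LLM-generated SQL:
# only single, read-only SELECT statements reach the warehouse.
MUTATING = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate|merge)\b",
    re.IGNORECASE,
)

def is_safe_query(sql: str) -> bool:
    statement = sql.strip().rstrip(";")
    if ";" in statement:                     # no stacked statements
        return False
    if not statement.lower().startswith("select"):
        return False
    return not MUTATING.search(statement)    # no mutating keywords anywhere
```

A role-based variant would layer per-user table allow-lists on top of this check; in production one would parse the SQL properly rather than rely on regexes alone.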
    1 h
  • 198: Pam Boiros: 10 Ways to support women and build more inclusive AI
    Dec 2 2025
    What’s up everyone, today we have the pleasure of sitting down with Pam Boiros, Fractional CMO and Marketing Advisor, and Co-Founder of Women Applying AI.
    (00:00) - Intro
    (01:13) - In This Episode
    (03:49) - How To Audit Data Fingerprints For AI Bias In Marketing
    (07:39) - Why Emotional Intelligence Improves AI Prompting Quality
    (10:14) - Why So Many Women Hesitate
    (15:40) - Why Collaborative AI Practice Builds Confidence In Marketing Ops Teams
    (18:31) - How to Go From AI Curious to AI Confident
    (24:32) - Joining The ‘Women Applying AI’ Community
    (27:18) - Other Ways to Support Women in AI
    (28:06) - Role Models and Visibility
    (32:55) - Leadership’s Role in Inclusion
    (35:57) - Mentorship for the AI Era
    (38:15) - Why Story-Driven Communities Strengthen AI Adoption for Women
    (42:17) - AI’s Role in Women’s Work-Life Harmony
    (45:22) - Why Personal History Strengthens Creative Leadership
    Summary: Pam delivers a clear, grounded look at how women learn and lead with AI, moving from biased datasets to late-night practice sessions inside Women Applying AI. She brings sharp examples from real teams, highlights the quiet builders shaping change, and roots her perspective in the resilience she learned from the women in her own family. If you want a straightforward view of what practical, human-centered AI adoption actually looks like, this episode is worth your time.
    About Pam
    Pam Boiros is a consultant who helps marketing teams find direction and build plans that feel doable. She leads Marketing AI Jump Start and works as a fractional CMO for clients like Reclaim Health, giving teams practical ways to bring AI into their day-to-day work. She’s also a founding member of Women Applying AI, a community launched in September 2025 that creates a supportive space for women to learn AI together and grow their confidence in the field.
    Earlier in her career, Pam spent 12 years at a fast-growing startup that Skillsoft later acquired, then stepped into senior marketing and product leadership there for another three and a half years. That blend of startup pace and enterprise structure shapes how she guides her clients today.
    How To Audit Data Fingerprints For AI Bias In Marketing
    AI bias spreads quietly in marketing systems, and Pam treats it as a pattern problem rather than a mistake problem. She explains that models repeat whatever they have inherited from the data, and that repetition creates signals that look normal on the surface. Many teams read those signals as truth because the outputs feel familiar. Pam has watched marketing groups make confident decisions on top of datasets they never examined, and she believes this is how invisible bias gains momentum long before anyone sees the consequences.
    Pam describes every dataset as carrying a fingerprint. She studies that fingerprint by zooming into the structure, the gaps, and the repetition. She looks for missing groups, inflated representation, and subtle distortions baked into the source. She builds this into her workflow because she has seen how quickly a model amplifies the same dominant voices that shaped the data. She brings up real scenarios from her own career where women were labeled as edge cases in models even though they represented half the customer base. These patterns shape everything from product recommendations to retention scores, and she believes many teams never notice because the numbers look clean and objective.
    “Every dataset has a fingerprint. You cannot see it at first glance, but it becomes obvious once you look for who is overrepresented, who is underrepresented, or who is misrepresented.”
    Pam organizes her process into three cycles that marketers can use immediately. The habit works because it forces scrutiny at every stage, not just at kickoff:
    • Before building, trace the data source, the people represented, and the people missing.
    • While building, stress test the system across groups that usually sit at the margins.
    • After launch, monitor outputs with the same rhythm you use for performance analysis.
    She treats these cycles as an operational discipline. She compares the scale of bias to a compounding effect, since one flawed assumption can multiply into hundreds of outputs within hours. She has seen pressure to ship faster push teams into trusting defaults, which creates the illusion of objectivity even when the system leans heavily toward one group’s behavior. She wants marketers to recognize that AI audits function like quality control, and she encourages them to build review rituals that continue as the model learns. She believes this daily maintenance protects teams from subtle drift where the model gradually leans toward the patterns it already prefers.
    Pam views long-term monitoring as the part that matters most. She knows how fast AI systems evolve once real customers interact with them. Bias shifts as new data enters the mix. Entire segments disappear because the model interprets their silence as disengagement. Other segments dominate because ...
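The “before building” cycle, checking who is over-, under-, or unrepresented, can be sketched as a representation audit. This is a hypothetical helper with invented data, not Pam’s actual tooling: it compares each group’s share of a dataset against a baseline share (for example, the known customer base) and flags gaps beyond a tolerance.

```python
from collections import Counter

def representation_gaps(records, group_key, baseline, tolerance=0.1):
    """Flag groups whose share of the dataset deviates from a baseline.

    records: list of dicts; group_key: field to audit;
    baseline: {group: expected share}; tolerance: allowed deviation.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in baseline.items():
        observed = counts.get(group, 0) / total   # 0 if entirely missing
        if abs(observed - expected) > tolerance:
            flags[group] = {"observed": round(observed, 2), "expected": expected}
    return flags

# Invented example: women are half the customer base but a fifth of the data.
records = [{"gender": "m"}] * 80 + [{"gender": "f"}] * 20
flags = representation_gaps(records, "gender", {"m": 0.5, "f": 0.5})
```

The same function run on a schedule covers the “after launch” cycle: re-auditing model inputs and outputs with the rhythm of a normal performance review catches the gradual drift Pam warns about.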
    49 m