
Humans of Martech


By: Phil Gamache

Future-proofing the humans behind the tech. Follow Phil Gamache and Darrell Alfonso on their mission to help future-proof the humans behind the tech and have successful careers in the constantly expanding universe of martech.
©2026 Humans of Martech Inc.
Categories: Economics, Career Success, Marketing, Marketing & Sales
Episodes
  • 214: Austin Hay: Claude Code is creating a new class of elite marketers and the mental models that make it click
    Apr 7 2026
    What's up everyone, today we have the pleasure of sitting down with Austin Hay: martech, revtech, and GTM systems advisor, AND AI builder, writer, and ex-founder.

    In This Episode:
    (00:00) - Austin-audio
    (01:16) - In This Episode
    (01:54) - Sponsor: RevenueHero
    (02:48) - Sponsor: Mammoth Growth
    (04:09) - How Code-Driven AI Workflows Outperform Chat-Based Prompting
    (14:55) - How to Start Building With Claude Code When You Have No Time
    (19:45) - The Programming Concepts Non-Developers Need to Build With Claude Code
    (23:49) - How to Turn Repeating Prompts Into Automations That Run Themselves
    (31:11) - Sponsor: MoEngage
    (32:07) - Sponsor: Knak
    (33:37) - Why Spending All Your Time in Meetings Is a Career Liability
    (36:28) - Why the Best First Claude Code Project Is the Task That Already Annoys You
    (40:22) - Why T-Shaped Marketers With Claude Code Will Cover the Work of Entire Teams
    (46:27) - Why Marketing Taste Matters More Than Technical Skill in the AI Era
    (49:43) - How Early-Career Professionals Build Judgment When Entry-Level Work Gets Automated
    (53:14) - How Austin Hay Runs His Career as a Flywheel

    About Austin: Austin Hay has spent 15 years moving between the technical and strategic ends of marketing, starting as the 4th employee at Branch, building and selling a mobile growth consultancy that was acqui-hired by mParticle, and eventually rising to VP of Growth before moving on to Ramp as Head of Martech. He later co-founded Clarify, a CRM startup he took from zero to $100K+ ARR while completing a Wharton MBA. Today he works as a fractional advisor to scaling companies on martech, revtech, and GTM systems, teaches thousands of practitioners through his Martech course at Reforge, and writes the Growth Stack Mafia newsletter on Substack.

    Austin spent months as a chatbot skeptic before Claude Code changed his view entirely. In this conversation, he maps the gap between using AI through a chat interface and wielding it as code in your actual environment, explains why meeting-heavy schedules are a compounding career liability, and makes the case for a new class of professional he calls the white collar super saiyan.

    ---

    ## How Code-Driven AI Workflows Outperform Chat-Based Prompting

    Most marketers use AI the same way they used Google in 2005. Open the interface, type something in, read what comes back, copy it somewhere. Austin Hay did this for months. He was not an early Claude Code adopter. He says this upfront, almost as a confession. He thought it was another chatbot.

    What broke him was specific. He was querying financial data at his startup, Clarify, through Runway, an FP&A platform connected to QuickBooks. Every SQL change required the same round trip: write the query in the terminal, copy it to Claude, get feedback, paste it back, run it. He built a folder just to manage the back-and-forth. The model couldn't see his local files. The chat UI had upload limits. He was stuck in what he calls a world of calling and answering. Functional. But slow. And bounded in a way you eventually stop ignoring.

    Claude Code gave him access. When you type claude in a terminal, the model reads your actual files — the data as it lives in your repository, not a paste you copied, not a summary you wrote. It runs commands against your system, observes what happens, and acts on the result. The round trip ends. You stop relaying information and start working in the same environment. That is a different thing than a smarter chatbot.

    The shift coincided with several unlocks arriving at once: Opus as a model, MCPs that worked reliably, a Max plan that made unlimited credits economical, and an agent architecture built around memory files and commands. All of it hit critical mass for Austin in January. He says the last 6 months felt like 3 years. You can hear in how he talks about it that he means it.

    The 2 chasms he had written about in his newsletter turned out to be real and distinct. Adopting AI at all is chasm 1. Crossing from chat to code is chasm 2. Most practitioners have cleared the first. Almost none have cleared the second. And the view from the other side, Austin says, is unrecognizable.

    > "It's this culmination of many things that I think really hit this critical mass in about January of this year."

    Key takeaway: Install Claude Code, open a terminal, point it at a folder with files you actually work with — SQL queries, drafts, data exports, notes — and run a real task on them. The gap between giving AI access to your environment and describing your environment through a chat window is immediate and felt, and that feeling is what changes the mental model.

    ---

    ## How to Start Building With Claude Code When You Have No Time

    The time problem is real. You have a 9-to-5. Your weekends disappear. Nobody at your company is running AI hackathons. "Learn the command line" is not advice you can act on between your Thursday syncs.

    Austin doesn't dismiss this. But he points at the part most people miss: they know step 1 (chat interface) and they see step 3...
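    To make the "calling and answering" loop concrete, here is a minimal hypothetical sketch in Python using the Anthropic SDK. The file path and model id are invented for illustration, not taken from the episode; the point is that you ferry a local file to the model by hand and copy the answer back yourself, which is exactly the round trip Claude Code collapses by reading files and running commands directly.

    ```python
    # A sketch of the chat-style round trip: manually ferrying a local
    # SQL file to the model, then copying its answer back by hand.
    import pathlib

    import anthropic  # pip install anthropic; needs ANTHROPIC_API_KEY set

    client = anthropic.Anthropic()

    # The local file a chat UI can't see on its own (hypothetical path).
    query = pathlib.Path("reports/monthly_burn.sql").read_text()

    message = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Review this SQL for errors and suggest fixes:\n\n{query}",
        }],
    )

    # You still paste the suggestion back into your editor and rerun the
    # query yourself -- the step Claude Code performs in your environment.
    print(message.content[0].text)
    ```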
    1 hr 3 min
  • 213: John Whalen: The next marketing advantage is pre-testing ideas on synthetic users
    Mar 31 2026
    What's up everyone, today we have the pleasure of sitting down with Dr. John Whalen, Cognitive Scientist, Author, and Founder at Brilliant Experience.

    Summary: John has spent his career studying how people actually think, and his conclusion is uncomfortable for anyone who believes their marketing decisions are more rational than they are. In this episode, John explores how synthetic users built from cognitive science principles can fill the massive research gap that most teams quietly ignore, and why removing the human interviewer from the room might be the fastest way to finally hear the truth.

    In This Episode:
    (00:00) - Intro
    (01:13) - In This Episode
    (04:31) - What Are Synthetic Users and Why Do They Matter?
    (10:00) - How Synthetic Users Make Stakeholders Hungry for Real Human Research
    (15:56) - Pre-Testing on Synthetic Users: Shortcut or Smart Step?
    (18:53) - How to Actually Build a Synthetic User: Tools, Layers, and Agentic Systems
    (40:51) - Is the Average Persona Dead? Scale, Diversity, and the World Model
    (43:01) - Asking the Uncomfortable Questions: What AI Agents Reveal That Humans Won't
    (49:30) - Ending the Quant vs. Qual Debate with Statistically Relevant Qualitative Data
    (56:37) - Mining the 'Why' Behind Silent Behavioral Data with Synthetic Users
    (01:02:31) - Designing for Agent Users: The Coming Shift to Human-and-Machine-Centered Design
    (01:05:28) - The Happiness Question: Dogs, Nature, and Staying Analog

    About John: Dr. John Whalen is a Cognitive Scientist, Author, and Founder of Brilliant Experience, where he applies cognitive science principles to help organizations design products and experiences that align with how people actually think and make decisions. He's also an educator, teaching two AI customer research courses on Maven. His work explores the intersection of human psychology and marketing, including the emerging practice of pre-testing ideas on synthetic users to give brands a faster and more informed competitive edge. He is also the author of a book on the science of designing for the human mind, bringing academic rigor to practical business challenges.

    ---

    ## How Synthetic User Research Works and When to Trust It

    Synthetic user research sounds like something creepy out of a dystopian science fiction film, and John is the first to admit the terminology does nobody any favors. When asked what synthetic users actually are and what they mean for research, he admitted that if he had been on the branding team, he would have pushed hard for something like "dynamic personas" instead. The name creates unnecessary friction before the conversation even starts. And that friction matters when you're trying to get skeptical executives or meticulous researchers to take the whole thing seriously.

    Under the hood, specialized AI tools simulate how a defined audience segment would respond to a question, concept, or stimulus, without recruiting, scheduling, incentivizing, or waiting on real human participants. John runs a class where he collects genuine human data first, then feeds comparable inputs into these tools to benchmark accuracy head-to-head. The results are pretty wild: AI-generated responses align with real human findings somewhere between 85% and 100% of the time on major topics and consumer needs. That is not a peer-reviewed clinical trial, and John is not pretending otherwise. But 85% alignment is enough signal to stop reflexively dismissing the method and start asking harder, more specific questions about exactly where it fits into a research stack.

    So what does this mean for you and your company? Think of all the decisions that currently live in a black hole of zero structured input. How many product calls, campaign concepts, and messaging pivots happen with nothing more than a conference room full of people who all follow the same talking heads on LinkedIn? John argues that low cost, round-the-clock accessibility, and minimal public exposure make these tools a natural fit for precisely those moments: pressure-checking a hypothesis at 11pm, testing whether a pitch direction even makes sense before it touches a client, or deciding whether a concept deserves the time and money required for proper validation.

    > "If these are only going to keep getting better and better, which they are, then logically, what kinds of decisions right now go completely by gut and no research, and what could we use to help us frame that?"

    One of the more underappreciated angles John raises is global inclusivity. Large organizations routinely test in the US and Western Europe, then extrapolate those findings to markets in Southeast Asia, Latin America, or Sub-Saharan Africa because local research budgets simply do not exist. Big no-no. Synthetic personas trained on broader, more representative data could at minimum provide directional signals for those markets, making research more geographically honest without a proportional spike in spend.

    The early AI bias problem, where models essentially mirrored the...
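    As a concrete illustration of the mechanic underneath these tools, here is a minimal hypothetical sketch in Python with the Anthropic SDK: condition the model on a structured persona, then ask it the question you would have put to a recruited participant. The persona fields, survey question, and model id are all invented for illustration; production tools layer sampling, world models, and agentic pipelines on top, and benchmarking against real human panels, as John does in his class, is what earns trust in the output.

    ```python
    # A sketch of a single synthetic respondent: a persona-conditioned
    # model answering a question in character.
    import anthropic  # pip install anthropic; needs ANTHROPIC_API_KEY set

    client = anthropic.Anthropic()

    # Hypothetical persona; real tools use richer, data-grounded layers.
    persona = {
        "age": 34,
        "role": "lifecycle marketing manager",
        "company_size": "200-500 employees",
        "region": "Southeast Asia",
        "pain_points": ["tool sprawl", "attribution pressure"],
    }

    system_prompt = (
        "You are a single survey respondent. Stay strictly in character "
        f"as this person: {persona}. Answer in first person, in 2-3 "
        "sentences, with the hesitations a real participant would voice."
    )

    message = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=300,
        system=system_prompt,
        messages=[{
            "role": "user",
            "content": "Would switching this product to flat-rate pricing "
                       "make you more or less likely to renew? Why?",
        }],
    )

    print(message.content[0].text)
    ```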
    1 hr 8 min
  • 212: Tobias Konitzer: The Causal AI revolution and the boomerang effect in marketing decision science
    Mar 24 2026
    Summary: Tobi challenged marketing's fixation on prediction. He has built highly accurate LTV models, but accuracy alone does not move revenue. Marketing is intervention. Correlation shows patterns; causality tells you what happens when you pull a lever. That shift reshapes experimentation, explains why dynamic allocation can outperform static A/B tests, and highlights how self-learning systems can backfire or get stuck in local maxima. It also fuels his skepticism of unleashing agentic AI on historical data without a causal layer. If you want to change outcomes instead of forecasting them, your systems need to understand levers and log decisions you can actually audit.

    In This Episode:
    (00:00) - Intro
    (01:22) - In This Episode
    (04:07) - Why Predictive Models Fail Without Causal Inference
    (09:49) - How to Validate Causal Impact on Customer Lifetime Value
    (13:04) - Reducing Uncertainty Around Causal Effects by Optimizing Levers, Not Labels
    (17:01) - Why Dynamic Allocation Works Better Than Fixed-Horizon A/B Testing
    (31:54) - The Boomerang Effect and Why Uninformed AI Sabotages Early Results
    (40:15) - Escaping Local Maxima and the Failure of Randomly Initialized Decisioning
    (44:04) - Why Agentic AI Trained on Data Warehouse Correlations Reinforces Bias
    (49:00) - The Power of Composable Decisioning
    (53:06) - How Machine Decisioning Transcends Marketing
    (01:01:41) - Why Clear Priority Hierarchies Improve Executive Decision Making

    About Tobias: Tobias Konitzer, PhD is VP of AI at GrowthLoop, where he's chasing closed-loop marketing powered by reinforcement learning, causality, and agentic systems. He's spent the past decade focused on one core problem: moving beyond prediction to actually influencing outcomes. Previously, Tobi was Chief Innovation Officer at Fenix Commerce, helping major eCommerce brands modernize checkout and delivery with machine learning. He also founded Ocurate, a venture-backed startup that predicted customer lifetime value to optimize ad bidding in real time, raising $5.5M and scaling to $500K+ ARR before its acquisition. Earlier, he co-founded PredictWise, building psychographic and behavioral targeting models that drove over $2M in revenue. Tobi earned his PhD in Computational Social Science from Stanford and worked at Facebook Research on large-scale ML and bias correction. Originally from Germany and based in the Bay Area since 2013, he writes frequently about causal thinking, machine decisioning, and the future of marketing.

    ---

    ## Why Predictive Models Fail Without Causal Inference

    Prediction dominates most marketing roadmaps. Teams invest months refining churn models, tightening confidence intervals, and debating which threshold deserves a campaign. Tobi built an entire company on that logic. His team produced highly accurate lifetime value predictions using deep learning and granular event data. The forecasts were sharp. The lift curves were clean. Buyers were impressed.

    Then lifecycle marketers asked a more uncomfortable question: what action should follow the score?

    A predictive model encodes the current trajectory of a customer under existing policies. It describes what will likely happen if nothing changes. Marketing changes things constantly. The moment you intervene, you alter the system that generated the prediction. The forecast reflects yesterday's conditions, not tomorrow's strategy.

    > "Prediction tells you the future if you do nothing. Causation tells you how to change it."

    Consider the Prediction Trap. On one side, the status quo labels a person as high churn risk. The function is observation. The outcome is a description of what happens if you leave the system untouched. On the other side, a lever gets pulled. The function is intervention. The outcome is directional change.

    That shift in function changes how you work. Prediction thinking centers on segmentation:
    - Who is likely to churn?
    - Who is likely to buy?
    - Who looks like high LTV?

    Causal thinking centers on levers:
    - Which incentive reduces churn?
    - Which sequence increases repeat purchase?
    - Which offer raises lifetime value incrementally?

    Tobi often uses an LTV example to expose the trap. Suppose high-LTV customers frequently viewed a specific product early in their journey. A team might redesign the onboarding flow to feature that product more aggressively. The correlation looks persuasive. The causal effect remains unknown. Several alternative explanations could drive the pattern:
    - The product may correlate with a specific acquisition channel.
    - The product may have been highlighted during a limited campaign.
    - The product view may signal prior brand familiarity.

    Only an intervention test can estimate incremental impact. Correlation can guide hypothesis generation, but it cannot validate the lever itself.

    Tobi also highlights a deeper issue. Acting on predictions introduces compounding uncertainty across multiple layers:
    - The predictive model carries statistical variance.
    - The translation from model features to campaign strategy introduces interpretation bias.
    - The ...
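    Since the teaser leans on the claim that only an intervention test can license a causal conclusion, here is a minimal Python sketch of that test: randomize exposure to the featured product, then estimate incremental lift as a difference in proportions with a normal-approximation interval. The conversion rates are simulated stand-ins, not figures from the episode.

    ```python
    # A sketch of the intervention test a warehouse correlation can't
    # replace: randomized exposure, then a difference-in-proportions lift.
    import math
    import random

    random.seed(42)

    def simulate_user(treated: bool) -> int:
        # Hypothetical ground truth: 10% base conversion, +2pp true lift.
        p = 0.12 if treated else 0.10
        return 1 if random.random() < p else 0

    n = 20_000
    treated = [simulate_user(True) for _ in range(n)]
    control = [simulate_user(False) for _ in range(n)]

    p_t, p_c = sum(treated) / n, sum(control) / n
    lift = p_t - p_c

    # Standard error of a difference between two independent proportions.
    se = math.sqrt(p_t * (1 - p_t) / n + p_c * (1 - p_c) / n)
    lo, hi = lift - 1.96 * se, lift + 1.96 * se

    print(f"estimated lift: {lift:+.3%} (95% CI {lo:+.3%} to {hi:+.3%})")
    # If the interval excludes zero, the lever moved the metric; the
    # correlation alone could never support that claim.
    ```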
    1 hr 5 min
No reviews yet