Episodes

  • Strategies For Building A Product Using LLMs At DataChat
    Mar 3 2024
    Summary
    Large Language Models (LLMs) have rapidly captured the attention of the world with their impressive capabilities. Unfortunately, they are often unpredictable and unreliable. This makes building a product based on their capabilities a unique challenge. Jignesh Patel is building DataChat to bring the capabilities of LLMs to organizational analytics, allowing anyone to have conversations with their business data. In this episode he shares the methods that he is using to build a product on top of this constantly shifting set of technologies. (An illustrative code sketch related to keeping business data out of hosted LLM calls appears after this entry.)
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Jignesh Patel about working with LLMs: understanding how they work and how to build your own.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you start by sharing some of the ways that you are working with LLMs currently?
    - What are the business challenges involved in building a product on top of an LLM model that you don't own or control?
    - In the current age of business, your data is often your strategic advantage. How do you avoid losing control of, or leaking, that data while interfacing with a hosted LLM API?
    - What are the technical difficulties related to using an LLM as a core element of a product when they are largely a black box?
    - What are some strategies for gaining visibility into the inner workings or decision-making rules of these models?
    - What are the factors, whether technical or organizational, that might motivate you to build your own LLM for a business or product?
    - Can you unpack what it means to "build your own" when it comes to an LLM?
    - In your work at DataChat, how has the progression of sophistication in LLM technology impacted your own product strategy?
    - What are the most interesting, innovative, or unexpected ways that you have seen LLMs/DataChat used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working with LLMs?
    - When is an LLM the wrong choice?
    - What do you have planned for the future of DataChat?
    Contact Info
    - Website
    - LinkedIn
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    DataChat; CMU == Carnegie Mellon University; SVM == Support Vector Machine; Generative AI; Genomics; Proteomics; Parquet; OpenAI Codex; LLama; Mistral; Google Vertex; Langchain; Retrieval Augmented Generation; Prompt Engineering; Ensemble Learning; XGBoost; Catboost; Linear Regression; COGS == Cost Of Goods Sold; Bruce Schneier - AI And Trust
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    49 m
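    The interview above asks how to avoid losing control of, or leaking, business data while interfacing with a hosted LLM API. The snippet below is a minimal, hypothetical sketch of one common mitigation pattern, not a description of DataChat's actual implementation: only a table schema and the user's question are sent to the model, the model is asked to return SQL, and that SQL is executed locally so raw rows never leave your environment. The call_llm helper, the schema string, and the table are invented placeholders.

```python
# Hypothetical sketch only -- not DataChat's implementation.
# Pattern: share only metadata (a schema) with a hosted LLM and run the
# generated SQL locally, so raw rows never leave your environment.
import sqlite3


def call_llm(prompt: str) -> str:
    """Placeholder for a hosted LLM API call; assumed to return a SQL string."""
    raise NotImplementedError("wire up your LLM provider here")


# Invented schema for illustration.
SCHEMA = "orders(order_id INTEGER, region TEXT, revenue REAL, order_date TEXT)"


def answer_question(question: str, conn: sqlite3.Connection):
    # Only the schema and the natural-language question go to the model.
    prompt = (
        f"Given the table {SCHEMA}, write one SQLite query that answers: "
        f"{question}. Return only the SQL."
    )
    sql = call_llm(prompt)
    # The query runs locally against the database; result rows are never
    # sent back to the hosted LLM.
    return conn.execute(sql).fetchall()
```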
  • Improve The Success Rate Of Your Machine Learning Projects With bizML
    Feb 18 2024
    Summary
    Machine learning is a powerful set of technologies, holding the potential to dramatically transform businesses across industries. Unfortunately, the implementation of ML projects often fails to achieve its intended goals. This failure is due to a lack of collaboration and investment across technological and organizational boundaries. To help improve the success rate of machine learning projects, Eric Siegel developed the six-step bizML framework, outlining the steps needed to ensure that everyone understands the whole process of ML deployment. In this episode he shares the principles and promise of that framework and his motivation for encapsulating it in his book "The AI Playbook".
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Eric Siegel about how the bizML approach can help improve the success rate of your ML projects.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what bizML is and the story behind it?
    - What are the key aspects of this approach that are different from the "industry standard" lifecycle of an ML project?
    - What are the elements of your personal experience as an ML consultant that helped you develop the tenets of bizML?
    - Who are the personas that need to be involved in an ML project to increase the likelihood of success?
    - Who do you find to be best suited to "own" or "lead" the process?
    - What are the organizational patterns that might hinder the work of delivering on the goals of an ML initiative?
    - What are some of the misconceptions about the work involved in, and capabilities of, an ML model that you commonly encounter?
    - What is your main goal in writing your book "The AI Playbook"?
    - What are the most interesting, innovative, or unexpected ways that you have seen the bizML process in action?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML projects and developing the bizML framework?
    - When is bizML the wrong choice?
    - What are the future developments in organizational and technical approaches to ML that will improve the success rate of AI projects?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    The AI Playbook: Mastering the Rare Art of Machine Learning Deployment by Eric Siegel; Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel; Columbia University; Machine Learning Week Conference; Generative AI World; Machine Learning Leadership and Practice Course; Rexer Analytics; KD Nuggets; CRISP-DM; Random Forest; Gradient Descent
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    50 m
  • Using Generative AI To Accelerate Feature Engineering At FeatureByte
    Feb 11 2024
    Summary
    One of the most time-consuming aspects of building a machine learning model is feature engineering. Generative AI offers the possibility of accelerating the discovery and creation of feature pipelines. In this episode Colin Priest explains how FeatureByte is applying generative AI models to the challenge of building and maintaining machine learning pipelines.
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Colin Priest about applying generative AI to the task of building and deploying AI pipelines.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you start by giving the 30,000 foot view of the steps involved in an AI pipeline? (Understand the problem, feature ideation, feature engineering, experiment, optimize, productionize)
    - What are the stages of that process that are prone to repetition?
    - What are the ways that teams typically try to automate those steps?
    - What are the features of generative AI models that can be brought to bear on the design stage of an AI pipeline?
    - What are the validation/verification processes that engineers need to apply to the generated suggestions?
    - What are the opportunities/limitations for unit/integration style tests?
    - What are the elements of developer experience that need to be addressed to make the gen AI capabilities an enhancement instead of a distraction?
    - What are the interfaces through which the AI functionality can/should be exposed?
    - What are the aspects of pipeline and model deployment that can benefit from generative AI functionality?
    - What are the potential risk factors that need to be considered when evaluating the application of this functionality?
    - What are the most interesting, innovative, or unexpected ways that you have seen generative AI used in the development and maintenance of AI pipelines?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on the application of generative AI to the ML workflow?
    - When is generative AI the wrong choice?
    - What do you have planned for the future of FeatureByte's AI copilot capabilities?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    FeatureByte; Generative AI; The Art of War; OCR == Optical Character Recognition; Genetic Algorithm; Semantic Layer; Prompt Engineering
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    Support The Machine Learning Podcast
    45 m
  • Learn And Automate Critical Business Workflows With 8Flow
    Jan 28 2024
    Summary
    Every business develops their own specific workflows to address their internal organizational needs. Not all of them are properly documented, or even visible. Workflow automation tools have tried to reduce the manual burden involved, but they are rigid and require substantial investment of time to discover and develop the routines. Boaz Hecht co-founded 8Flow to iteratively discover and automate pieces of workflows, bringing visibility and collaboration to the internal organizational processes that keep the business running.
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Boaz Hecht about using AI to automate customer support at 8Flow.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what 8Flow is and the story behind it?
    - How does 8Flow compare to RPA tools that companies are using today?
    - What are the opportunities for augmenting or integrating with RPA frameworks?
    - What are the key selling points for the solution that you are building? (Does AI sell? Or is it about the realized savings?)
    - What are the sources of signal that you are relying on to build model features?
    - Given the heterogeneity in tools and processes across customers, what are the common focal points that let you address the widest possible range of functionality?
    - Can you describe how 8Flow is implemented?
    - How have the design and goals evolved since you first started working on it?
    - What are the model categories that are most relevant for process automation in your product?
    - How have you approached the design and implementation of your MLOps workflow? (model training, deployment, monitoring, versioning, etc.)
    - What are the open questions around product focus and system design that you are still grappling with?
    - Given the relative recency of ML/AI as a profession and the massive growth in attention and activity, how are you addressing the challenge of obtaining and maximizing human talent?
    - What are the most interesting, innovative, or unexpected ways that you have seen 8Flow used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on 8Flow?
    - When is 8Flow the wrong choice?
    - What do you have planned for the future of 8Flow?
    Contact Info
    - LinkedIn
    - Personal Website
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    8Flow; Robotic Process Automation
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    Support The Machine Learning Podcast
    43 m
  • Considering The Ethical Responsibilities Of ML And AI Engineers
    Jan 28 2024
    Summary
    Machine learning and AI applications hold the promise of drastically impacting every aspect of modern life. With that potential for profound change comes a responsibility for the creators of the technology to account for the ramifications of their work. In this episode Nicholas Cifuentes-Goodbody guides us through the minefields of social, technical, and ethical considerations that are necessary to ensure that this next generation of technical and economic systems is equitable and beneficial for the people that it impacts. (A small illustrative Fairlearn sketch appears after this entry.)
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Nicholas Cifuentes-Goodbody about the different elements of the machine learning workflow where ethics need to be considered.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - To start with, who is responsible for addressing the ethical concerns around AI?
    - What are the different ways that AI can have positive or negative outcomes from an ethical perspective?
    - What is the role of practitioners/individual contributors in the identification and evaluation of ethical impacts of their work?
    - What are some utilities that are helpful in identifying and addressing bias in training data?
    - How can practitioners address challenges of equity and accessibility in the delivery of AI products?
    - What are some of the options for reducing the energy consumption for training and serving AI?
    - What are the most interesting, innovative, or unexpected ways that you have seen ML teams incorporate ethics into their work?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on ethical implications of ML?
    - What are some of the resources that you recommend for people who want to invest in their knowledge and application of ethics in the realm of ML?
    Contact Info
    - WorldQuant University's Applied Data Science Lab
    - LinkedIn
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    UNESCO Recommendation on the Ethics of Artificial Intelligence; European Union AI Act; How machine learning helps advance access to human rights information; Disinformation, Team Jorge; China, AI, and Human Rights; How China Is Using A.I. to Profile a Minority; Weapons of Math Destruction; Fairlearn; AI Fairness 360; Allen Institute for AI NYT; Allen Institute for AI; Transformers; AI4ALL; WorldQuant University; How to Make Generative AI Greener; Machine Learning Emissions Calculator; Practicing Trustworthy Machine Learning; Energy and Policy Considerations for Deep Learning; Natural Language Processing; Trolley Problem; Protected Classes; fairlearn (scikit-learn); BERT Model
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    39 m
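    One of the questions above asks about utilities for identifying and addressing bias in training data, and Fairlearn appears in the episode links. As a minimal illustration (the labels, predictions, and group memberships below are invented purely for the example), Fairlearn's MetricFrame can break a metric down by a sensitive attribute and report the gap between groups:

```python
# Minimal Fairlearn sketch with made-up data: compare accuracy across groups.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)      # accuracy for each group
print(mf.difference())  # largest accuracy gap between groups
```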
  • Build Intelligent Applications Faster With RelationalAI
    Dec 31 2023
    Summary
    Building machine learning systems and other intelligent applications is a complex undertaking. It often requires retrieving data from a warehouse engine, adding an extra barrier to every workflow. The RelationalAI engine was built as a co-processor for your data warehouse that adds a greater degree of flexibility in the representation and analysis of the underlying information, simplifying the work involved. In this episode CEO Molham Aref explains how RelationalAI is designed, the capabilities that it adds to your data clouds, and how you can start using it to build more sophisticated applications on your data.
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Molham Aref about RelationalAI and the principles behind it for powering intelligent applications.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what RelationalAI is and the story behind it?
    - On your site you call your product an "AI Co-processor". Can you explain what you mean by that phrase?
    - What are the primary use cases that you address with the RelationalAI product?
    - What are the types of solutions that teams might build to address those problems in the absence of something like the RelationalAI engine?
    - Can you describe the system design of RelationalAI?
    - How have the design and goals of the platform changed since you first started working on it?
    - For someone who is using RelationalAI to address a business need, what does the onboarding and implementation workflow look like?
    - What is your design philosophy for identifying the balance between automating the implementation of certain categories of application (e.g. NER) vs. providing building blocks and letting teams assemble them on their own?
    - What are the data modeling paradigms that teams should be aware of to make the best use of the RKGS platform and Rel language?
    - What are the aspects of customer education that you find yourself spending the most time on?
    - What are some of the most under-utilized or misunderstood capabilities of the RelationalAI platform that you think deserve more attention?
    - What are the most interesting, innovative, or unexpected ways that you have seen the RelationalAI product used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on RelationalAI?
    - When is RelationalAI the wrong choice?
    - What do you have planned for the future of RelationalAI?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    RelationalAI; Snowflake; AI Winter; BigQuery; Gradient Descent; B-Tree; Navigational Database; Hadoop; Teradata; Worst Case Optimal Join; Semantic Query Optimization; Relational Algebra; HyperGraph; Linear Algebra; Vector Database; Pathway (Data Engineering Podcast Episode); Pinecone (Data Engineering Podcast Episode)
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    58 m
  • Building Better AI While Preserving User Privacy With TripleBlind
    Nov 22 2023
    Summary
    Machine learning and generative AI systems have produced truly impressive capabilities. Unfortunately, many of these applications are not designed with the privacy of end-users in mind. TripleBlind is a platform focused on embedding privacy-preserving techniques in the machine learning process to produce more user-friendly AI products. In this episode Gharib Gharibi explains how the current generation of applications can be susceptible to leaking user data and how to counteract those trends. (A generic differential-privacy sketch appears after this entry.)
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Gharib Gharibi about the challenges of bias and data privacy in generative AI models.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Generative AI has been gaining a lot of attention and speculation about its impact. What are some of the risks that these capabilities pose?
    - What are the main contributing factors to their existing shortcomings?
    - What are some of the subtle ways that bias in the source data can manifest?
    - In addition to inaccurate results, there is also a question of how user interactions might be re-purposed and potential impacts on data and personal privacy. What are the main sources of risk?
    - With the massive attention that generative AI has created and the perspectives that are being shaped by it, how do you see that impacting the general perception of other implementations of AI/ML?
    - How can ML practitioners improve and convey the trustworthiness of their models to end users?
    - What are the risks for the industry if generative models fall out of favor with the public?
    - How does your work at TripleBlind help to encourage a conscientious approach to AI?
    - What are the most interesting, innovative, or unexpected ways that you have seen data privacy addressed in AI applications?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on privacy in AI?
    - When is TripleBlind the wrong choice?
    - What do you have planned for the future of TripleBlind?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    TripleBlind; ImageNet Geoffrey Hinton Paper; BERT language model; Generative AI; GPT == Generative Pre-trained Transformer; HIPAA Safe Harbor Rules; Federated Learning; Differential Privacy; Homomorphic Encryption
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    47 m
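    Differential privacy is one of the topics linked for this episode. The snippet below is a generic textbook illustration, unrelated to TripleBlind's own protocols: the Laplace mechanism adds noise calibrated to a query's sensitivity so that the presence or absence of any single record is hard to infer from the released answer.

```python
# Generic Laplace-mechanism sketch: an epsilon-differentially-private count.
import numpy as np


def dp_count(records, epsilon: float) -> float:
    """Noisy count of records. A counting query has sensitivity 1, so
    Laplace noise with scale 1/epsilon yields epsilon-DP."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise


records = list(range(1000))            # stand-in for individual user records
print(dp_count(records, epsilon=0.5))  # smaller epsilon means more noise
```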
  • Enhancing The Abilities Of Software Engineers With Generative AI At Tabnine
    Nov 13 2023
    Summary
    Software development involves an interesting balance of creativity and repetition of patterns. Generative AI has accelerated the ability of developer tools to provide useful suggestions that speed up the work of engineers. Tabnine is one of the main platforms offering an AI powered assistant for software engineers. In this episode Eran Yahav shares the journey that he has taken in building this product and the ways that it enhances the ability of humans to get their work done, and when the humans have to adapt to the tool.
    Announcements
    Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Eran Yahav about building an AI powered developer assistant at Tabnine.
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what Tabnine is and the story behind it?
    - What are the individual and organizational motivations for using AI to generate code?
    - What are the real-world limitations of generative AI for creating software? (e.g. size/complexity of the outputs, naming conventions, etc.)
    - What are the elements of skepticism/oversight that developers need to exercise while using a system like Tabnine?
    - What are some of the primary ways that developers interact with Tabnine during their development workflow?
    - Are there any particular styles of software for which an AI is more appropriate/capable? (e.g. webapps vs. data pipelines vs. exploratory analysis, etc.)
    - For natural languages there is a strong bias toward English in the current generation of LLMs. How does that translate into computer languages? (e.g. Python, Java, C++, etc.)
    - Can you describe the structure and implementation of Tabnine?
    - Do you rely primarily on a single core model, or do you have multiple models with subspecialization?
    - How have the design and goals of the product changed since you first started working on it?
    - What are the biggest challenges in building a custom LLM for code?
    - What are the opportunities for specialization of the model architecture given the highly structured nature of the problem domain?
    - For users of Tabnine, how do you assess/monitor the accuracy of recommendations?
    - What are the feedback and reinforcement mechanisms for the model(s)?
    - What are the most interesting, innovative, or unexpected ways that you have seen Tabnine's LLM powered coding assistant used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI assisted development at Tabnine?
    - When is an AI developer assistant the wrong choice?
    - What do you have planned for the future of Tabnine?
    Contact Info
    - LinkedIn
    - Website
    Parting Question
    - From your perspective, what is the biggest barrier to adoption of machine learning today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    TabNine; Technion University; Program Synthesis; Context Stuffing; Elixir; Dependency Injection; COBOL; Verilog; MidJourney
    The intro and outro music is from "Hitman's Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra (CC BY-SA 3.0).
    1 h 5 m