
Targeting AI

By: TechTarget Editorial
  • Summary

  • Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant communities, academia and the arts, as well as enterprise users of AI technology and advocates for data privacy and responsible AI. Topics tie into news events in the AI world, but the episodes are intended to have a longer, "evergreen" run: they are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast also occasionally hosts guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions, and includes some news-oriented episodes featuring Sutner and Ajao reviewing the news.
    Copyright 2023. All rights reserved.
Episodes
  • AWS GenAI strategy based on multimodel ecosystem, plus Titan, Q and Bedrock
    Jul 15 2024

    AWS is quietly building a generative AI ecosystem in which its customers can use many large language models from different vendors, or choose to employ the tech giant's own models, Q personal assistants, GenAI platforms and Trainium and Inferentia AI chips.

    AWS is also invested in working with its more than 100,000 AI customers, which the vendor calls partners. The tech giant provides not only the GenAI tools, but also the cloud infrastructure that undergirds GenAI deployment in enterprises.

    "We believe that there's no one model that's going to meet all the customer use cases," said Rohan Karmarkar, managing director of partner solutions architecture at AWS, on the Targeting AI podcast from TechTarget Editorial. "And if the customers want to really unlock the value, they might use different models or a combination of different models for the same use case."

    Customers find and deploy the LLMs on Amazon Bedrock, the tech giant's GenAI platform. The models are from leading GenAI vendors such as Anthropic, AI21 Labs, Cohere, Meta, Mistral and Stability AI, and also include models from AWS' Titan and Q lines.

    Karmarkar said AWS differentiates itself from its hyperscaler competitors, all of which have their own GenAI systems, with an array of tooling needed to implement GenAI applications, AI GPUs from hardware giant Nvidia, and AWS' own custom silicon infrastructure.

    AWS also prides itself on its security technology and its GenAI competency program, which pre-vets and validates partners' competence in putting GenAI to work in enterprise applications.

    The tech giant is also agnostic on the question of proprietary versus open source and open models, a big debate in the GenAI world at the moment.

    "There's no one decision criteria. I don't think we are pushing one [model] over another," Karmarkar said. "We're seeing a lot of customers using Anthropic, the Claude 3 model, which has got some of the best performance out there in the industry."

    "It's not an open source model, but we've also seen customers use Mistral and [Meta] Llama, which have much more openness," he added.

    Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 35 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.

    22 mins
  • Walmart uses generative AI for payroll, employee experience
    Jul 1 2024

    The biggest global retailer sees itself as a tech giant.

    And with 25,000 engineers and its own software ecosystem, Walmart isn't waiting to see how GenAI technology will play out.

    The company is already providing its employees -- referred to by the retailer as associates -- with in-house GenAI tools such as the My Assistant conversational chatbot.

    Associates can use the consumer-grade, ChatGPT-like tool to frame a press release, write guiding principles for a project, or accomplish whatever else they need.

    "What we're finding is as we teach our business partners what is possible, they come up with an endless set of use cases," said David Glick, senior vice president of enterprise business services at Walmart, on the Targeting AI podcast from TechTarget Editorial.

    Another point of emphasis for Walmart's GenAI work is associates' healthcare insurance claims.

    Walmart built a summarization agent that has reduced the time it takes to process complicated claims from a day or two to an hour or two, Glick said.

    Another important area in which Glick is implementing GenAI technology is payroll.

    "What I consider our most sacrosanct duty is to pay our associates accurately and timely," he said.

    Over the years, humans have monitored payroll. Now GenAI is helping them.

    "We want to scale up AI for anomaly detection so that we're looking at where we see things that might be wrong," Glick said. "And how do we have someone investigate and follow up on that."

    Meanwhile, as for the "build or buy" dilemma, Walmart tends to come down on the build side.

    The company uses a variety of large language models and has built its own machine learning platform, Element, for the models to run on.

    "The nice thing about that is that we can have a team that's completely focused on what is the best set of LLMs to use," Glick said. "We're looking at every piece of the organization and figuring out how can we support it with generative AI."

    Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.

    24 mins
  • Lenovo stakes claim to generative AI at the edge
    Jun 17 2024

    While Apple garnered wide attention for its recent embrace of generative AI for iPhones and Macs, rival endpoint device maker Lenovo already had a similar strategy in place.

    The multinational consumer products vendor, based in China, is known for its ThinkPad line of laptops and for mobile phones made by its Motorola subsidiary.

    But Lenovo has also, for a few years, been advancing a “pocket to cloud” approach to computing. That strategy now pairs GenAI capabilities residing on smartphones, AI PCs and laptops with more powerful processing in Lenovo data centers and customers’ private clouds.

    Since OpenAI’s ChatGPT, built on a large language model (LLM), disrupted the tech world in November 2022, GenAI systems have largely been cloud-based. Queries from edge devices run a GenAI prompt in the cloud, which returns the output to the user’s device.

    Lenovo’s strategy -- somewhat like Apple’s new one -- is to flip that paradigm and locate GenAI processing at the edge, routing outbound prompts to the data center or private cloud when necessary.

    The benefits include security, privacy, personalization and lower latency -- resulting in faster LLM responses and reducing the need for expensive compute, according to Lenovo.
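The edge-first pattern described above -- answer on the device when possible, escalate to the data center or cloud only when necessary -- can be sketched in a few lines. This is an illustrative assumption about the routing logic, not Lenovo's implementation; the function names, the word-count capability check, and the stub models are all hypothetical.

```python
# Hypothetical sketch of edge-first prompt routing: prefer an on-device
# model, and fall back to a cloud endpoint only for prompts the device
# cannot handle (approximated here by a simple length check).

def route_prompt(prompt, local_model, cloud_model, max_local_tokens=512):
    """Run the prompt locally if it fits the device's budget; otherwise
    escalate it to the cloud model."""
    if len(prompt.split()) <= max_local_tokens:
        return local_model(prompt)  # data stays on the device
    return cloud_model(prompt)      # escalate only when necessary

# Stubs standing in for an NPU-backed local LLM and a cloud-hosted LLM.
local = lambda p: "[local] " + p[:20]
cloud = lambda p: "[cloud] " + p[:20]

print(route_prompt("summarize this note", local, cloud))  # handled locally
```

Keeping the decision on the device is what delivers the privacy and latency benefits Lenovo cites: the prompt never leaves the endpoint unless the fallback branch is taken.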

    “Running these workloads at edge, on device, I'm not taking potentially proprietary IP and pushing that up into the cloud and certainly not the public cloud,” said Tom Butler, executive director, worldwide communication commercial portfolio at Lenovo, on the Targeting AI podcast from TechTarget Editorial.

    The edge devices that Lenovo talks about aren’t limited to the ones in your pocket and on your desk. They also include remote cameras and sensors in IoT AI applications such as monitoring manufacturing processes and facility security.

    “You have to process this data where it's created,” said Charles Ferland, vice president, general manager of edge computing at Lenovo, on the podcast. “And that is running on edge devices that are deployed in a gas station, convenience store, hospital, clinics -- wherever you want.”

    Meanwhile, Lenovo has in recent months rolled out partnerships with some big players in GenAI, including Nvidia and Qualcomm.

    The vendor is also heavily invested in working with neural processing units, or NPUs, in edge devices and innovative cooling systems for AI servers in its data centers.

    Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, analytics and data management technology. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.

    43 mins
