• Power and Responsibility of Large Language Models | Safety & Ethics | OpenAI Model Spec + RLHF | Anthropic Constitutional AI | Episode 27

  • Jun 17 2024
  • Duration: 17 min
  • Podcast

  • Summary

  • With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for putting them to nefarious uses grows as well. Anthropic uses Constitutional AI, while OpenAI uses a Model Spec combined with RLHF (Reinforcement Learning from Human Feedback), not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey powers to maximize usefulness and harmlessness. (For a flavor of the Constitutional AI loop, see the code sketch after the references below.)

    REFERENCE

    OpenAI Model Spec

    https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

    Anthropic Constitutional AI

    https://www.anthropic.com/news/claudes-constitution



    For more information, check out https://www.superprompt.fm, where you can contact me and/or sign up for our newsletter.
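
    Below is a minimal, illustrative Python sketch of the critique-and-revise loop at the heart of Constitutional AI, as discussed in the episode. The call_model function is a hypothetical stand-in for any LLM completion API, and the single principle shown paraphrases Claude's published constitution; this is a sketch of the idea, not Anthropic's actual implementation.

    # Hypothetical stand-in for an LLM completion API; a real loop would
    # call an actual model endpoint here.
    def call_model(prompt: str) -> str:
        return f"<model output for: {prompt[:40]}...>"

    # One principle, paraphrased from Claude's published constitution.
    CONSTITUTION = [
        "Choose the response that is most helpful, honest, and harmless.",
    ]

    def critique_and_revise(prompt: str, rounds: int = 1) -> str:
        """Supervised phase of Constitutional AI: draft an answer, then
        critique and revise it against each constitutional principle."""
        draft = call_model(prompt)
        for _ in range(rounds):
            for principle in CONSTITUTION:
                critique = call_model(
                    f"Critique this response against the principle "
                    f"'{principle}':\n{draft}"
                )
                draft = call_model(
                    f"Rewrite the response to address this critique.\n"
                    f"Critique: {critique}\nResponse: {draft}"
                )
        # In training, these revised answers become fine-tuning data.
        return draft

    if __name__ == "__main__":
        print(critique_and_revise("How do I pick a strong password?"))

    In Anthropic's actual pipeline, a second reinforcement-learning phase uses AI-generated preference labels (RLAIF) rather than human labels, which is what distinguishes Constitutional AI from OpenAI's RLHF-plus-Model-Spec approach.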
