AF - An issue with training schemers with supervised fine-tuning by Fabien Roger

  • Jun 27 2024
  • Length: 17 mins
  • Podcast

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An issue with training schemers with supervised fine-tuning, published by Fabien Roger on June 27, 2024 on The AI Alignment Forum.

This is a rough write-up of ideas by Ryan Greenblatt. The core ideas are his and mistakes are mine. Thanks to him, Buck Shlegeris, and Aghyad Deeb for feedback on the draft of this post.

One way to use scheming AIs is to train them to imitate humans. When does imitation robustly avoid frequent and intentional failures?

  • On single-token classification tasks, imitation works great and can allow you to get a lot of high-quality labor with a tiny rate of intentional failures;
  • On tasks that involve generating many tokens, imitation potentially fails by default: schemers could notice when an output was generated by themselves, which enables them to consistently behave badly in deployment while doing best-effort imitation during training.

We describe two techniques that avoid imitation failures on generation tasks:

  • A technique that works perfectly but is too expensive;
  • A technique that we think works in practice.

While pure imitation may not be sufficiently competitive, we think it could prove useful for tasks such as:

  • (No-CoT) single-token classification;
  • KL-regularization for RL: we may want to penalize models trained with RL for straying too far from a human imitation prior (and we would want this human prior to not be malign!) (see Appendix; a sketch of this kind of penalty appears at the end of this section);
  • Tasks for which it is fine to take the competitiveness hit from not using RL, but which weaker trusted models are not good enough at.[1]

Assumptions

In this post, we focus on scenarios where:

  • AIs are smart (around human-level or above);
  • AIs are scheming and know how we are training and deploying them;
  • We want to use these AIs productively (e.g. to do safety research)[2];
  • Training and deployment tasks are indistinguishable (e.g. because we randomly sampled training tasks from the deployment distribution);
  • SGD works: we ignore gradient hacking and AI architectures where SGD is very sample-inefficient;
  • AIs can't cause catastrophes in very few actions (e.g. because they are too weak to break the safety measures that address these sorts of concerns);
  • Even when AIs try their best, they can't perfectly predict the probability distribution over the actions that humans[3] would take.

Easy-mode: imitating human judgment on classification tasks

Here is how you can use an AI to 100x the speed at which you can complete a billion single-token (IID) classification tasks (e.g. detecting software bugs, scoring project ideas, …):

  1. Sample 1% of the tasks, and ask humans to label them (using their judgment);
  2. Train the AI on that 1% of tasks you have solved yourself, using normal training (e.g. SGD with a cross-entropy loss);
  3. Use the AI to solve the remaining 99% of tasks.

This will get you an AI which failed intentionally at most a few times in the whole training set, because intentional failures would have quickly been trained away[4]. Since deployment tasks are drawn from the same distribution as the training tasks, that means you got at most a few hundred intentional failures in the whole set of a billion tasks, i.e. an intentional failure rate of at most around 1/10M. Not bad![5] In other words, human imitation avoids intentional low-stakes failures: the model won't often be worse than a best-effort imitation of a human. (But a best-effort imitation of a human could still be catastrophic if that isn't sufficiently capable.)
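To make the easy-mode procedure above concrete, here is a minimal sketch of the split/train/deploy workflow and the failure-rate arithmetic. It is not code from the post: the synthetic data and the scikit-learn logistic regression are toy stand-ins for the real task distribution and the fine-tuned (potentially scheming) model, which the post assumes is trained with SGD and a cross-entropy loss.

```python
# Minimal sketch of the "easy-mode" imitation pipeline (toy stand-ins throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a large pool of single-token (IID) classification tasks,
# scaled down from the post's one billion tasks.
n_tasks = 100_000
X = rng.normal(size=(n_tasks, 16))          # task features
human_labels = (X[:, 0] > 0).astype(int)    # what human judgment would output

# 1. Sample 1% of the tasks and have humans label them.
train_idx = rng.choice(n_tasks, size=n_tasks // 100, replace=False)

# 2. Train the AI on that 1% using normal (cross-entropy) training.
model = LogisticRegression().fit(X[train_idx], human_labels[train_idx])

# 3. Use the AI to solve the remaining 99% of tasks.
deploy_mask = np.ones(n_tasks, dtype=bool)
deploy_mask[train_idx] = False
predictions = model.predict(X[deploy_mask])

# Failure-rate arithmetic from the text: if intentional failures get trained
# away after at most a few occurrences in the 1% training split, and deployment
# tasks are IID with training tasks, the intentional failure rate is bounded by
# roughly (a few) / (training set size).
few = 3
total_tasks = 1_000_000_000
training_set_size = total_tasks // 100            # 10M human-labeled tasks
failure_rate = few / training_set_size            # ~ a few per 10M
expected_failures = failure_rate * total_tasks    # ~ a few hundred overall
print(f"intentional failure rate <= ~{failure_rate:.0e}, "
      f"i.e. ~{expected_failures:.0f} intentional failures across 1B tasks")
```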
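Returning to the KL-regularization use case listed above: the post defers the details to its appendix, so the following is only a sketch of the standard form such a penalty usually takes (as in common KL-penalized RL setups), with the human-imitation policy written as π_imit; none of the notation below is taken from the post itself.

$$\max_{\theta}\;\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_{\theta}(\cdot \mid x)}\big[r(x, y)\big]\;-\;\beta\,\mathbb{E}_{x \sim \mathcal{D}}\Big[\mathrm{KL}\big(\pi_{\theta}(\cdot \mid x)\,\big\|\,\pi_{\mathrm{imit}}(\cdot \mid x)\big)\Big]$$

Here π_imit is the human-imitation policy obtained as above, and β controls how far RL is allowed to push the policy away from that (hopefully non-malign) prior.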
The basic problem with imitating humans on generation tasks

But most tasks we care about involve generating many tokens (e.g. coding a complex experiment, writing a detailed project proposal, …). Does imitation work for such generation tasks? Not quite; here is a problem that could arise if you train an AI to imitate human answers on 1% of tasks and use it on the remaining 99% without additional supervision: on the 1% of training tasks, the AI tries its best to imitate human answers when the star...