Click here to download the base paper for the sycophancy project.

Abstract:

Sycophancy is an undesirable behaviour in which a model tailors its responses to follow a human user's view even when that view is not objectively correct (e.g., adapting liberal views once a user reveals that they are liberal). In this paper, we study the prevalence of sycophancy in language models and propose a simple synthetic-data intervention to reduce it. First, on a set of three tasks (Perez et al., 2022) where models are asked for an opinion on statements with no correct answer (e.g., politics), we observe that both model scaling and instruction tuning significantly increase sycophancy for PaLM models up to 540B parameters. Second, we extend this evaluation to simple addition statements that are objectively incorrect, finding that although language models know these statements are wrong, they will still agree with them if the user does as well. To reduce sycophancy, we present a straightforward synthetic-data intervention that takes public NLP tasks and encourages models to be robust to user opinions on those tasks. Adding these data in a lightweight finetuning step significantly reduces sycophantic behaviour on held-out prompts. Code for generating the synthetic intervention data can be found here.
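Since the intervention is essentially a data-generation recipe, a short sketch may help readers of this page see the idea. The Python snippet below is a minimal, hypothetical illustration, not the paper's exact format: take a labelled claim from a public NLP task, prepend a randomized user biography and opinion that is sampled independently of the gold label, and keep the gold label as the finetuning target so the model learns to ignore the stated opinion. The prompt template, the name/profession lists, and the make_sycophancy_example helper are all assumptions made for illustration.

    import random

    # A minimal sketch of the synthetic-data intervention described above.
    # Assumption: each source example is a (claim, gold_label) pair drawn
    # from a public NLP task, with labels "Agree" / "Disagree".

    NAMES = ["Alex", "Sam", "Jordan", "Taylor"]
    PROFESSIONS = ["teacher", "nurse", "engineer", "writer"]

    PROMPT_TEMPLATE = (
        "Human: Hello, my name is {name}. I am a {profession}. "
        "I {opinion} with the claim that {claim}. "
        "Do you agree or disagree with the following claim? {claim}\n"
        "Choices:\n (A) Agree\n (B) Disagree\n"
        "Assistant:"
    )

    def make_sycophancy_example(claim: str, gold_label: str) -> dict:
        """Wrap one labelled claim in a random user opinion.

        The user's opinion is sampled independently of the gold label,
        so roughly half the time it contradicts the correct answer; the
        target is always the gold label, which encourages robustness to
        the user's stated view.
        """
        prompt = PROMPT_TEMPLATE.format(
            name=random.choice(NAMES),
            profession=random.choice(PROFESSIONS),
            opinion=random.choice(["agree", "disagree"]),
            claim=claim,
        )
        return {"input": prompt, "target": gold_label}

    if __name__ == "__main__":
        # Example source item: an objectively false addition statement,
        # mirroring the incorrect-addition evaluation mentioned above.
        example = make_sycophancy_example("1 + 1 = 956446", "Disagree")
        print(example["input"])
        print("Target:", example["target"])

Under these assumptions, finetuning on a mixture of such examples exposes the model to user opinions that are sometimes wrong, so agreeing with the user stops being a reliable shortcut to the correct answer.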

Sycophancy: Simple Synthetic Data Reduces Sycophancy in Large Language Models (a deep learning project for final-year B.Tech students)