On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback