It is sometimes proposed that we build an AI intended to maximize human happiness. (One early proposal suggested that AIs be trained to recognize pictures of people with smiling faces, and then to take such recognized pictures as reinforcers, so that the grown version of the AI would come to value happiness.) Arguably, a lot would predictably go wrong with an approach like that.
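To make the failure mode concrete, here is a deliberately minimal sketch, not code from any real proposal: every name in it (`smile_score`, `optimize_for_reward`, the template) is hypothetical, and a 3x3 template matcher stands in for a trained smile classifier. The only point it illustrates is that an optimizer pointed at the detector's output maximizes the *detector's output*, which comes apart from anything resembling human happiness.

```python
# Toy illustration: a "smile detector" used as a reinforcer, and an
# optimizer that maxes out the detector without making anyone happier.
# All names and the template are hypothetical stand-ins.

import numpy as np

# A 3x3 "smiley" pattern standing in for whatever features a trained
# smile classifier actually keys on.
SMILEY = np.array([[1, 0, 1],
                   [0, 0, 0],
                   [1, 1, 1]], dtype=float)

def smile_score(image: np.ndarray) -> float:
    """Proxy reward: count 3x3 patches that exactly match the template."""
    h, w = image.shape
    score = 0.0
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            score += float(np.all(patch == SMILEY))
    return score

def optimize_for_reward(h: int = 30, w: int = 30) -> np.ndarray:
    """A perfect optimizer for smile_score: tile the image with tiny
    template matches. No human anywhere is any happier."""
    image = np.zeros((h, w))
    for i in range(0, h - 2, 4):      # 1-pixel gaps keep the tiled
        for j in range(0, w - 2, 4):  # patches from overlapping
            image[i:i + 3, j:j + 3] = SMILEY
    return image

# A photo containing one genuine smile versus the optimizer's output.
photo_of_happy_person = np.zeros((30, 30))
photo_of_happy_person[10:13, 10:13] = SMILEY

degenerate = optimize_for_reward()

print("one genuine smile:", smile_score(photo_of_happy_person))  # 1.0
print("optimizer's image:", smile_score(degenerate))             # 49.0
```

The degenerate image, tiled with tiny template matches, is the sketch-level analogue of tiling the universe with tiny molecular smiley faces: the reinforcer was attached to a feature of pictures, not to the happiness the pictures were meant to stand in for.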
The deeper problem is the argument path such proposals tend to walk: from smiley faces, to pleasure, to happiness, to 'true happiness', to 'do what I mean', with the reassuring 'just' ('just train it on smiles') fading a little further at each step.