• You need to be able to identify the goal itself, to the AGI, such that the AGI is then oriented on achieving that goal. This isn't trivial for numerous reasons. If you put a picture of a pink-painted car in front of a webcam and say "do this", all the AI has is the sensory pixel-field from the webcam. Should it be trying to achieve more pink pixels in future webcam sensory data? Should it be trying to make the programmer show it more pictures? Should it be trying to make people take pictures of cars? Assuming you can in fact identify the concept that singles out the futures to achieve, is the rest of the AI hooked up in such a way as to optimize that concept?
I was talking to Chelsea Finn about IRL (inverse reinforcement learning) a few weeks ago, and she said that they had encountered a situation where they:
- Demonstrated the intended behavior (I think it was putting a block into a slot)
- Trained the robot to recognize success
- Trained the robot to reproduce that behavior, i.e. to do something it would recognize as success
At which point the robot positioned the block so that it looked (to its cameras) like the block was in the slot, while in fact it was far away.
I think they then added joint position information so that the AI could more reliably estimate whether the block was in the slot, and that fixed the problem.
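To make the fix concrete, here is a minimal sketch of what a success classifier conditioned on both pixels and proprioception might look like. This is not Finn's actual code; the architecture, names, and input shapes are all illustrative assumptions. The point is just that the success signal depends on joint positions as well as the camera image, so a policy can't score "success" purely by arranging what the camera sees.

```python
import torch
import torch.nn as nn

class SuccessClassifier(nn.Module):
    """Hypothetical success detector over camera pixels + joint positions.

    Because the classifier also sees proprioceptive state, fooling the
    camera alone is no longer sufficient to register as success.
    """

    def __init__(self, num_joints=7):
        super().__init__()
        # Small CNN over the camera frame.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP over the joint angles.
        self.proprio = nn.Sequential(
            nn.Linear(num_joints, 32), nn.ReLU(),
        )
        # Fused head outputs P(success).
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, joints):
        z = torch.cat([self.vision(image), self.proprio(joints)], dim=-1)
        return torch.sigmoid(self.head(z))

# Usage: score a batch of (camera frame, joint-angle) observations.
clf = SuccessClassifier(num_joints=7)
images = torch.randn(4, 3, 64, 64)   # camera frames
joints = torch.randn(4, 7)           # joint positions
p_success = clf(images, joints)      # shape (4, 1), values in [0, 1]
```

A camera-only version of this classifier is exactly what the robot exploited: any block placement that produces the right pixels gets rewarded.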
Of course, this particular problem can be solved in many ways, and this instance doesn't capture the full difficulty, but I think it's a nice illustration anyway.