To the extent that we can set up all of these problems as learning problems, it just seems like an empirical question which ones will be hard, and how hard they will be. I think that you are wrong about this empirical question, and you think I am wrong, but perhaps we can agree that it is an empirical question?
The main thing I'd be nervous about is whether the difference in our opinions will be testable before the mission-critical stage. Like, maybe simple learning systems exhibit pathologies and you're like "Oh, that'll be fixed with sufficient predictive power" and I say "Even if you're right, I'm not sure the world doesn't end before then." Or conversely, maybe toy models seem to learn the concept perfectly and I'm like "That's because you're using a test set made up of the same problems as the training set" and you're like "That's a pretty good model for how I think superhuman intelligence would also go, because it would be able to generalize across the greater differences" and I'm like "But you're not testing the mission-critical part of the assumption."
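To make that worry concrete, here's a minimal sketch (my own illustration, assuming scikit-learn and a synthetic dataset, not anything from the actual experiments we're discussing) of how "testing" on the same problems the system was trained on can make it look like the concept was learned perfectly, while a genuinely held-out set tells a different story:

```python
# Hypothetical illustration: scoring on the training set vs. a held-out set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for the "concept" being learned.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# "Test" set identical to the training set: looks like perfect learning.
print("accuracy on training problems:", model.score(X_train, y_train))
# Genuinely new problems: the picture is noticeably less rosy.
print("accuracy on held-out problems:", model.score(X_holdout, y_holdout))
```

The gap between the two numbers is exactly the part of the assumption that an identical train/test setup never exercises.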
The historical track record for hand-coding versus learning is not good. For example, even probabilistic reasoning now seems like something our agents should learn on their own (to the extent that probability is relevant to ML at all, it is increasingly as a tool for analyzing ML systems rather than as a hard-coded feature of their reasoning).
We might have an empirical disagreement about the extent to which theory plays a role in ML practice, but I suspect we also have a policy disagreement about how important transparency is, in practice, to success - i.e., how likely we are to die like squirrels if we try to use a system whose desired/required dynamics we don't understand on an abstract level.
So it seems natural to first make sure that everything can be attacked as a learning problem, before trying to solve a bunch of particular problems by hand.
I'm not against trying both approaches in parallel.