You could argue that a superintelligence would be efficient at all tasks — in the sense that humans cannot predict its residual errors — as follows:
Assume that:
1. An AI will not knowingly be biased: if it knew it had a bias, it would correct it.
2. Predicting the residual error of one's own predictions is itself a task, and a superintelligence is definitionally better at it than humans are.
Then: superintelligences are efficient at all tasks.
The proof is by contradiction. Suppose a superintelligence has some residual error on some task that humans can predict. Then, by (2), the superintelligence can also predict that residual error. But a predictable residual error is a known bias, and by (1) the superintelligence would have corrected it — contradicting the assumption that the error persists. Hence no human-predictable residual error exists, and superintelligences are efficient at all tasks.
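The shape of this argument can be sketched formally. The following is a minimal Lean sketch under assumed predicates: `HumanPredicts t` (humans can predict the superintelligence's residual error on task `t`), `SIKnows t` (the superintelligence knows its residual error on `t`), and `Biased t` (an uncorrected bias remains on `t`) — all names are illustrative, not from the original.

```lean
variable (Task : Type)
variable (HumanPredicts SIKnows Biased : Task → Prop)

-- h1 encodes assumption (1): no known, uncorrected bias remains.
-- h2 encodes assumption (2): whatever humans can predict about the
--    superintelligence's errors, the superintelligence can predict too.
-- h3 encodes the implicit definition: a human-predictable residual
--    error just is a bias.
example
    (h1 : ∀ t, ¬ (SIKnows t ∧ Biased t))
    (h2 : ∀ t, HumanPredicts t → SIKnows t)
    (h3 : ∀ t, HumanPredicts t → Biased t) :
    ∀ t, ¬ HumanPredicts t := by
  intro t hp
  exact h1 t ⟨h2 t hp, h3 t hp⟩
```

The proof is exactly the contradiction in the text: from a human-predictable error we derive both that the superintelligence knows the bias (h2) and that the bias persists (h3), which h1 rules out.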