You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford. An air conditioner is an artifact selected from possibility space such that the future consequence of running the air conditioner will be cold air. A butterfly, by virtue of its DNA having been repeatedly selected for having previously brought about the past consequence of replication, will, under stable environmental conditions, bring about the future consequence of replication. A rat that has previously learned a maze is executing a policy that previously had the consequence of reaching the reward pellets at the end: a series of turns, or a behavioral rule, that was neurally reinforced in virtue of the conditions to which it led the last time it was executed. This policy will, given a stable maze, have the same consequence next time.

Faced with a superior chessplayer, we enter a state of Vingean uncertainty in which we are more sure about the final consequence of the chessplayer's moves - that it wins the game - than we are about any particular move it will make. To put it another way, the main abstract fact we know about the chessplayer's next move is that its consequence will be winning. As a chessplayer becomes strongly superhuman, its play becomes instrumentally efficient in the sense that no abstract description of its moves takes precedence over their consequence. A weak computer chessplayer might be described in terms like "it likes to move its pawns" or "it tries to grab control of the center," but as the play improves past the human level, we can no longer detect any divergence from "it makes the moves that will win the game later" that we can describe in terms like "it tries to control the center (whether or not that's really the winning move)."
In other words, as a chessplayer becomes more powerful, we stop being able to give any description of its moves that takes priority over our belief that the moves will have a certain consequence.
I'm not quite sure of this.
Suppose there are two different superhuman chess AIs with different styles -- call them UberTal %note: Widely regarded as a creative genius and one of the best attacking players of all time, Tal played in a daring, combinational style. https://en.wikipedia.org/wiki/Mikhail_Tal% and UberPetrosian %note: Nicknamed "Iron Tigran" for his almost impenetrable defensive playing style, which emphasized safety above all else. https://en.wikipedia.org/wiki/Tigran_Petrosian% -- such that a human chess (and AI) expert who watched a match between the two could reliably guess which was which, without being told which AI was playing white and which black (and of course without being able to beat either one).
Would such a situation contradict the claim you are making here?
Or would you argue that we might see such a situation with only weakly superhuman AIs, but that the further the AIs advanced beyond human abilities, the less we'd be able to detect a characteristic style?