An A.I. would have an answer to the question, “What do you want to do today?”
It seems a child (that can speak somewhat coherently) can answer this question. We might quibble about whether the answer makes sense, is feasible, etc. We might also object that an A.I. could be programmed to answer this question. To the latter objection, it seems relevant to ask how much metaphorical programming the child (or any human deemed ‘intelligent’) has received.
T.V. shows, movies, adverts, talk with playmates, etc., have an effect on the child’s expressed preference for what she wants to do that day. She might simply mimic her friend, sibling, or parent, a character in a story (book, T.V. show, etc.), or something she heard someone or something else declare. Would we consider the child intelligent no matter what she says? Would silence (no response) have a meaning as well?
The objection that the A.I. could be programmed to answer the question certainly seems relevant. It seems to imply, however, that programming makes the programmed less (or even not) intelligent. Judging machine/robotic intelligence against human intelligence has at least two problems. First, the machine is not human, so why would it need to mimic human ability to be intelligent? Second, humans have varying levels of intelligence, so what is the minimum level of intelligence a machine would need to have? How could we explicitly fix that minimum, given the amount of variation displayed by humans (not to mention other animals)?
I do not claim that this test is perfect, but I think it points to a perspective that does not require an A.I. to mimic human intelligence in all respects. Much remains to be fleshed out, and many objections to the test remain beyond those I have considered, but it serves as a thought experiment that challenges how we might usually evaluate artificial intelligence. I hope it sparks better questions, and better tests, than the one I mention.