Are humans the only creatures with intelligence? Get your machines ready to disprove the idea—Nuance Communications, Inc., a provider of voice and language solutions, has announced an upcoming annual competition for programs that can solve the Winograd Schema Challenge. The competition will be hosted with CommonsenseReasoning.org, a research group dedicated to furthering and promoting research in commonsense reasoning.
The Winograd Schema Challenge, developed by Hector Levesque of the University of Toronto, is a test designed to judge whether a program has accurately modeled human intelligence. It is something of a successor to the Turing Test, the old standard for measuring artificial intelligence. In the Turing Test, a program's goal was to convince a human judge that he or she was conversing with another human rather than a machine; no program ever truly passed, and the ones that came close relied on deceiving the human judge.
The Winograd Schema Challenge, by contrast, consists of a series of multiple-choice questions whose answers are obvious to a human subject but not necessarily intuitive for a machine without human-like intelligence. A sample question: “The trophy would not fit in the brown suitcase because it was too big. What was too big? Answer 0: the trophy, or Answer 1: the suitcase?” A human can employ spatial understanding to answer correctly, while a machine that simply examines the arrangement of words may answer wrongly. Answering correctly would imply a higher level of artificial intelligence.
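To make the format concrete, here is a minimal sketch, in Python, of how such a question might be represented and scored. The field names and scoring function are illustrative assumptions, not the competition's actual data format:

```python
# Hypothetical representation of a Winograd schema question; the
# structure below is illustrative, not the competition's real format.
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str         # sentence containing an ambiguous pronoun
    question: str         # asks what the pronoun refers to
    answers: tuple        # (answer 0, answer 1)
    correct: int          # index of the correct answer

trophy = WinogradSchema(
    sentence=("The trophy would not fit in the brown suitcase "
              "because it was too big."),
    question="What was too big?",
    answers=("the trophy", "the suitcase"),
    correct=0,  # spatial reasoning: the contained object was too big
)

def score(guesses, schemas):
    """Return the fraction of schemas answered correctly."""
    right = sum(g == s.correct for g, s in zip(guesses, schemas))
    return right / len(schemas)

# A system that relies only on word order (say, always picking the
# noun phrase nearest the pronoun, answer 1 here) gets this one wrong:
print(score([1], [trophy]))  # 0.0
print(score([0], [trophy]))  # 1.0
```

The point of the schema design is that surface statistics (word proximity, frequency) do not distinguish the two candidate answers; only commonsense knowledge about objects fitting inside containers does.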
Charles Ortiz, senior principal manager of AI and senior research scientist of the Natural Language and Artificial Intelligence laboratory at Nuance Communications, said in a statement, “Competitions such as the Winograd Schema Challenge can help guide more systematic research efforts that will, in the process, allow us to realize new systems that push the boundaries of current AI capabilities and lead to smarter personal assistants and intelligent systems.”
The first submission deadline is October 1, 2015. The grand prize winner will receive $25,000; in the case of multiple winners, the judges will decide based on further testing and examination of traces of program execution. If no program meets the threshold for the grand prize, a first prize of $3,000 and a second prize of $2,000 will go to the two highest-scoring entries.
Edited by Maurice Nagle