Vulcan’s “Digital Aristotle” Moves Toward Vision of Computers that Answer Scientific Questions

The days when any one person could be a credible expert on all the world’s scientific knowledge, capable of distilling complex data and concepts into a clear, understandable answer, are said to have ended with Aristotle. Now Paul Allen, the billionaire co-founder of Microsoft, is pursuing a vision of a computer program that harnesses scientific knowledge from the modern world and reasons through it to create clear answers to questions, sort of like a “Digital Aristotle.”

The concept first got off the ground in 2004 when Vulcan, the company that oversees Allen’s business and charitable projects, sponsored three competing teams to perform some initial proof-of-concept experiments. Earlier this week, I met with Mark Greaves, Vulcan’s director of knowledge systems, to learn more about how this program, known as “Project Halo,” is progressing.

“It’s going to change the world,” Greaves says. “To be able to ask questions and get answers that are easy to understand—it’s fundamental to human interaction, and it’s so hard to do now on the Internet.”

Greaves, 43, came to this project in 2005 from the Defense Advanced Research Projects Agency. He has a doctorate in philosophy from Stanford University, with a special interest in logic and how meaning is derived from sentences. He also has a master’s in computer science from UCLA. His new task on Project Halo has some similarities to other work in semantic Web search and artificial intelligence, and it is trying to solve a very difficult technical problem.

As Greaves tells the story, this project has come a long way since its early days. Back in 2004, Vulcan’s contractors took 70 pages of Advanced Placement chemistry material, loaded it into the computer, and then tested it to see how well it could answer AP-level chemistry questions. There were two big shortcomings at the time: the process cost about $10,000 a page to input the information, and the knowledge engineers didn’t have enough expertise in chemistry to structure chemistry knowledge appropriately.

Now, Vulcan’s contractors have brought the cost down to $100 a page, Greaves says. In the last major experiment, completed in 2006, the software system was able to correctly answer about 40 percent of the questions on AP exams for chemistry, biology, and physics. The knowledge bases were assembled by students with expertise in a given field of science, not highly trained knowledge engineers.

“We lowered the cost by two orders of magnitude, with an increase in performance,” Greaves says.

There’s still plenty of work to do to make this software program, now called Aura, more workable. It is unable to ace the test for a variety of reasons, Greaves says. Sometimes its knowledge base lacks the necessary specialized knowledge, sometimes its reasoning algorithm is too weak, sometimes there’s a bug in the code, and sometimes the user does a poor job of translating the natural-language form of the question into the variety of English the machine can understand.

He gave the example of a question phrased like “an alien drops a rock from the top of a 100-foot-high wall. How long does it take to hit the ground?” It’s a standard physics question, but the machine can’t answer it directly, because extraneous detail, like the fact that an alien is involved, will throw it off course. No current system can reliably understand unrestricted English, the way questions are usually phrased in real life. “Our system can’t answer questions like how a teacher asks it. There’s too many semantics involved. It’s beyond the state of the art,” Greaves says.

So the knowledge engineers at Vulcan’s contractors—which include Menlo Park, CA-based SRI International, Boeing’s Phantom Works research center in Bellevue, WA, and Germany-based Ontoprise—train graduate students to phrase the question to the computer in a way it can understand, Greaves says. Sometimes that means being more precise about things that were implicit in the original question, or stripping away extraneous detail.

The new goal is to fine-tune the processes so the machine can get 75 percent of the answers right on the AP exams. There were 12 students involved in the first experiment, and now Vulcan’s contractors will expand the pool to 20 or 30 with the hope of getting a more statistically robust result this time, Greaves says.

Project Halo also includes an element of wikis, which harness the power of crowds of Internet users to build up bases of scientific knowledge that the machine can reason through, Greaves says. This has the advantage of tapping the expertise of people around the world, who can assemble almanac-style information that computers can later access and use to answer questions. (Consider that Seattle has about 594,000 residents and roughly 7,085 people per square mile. Enter enough of these factoids, and eventually the machine could answer a question about which cities are most dense, rather than simply refer to an online document that has some data on the subject.)
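The factoid idea can be illustrated with a toy sketch (this is not Vulcan’s actual system): once figures like population density are stored as structured data, a “which cities are most dense?” question can be answered by computation rather than by pointing at a document. Seattle’s numbers below come from the article; the other cities and figures are invented for illustration.

```python
# Toy illustration of reasoning over stored "factoids" instead of documents.
# Each entry: city -> (population, population density per square mile).
factoids = {
    "Seattle": (594_000, 7_085),      # figures cited in the article
    "Springfield": (155_000, 2_300),  # hypothetical example city
    "Rivertown": (88_000, 9_400),     # hypothetical example city
}

def densest_cities(facts, top_n=2):
    """Answer 'which cities are most dense?' directly from structured facts."""
    # Sort cities by density (the second element of each value), highest first.
    ranked = sorted(facts.items(), key=lambda kv: kv[1][1], reverse=True)
    return [city for city, _ in ranked[:top_n]]

print(densest_cities(factoids))  # ranks cities by stored density
```

The point of the sketch is the design choice: because the facts are machine-readable, the answer is computed on demand, not retrieved from any one page.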

Vulcan doesn’t say publicly how much it has invested in the project, or exactly what commercial applications it may have in mind. Greaves did suggest, however, that it might be used for educating students, test preparation, or as a research assistant to scientists. He wouldn’t go on record with a prediction on how long it will take to make this software ready for prime-time usage. But he didn’t sound shy about whether the vision of a computer that answers questions is achievable. “We’ll do what it takes to get the Digital Aristotle out there. The goal is to have an impact on the world,” Greaves says.
