IDAvatars, Working With Watson, Seeks to Answer Patients’ Questions

Watson, meet Sophie.

In September, IBM (NYSE: IBM) announced that 100 of the development partners building on its Watson supercomputer had brought a product or service to market. Among them was iDAvatars, a Mequon, WI-based startup that creates avatars like Sophie, a virtual medical assistant who becomes more familiar with a user with each interaction.

Norrie Daroga, who founded iDAvatars in 2013 and remains its CEO, says nearly all of the 100 IBM partners are focused on “content.” By that, he means they first send Watson information to leverage the system’s ability to make sense of huge datasets, and then receive Watson-provided insights, typically in text or chart form, which are fed back into the partner’s own app or service.

By contrast, iDAvatars—originally known as Geppetto Avatars—is focused on user experience, Daroga says.

“Watson has a tremendous amount of content, provided by a lot of people,” he says. “Our product sits on top of that. When content is retrieved, our technology allows users to interact and ask questions. That’s of considerable interest to organizations that have this content capability.”

Sophie can ask and answer questions, record responses, and pass along information to other systems. For example, patients who receive care from the U.S. Department of Veterans Affairs (VA), one of iDAvatars’ customers, can chat with her online about diabetes management or symptoms of post-traumatic stress disorder. In August, the company was awarded an $800,000 subcontract to design “intelligent digital avatars” for the VA.

Daroga says the VA is an especially good candidate for online patient engagement software, because many veterans live dozens or even hundreds of miles from the health system’s nearest location.

VA patients will be able to interact with two different animated characters, says Daroga. The first is a “general knowledge avatar” that can provide information about scheduling or hospital and clinic locations. The second is Sophie, clad in a white lab coat with a stethoscope draped around her neck. She’s designed to field questions about specific diseases and conditions, at times drawing on Watson to find answers.

Daroga says the VA gave iDAvatars about 2,000 answers to inquiries about health problems, which required Sophie to understand as many as 30,000 different questions, any of which should trigger one of those answers. “There might be 14 ways to ask a question, but they should all receive the same answer,” he says.
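That many-to-one mapping is, at its core, an intent-classification problem. A minimal sketch of the data shape in Python follows; the question variants, answer IDs, and string-matching logic are hypothetical stand-ins, since neither the VA’s actual data nor iDAvatars’ implementation is public.

    # Map many phrasings of the same question to one canonical answer.
    # All strings here are invented examples, not VA data.
    ANSWERS = {
        "diabetes_diet": "A balanced diet for diabetes management includes...",
        "ptsd_symptoms": "Common symptoms of post-traumatic stress disorder include...",
    }

    # Each canonical answer is triggered by many question variants.
    VARIANTS = {
        "diabetes_diet": [
            "what should i eat if i have diabetes",
            "diabetic diet advice",
            "foods to avoid with diabetes",
        ],
        "ptsd_symptoms": [
            "what are signs of ptsd",
            "how do i know if i have ptsd",
        ],
    }

    def answer(question):
        """Return the canonical answer whose variants best match the question."""
        q = question.lower().strip()
        for answer_id, variants in VARIANTS.items():
            if any(v in q or q in v for v in variants):
                return ANSWERS[answer_id]
        return "I'm not sure; let me connect you with a specialist."

In production, a statistical classifier would replace the literal string matching, but the shape of the problem stays the same: roughly 30,000 phrasings keyed to about 2,000 answers.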

The avatars can track not only what users say, but also how they say it. Daroga says iDAvatars connects to Watson using application programming interfaces, one of which can parse the tone of a verbal exchange. In addition, Sophie uses the camera on a user’s computer, phone, or tablet to assess his or her mood.
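Daroga doesn’t say which API handles the tone analysis, but IBM has offered a Tone Analyzer service with a REST interface, and a call to such a service might look roughly like the sketch below. The endpoint URL, version date, and credentials are placeholders, not iDAvatars’ actual integration.

    import requests

    # Placeholder endpoint for a Watson-style tone-analysis service.
    TONE_URL = "https://api.example.com/tone-analyzer/v3/tone"

    def analyze_tone(text, api_key):
        """POST a transcript snippet and return the service's tone scores."""
        response = requests.post(
            TONE_URL,
            params={"version": "2017-09-21"},  # placeholder version date
            json={"text": text},
            auth=("apikey", api_key),
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # e.g., scores for joy, sadness, anger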

Also under the hood of the software are artificial intelligence, data analytics, and natural language processing capabilities, Daroga says. “It’s at least six technologies that you have to have experts in.”

The company built its own voice-to-text module from scratch, despite IBM’s having developed similar capabilities in Watson. The supercomputer’s ability to listen, process, and quickly respond was on display when it took on human opponents on the game show “Jeopardy!”

Part of the reason iDAvatars uses proprietary technologies in tandem with ones developed by IBM is that Watson is continually becoming more knowledgeable and powerful. Daroga says when his team started to map out how to integrate with the supercomputer two years ago, IBM only offered a single application programming interface to its partners. Now more than 30 are available, he says.

Another improvement Big Blue has made is giving sets of questions a more robust back end through a “natural language classifier” tagging technique. As Daroga describes it, users rate the answers they receive from Watson, and the system factors in that feedback when someone asks a similar question in the future.
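In effect, that feedback loop re-weights candidate answers by how users have rated them before. The scoring below is a toy illustration of the idea, not how Watson actually implements it:

    from collections import defaultdict

    # answer_id -> net user feedback (+1 for helpful, -1 for unhelpful)
    ratings = defaultdict(int)

    def record_feedback(answer_id, helpful):
        ratings[answer_id] += 1 if helpful else -1

    def pick_answer(candidates):
        """Pick an answer by combining classifier confidence with feedback.

        candidates maps answer_id -> classifier confidence in [0, 1].
        The 0.05 feedback weight is arbitrary, chosen only for illustration.
        """
        return max(candidates, key=lambda aid: candidates[aid] + 0.05 * ratings[aid])

    # After a few helpful ratings, pick_answer starts favoring that answer
    # whenever the classifier's raw confidences are close.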

“Over time, Watson picks up nuances,” Daroga says. “It learns that the question really being asked is a little bit different from what we think the person is asking us.”

The company’s other customers include German pharma Bayer, and Intel (NASDAQ: INTC), which uses motion-sensing technology developed by iDAvatars in its RealSense 3D Camera. Daroga says PC manufacturers like Lenovo, Dell, and HP (NYSE: HPQ) are working to integrate the Intel cameras into their laptops, one of which he used to run this reporter through a quick exercise with Sophie.

She instructed me to raise my hand and switch between splaying my fingers and clenching them into a fist as many times as possible in 15 seconds. The camera, which can also track range of motion, counted my reps correctly and the results suggested I’m slightly faster with my dominant right hand than I am with my left. Daroga says patients who have suffered nerve damage might supplement their physical therapy regimens by interacting with Sophie. One advantage of high-tech rehabilitation is patients can monitor their progress without having to log anything themselves, he says.
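The rep counting itself is straightforward once the camera has classified the hand as open or clenched in each frame. Here is a minimal sketch, with the per-frame classification stubbed out since Intel’s RealSense SDK handles that part:

    def count_reps(hand_states):
        """Count open-to-fist transitions in a stream of per-frame hand states.

        hand_states yields "open" or "fist" for each camera frame;
        one rep is a transition from open to fist.
        """
        reps = 0
        previous = None
        for state in hand_states:
            if previous == "open" and state == "fist":
                reps += 1
            previous = state
        return reps

    # Simulated frames from a 15-second exercise: three complete reps.
    frames = ["open", "open", "fist", "open", "fist", "fist", "open", "fist"]
    print(count_reps(frames))  # -> 3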

IDAvatars, which Daroga says has about 20 employees worldwide, has raised a total of about $3.5 million from investors. The company had sought to raise $5 million or more in a Series A round, but it has started to make enough sales that it’s now bootstrapping. Daroga says he expects the company to be profitable in 2015 on projected revenues of $1 million.

For now, the company is focused on signing up additional customers, including South Korean electronics giant Samsung. Daroga says he’s met with executives from the company to explore incorporating iDAvatars-created characters into Samsung’s S Health wellness app.

While there appears to be significant growth potential, developing avatars who speak other languages could be challenging. The voice-to-text technology that iDAvatars developed for its characters cannot be easily adapted to other languages, Daroga says, though he notes that English is the predominant language for medical terms. He says developers could quickly make headway on simple questions and answers, and other low-hanging fruit. However, a significant investment in translation technology would be required to make avatars designed by his company—whose motto is “The Art of Empathy”—multilingual.

“It’s a $10 million or $100 million project somebody like IBM would tackle,” he says. “It gets difficult if you want to have a conversation about feelings.”
