'Hearables' could diagnose disease, if we let them
Technology that is always listening might sound scary, but it could save lives.
James Trew, @itstrew
03.14.18 in Wearables
Poppy Crum is the Chief Scientist at Dolby Laboratories, and no stranger to the conference circuit. Her talk at this year's SXSW -- "A Hearable Future: Sound & Sensory Interface" -- promised to dive into the hidden possibilities that sound and the human ear have to offer technology. Unfortunately, and perhaps ironically, Crum's talk was plagued by audio problems throughout (through no fault of her own).
"The Ear is this incredible hub of insight between our internal state and the external world," Crum told the audience, before having to ask the technician to reduce the reverb on her microphone. Only moments earlier her laptop (and therefore presentation) had died thanks to a technician plugging it into the wrong outlet.
Crum handled the inconvenience deftly, taking a poignant question from an audience member asking if technology could offer her hope -- she was going to be fully deaf in 10 years. Crum said it could: "We want to de-stigmatize wearing hearables, we want that," before explaining her goal of helping to democratize the hearing-technology space, since six companies currently own 98 percent of the market -- "that's not okay." The technical glitches, and Crum's elegant handling of them, had the audience cheering in support.
"The ear directs the eye" Crum added, talking about how situational awareness is often lead by our hearing, and not by our sight. Footsteps coming up behind us, or our ability to place a sound in 3D space long before we see what's causing it.
Current audio wearables, or, if we must, "hearables," are starting to take advantage of more than just delivering enhanced sound. Companies like Here, Nura Sound and Bragi (among others) have introduced sound augmentation with varied success. But most are still teetering on the edge of audio assistance -- reducing background noise, or adding to the sound we already hear. Crum thinks we can do much more, and with technology that already exists.
But that advantage comes at a price. "The power of the hearable is only realized if we let the device have access and process our personal data," Crum told Engadget after her talk. How tech firms have handled, or protected, our personal data hasn't exactly been a success so far, so we're forced to make the eternal choice between convenience/progress and privacy. "I think we know what to do to protect that data," Crum added, "but it's what we do with the understanding of that data [that's important]," hinting the payoff could be worth it, but it's a long road ahead.
Let's be clear, we're not just talking about better voice recognition, or knowing when to lower the music in our cars to calm our stressful drive. Using just our voices, scientists can predict the onset of multiple sclerosis, diabetes (through physiological changes that affect your vocal tract) and even psychosis (through vocal patterns).
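To give a feel for the kind of signal analysis such studies rely on -- this is a toy sketch, not the actual pipeline from any of the research mentioned, and the function name and frequency bounds are illustrative -- the starting point is usually extracting acoustic features such as the voice's fundamental frequency (pitch) and its stability over time, which then feed a statistical model. A minimal autocorrelation-based pitch estimate:

```python
import numpy as np

def fundamental_frequency(signal, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate F0 with a simple autocorrelation peak search."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo = int(sample_rate / fmax)   # shortest plausible pitch period
    hi = int(sample_rate / fmin)   # longest plausible pitch period
    period = lo + np.argmax(corr[lo:hi])
    return sample_rate / period

# Half a second of a synthetic 120 Hz "voice" tone stands in for real speech.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
voiced = np.sin(2 * np.pi * 120 * t)

print(round(fundamental_frequency(voiced, sr), 1))  # close to 120 Hz
```

Real diagnostic work tracks how features like this (plus jitter, shimmer, formant positions, and articulation rate) drift over months, not a single snapshot.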
Sound far-fetched? Like sci-fi?
But do you want to let Amazon, Google or Apple be the ones to diagnose you? And have that data in its coffers? My guess is no.
"The ear is a very special place where we can gain some of the richest insight into our bodies and the external world."
Crum's definition of a "hearable," and the insights such devices can offer, goes beyond wireless earbuds though. "Hearables are devices that listen; they don't even need a transducer, they could just listen to your body." This includes one key area of technology slowly but surely invading our living space: virtual assistants.
Right here, we have a technology that, if we let it, could listen to our daily lives and offer up all sorts of insight: health issues, lifestyle assistance, entertainment recommendations (and enhancement) and more. But letting someone... some thing listen in on our daily lives is an adjustment that will take a while to earn the trust it deserves. "They might just know more about us than we know," Crum reminds us. And that's both horrifying and exciting at the same time.
Commentary: This is a rich audio company with no vested reason to say what it has regarding the ear. They have the vision, and it is a source separate from QUIK saying the same things or even more... was a LOT of fun to read. How the heck could they diagnose disease states like DM from voice changes... wow. This Dolby talk suggests that voice alone will not be enough -- that means at least 2 of the QUIK cores. BF says when 2 of the cores are needed, they win about 75% of the time. We will have great things to read...
will track along.
Up from the archives
Fast Forward: Q&A With Dolby's Chief Scientist, Poppy Crum
Poppy Crum, Dolby's chief scientist and an adjunct professor at Stanford University, talks about the evolving hearable market, augmented reality, and multisensory virtual reality.
By Dan Costa
March 23, 2017 11:17AM EST
At SXSW this year, the conversations extended beyond headsets. To be truly immersive, experiences need to incorporate all five senses: sight, sound, touch, smell, and even taste.
Few companies know more about technology and human senses than Dolby, which has pioneered everything from surround sound to HDR imaging. I was fortunate enough to sit down with Dolby Laboratories' Chief Scientist, Poppy Crum, at the show.
Crum is also an adjunct professor at Stanford University in the Center for Computer Research in Music and Acoustics and the Program in Symbolic Systems. Crum was at SXSW as part of the IEEE's Tech for Humanity Series. She understands sound and a whole lot more.
Costa: Poppy, thanks so much for joining us today.
Crum: Thank you for having me. It's great to be here.
So we're going to talk about hearables, we're going to talk about augmented reality, we're going to talk about maybe a little virtual reality, maybe we'll talk about those painless migraines that the two of us experience from time to time. First of all, your role at Dolby. What does a work day look like for you? What do you do when you get to the office?
We have a large team of computational neuroscientists and people who are experts in sensory perception. If you look back at the history of Dolby, even 50 years, at the core of the company there has always been an understanding of human experience. I think that has helped differentiate how we think about building technology.
So on a daily basis, the people on my team and the people I work with, we look across technologies. We're not just sound anymore, it's really about a holistic sense. We have labs, and there are a lot of experiments that go on throughout the day.
Our new building has up to 100 labs, including some amazing biophysical labs. My background is as a neurophysiologist -- the same as a lot of people on our teams. There's human physiology work happening on a daily basis to think about new technologies, and there's some very seminal work on how we experience information that's multisensory: really looking toward how we are going to consume content in the future that is so rich, and what that's going to mean for how it affects our bodies, how it affects how we engage with others and our senses.
One of the things that you've talked about at the show and that we've covered a lot more on PCMag is this hearables segment. Hearables is not a word that I think a lot of our audience is familiar with. When you hear the word hearables, what does that encompass to you? What does it mean?
Right now, I think it's a term that's still being defined. I like to take it as a very large subset of products and possibilities. It's a wireless device or, in some cases, even wired -- because I will call the Amazon Echo a hearable, and that's plugged in. But it's a device that has microphones or sensors, and it's capturing data from the environment, using that data in some way to enhance your experience. There are a lot of companies right now thinking in the wearable space, a device that's wireless and worn on your body, but there are also static hearables if we look at Google Home and Amazon Echo, and those are transformational.
A hearable doesn't necessarily mean that it's about just enhancing the sounds around you. It might be taking information about the sounds around you and using that to somehow give you an enhanced experience in the world. You could think about it's capturing ... the ear, it turns out, is a great place to pick up biophysical information so that you can capture a lot of information there. You can imagine, obviously, analytics capturing information just about the sounds around you, your conversations, using that as a way to enhance your day or optimize you.
There are also a lot of concerns with this. One thing worth calling out -- and I think it's what's going to transform and help define this space -- is changes in regulation.
Right now, hearing aids are hearables. Hearing aids are an augmented reality device, but you have this class of consumer devices that have the capability to help mitigate hearing loss, the capability to be an augmented reality device for someone with normal hearing, even the ability to be a gaming device. You're getting a crossover of these fields: the medical device is falling into this larger class, and you're going to have a consumer device class that is clearly crossing these boundaries and doing a lot of similar processing for us.
In terms of hearing aids, people think of hearing aids as once you start to lose your hearing, you can then get a hearing aid that will then give you some of your hearing back, but there are lots of interesting things that could happen if people started augmenting their hearing with ... they have normal hearing, but they want to have something more than normal hearing.
Absolutely. I'm always a big believer in not creating this arbitrary boundary where we say, "Now I have hearing loss." Our hearing degrades because of many elements in the world and sound -- even aspirin is ototoxic, which you have to be-
Is that right?
Absolutely. With the combination of elements you put in, our hearing starts degrading in our early 20s, and probably even earlier with some of the loud sounds that people listen to.
Especially South by Southwest.
Yes. Hearables have so many different capabilities. Whether it's having control over streaming your content directly to your device -- wirelessly -- control of elements, or the spatialization of information. There's been a big push toward taking information and augmenting our visual sense with Google Glass or some of these devices from other companies, but the sonic component of that [information] can be very critical to allowing us to get past what we might call a capacity limitation -- to take information from our world and really represent it as a sound.
It seems to me that the thing that's made people think differently about sound and voice controls and voice interactions, but also privacy, are devices like Google Home and Amazon Echo, which are really the first mainstream voice interfaces that we've had for this digital world and it comes with all these different consequences, which I think we're just starting to sort out. Where do you see that market heading?
That's a great question. I have to say, I think these devices are transformational in so many ways, and I am a strong user and a strong sharer of them, partly to also understand and look at where they're going to evolve to and how I use them in my life on a daily basis. The idea that people are willing to have microphones always on, that's a big leap. What can we do with that?
Right now, voice is a wonderful thing. It's giving people control; it's getting them onboard to have this device as an assistant in their lives. Ten years down the road, I don't want to have to control more things in my life. I want my devices to ... I'm going to trust my data more than I'm going to trust me to know what I need in some cases, and I want the devices to be...
Anticipatory.
Absolutely. I want them to be proactive, capturing a lot of information about the sounds around me -- whether I've been coughing -- and modulating my temperature. Or just making my appointments for me when I need them, or keeping our lives moving forward without my having to control all of the devices.
You could imagine that the Echo could detect whether you've been sniffling. It could detect whether or not you sound like you have a sore throat, or if you've been coughing, and then lets you know that it sounds like you have a cold.
To go ahead and schedule your doctor's appointment for you before you [know you need it]. That may sound a little far-reaching, but at the same time I think we're going to get there. I think the step of having voice control and integrating these devices into our lives will make us comfortable with it.
In the future, the only time we'll be uncomfortable is when things don't work and when it's gone and when it's absent.
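The sniffle-and-cough idea above can be caricatured in a few lines. This is a deliberately crude sketch, with a made-up RMS-energy threshold standing in for the trained acoustic classifier a real assistant would use:

```python
import numpy as np

def count_loud_bursts(audio, sample_rate, frame_ms=50, threshold=0.1):
    """Count onsets of frames whose RMS energy exceeds a (made-up)
    threshold -- a crude stand-in for a real cough classifier."""
    frame = int(sample_rate * frame_ms / 1000)
    n = len(audio) // frame
    rms = np.sqrt(np.mean(audio[:n * frame].reshape(n, frame) ** 2, axis=1))
    loud = rms > threshold
    # A burst starts where a loud frame follows a quiet one.
    return int(np.sum(loud[1:] & ~loud[:-1]))

# Quiet room tone with two short loud bursts standing in for coughs.
rng = np.random.default_rng(0)
sr = 8000
audio = 0.01 * rng.standard_normal(2 * sr)
audio[4000:4400] += 0.5 * rng.standard_normal(400)    # first "cough"
audio[12000:12400] += 0.5 * rng.standard_normal(400)  # second "cough"

print(count_loud_bursts(audio, sr))  # 2
```

A production system would classify the spectral shape of each burst (cough vs. door slam vs. speech) rather than just its loudness, but the always-listening framing is the same.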
I had an experience with an Echo that let me realize how much it had transcended the interactions that I was having and could help people across different demographics. But yes, I think people have thought about these devices a lot for accessibility, which is wonderful. Like the access that it can provide to both very young children or different age ranges or different people that...
People that have disabilities or shortcomings or gaps that could be filled in using technology.
Absolutely. In my case, I had a relative that was in the hospital and he passed away a few weeks ago. I had bought him, right when Echo came out; I bought him an Echo as an accessibility device. Here's the transformation. I took it to the hospital, and devices like this are phenomenal for a hospital setting. The privacy issues do become important to consider, but we use it predominately for music playing in that setting.
But at this point in time, my relative was not very vocal, had not been speaking, and we'd been playing music on the Echo that we thought was what he would want to hear -- Bach and very calming music. And honestly, some of the last words he said, and I'm not kidding, which I remember, were, "Alexa, play Al Green." He wanted to hear Al Green, Sly and The Family Stone, and this device gave him that accessibility. It was empowering, at a time when that was very powerful.
There are a lot of technologies at play there. You've got the fact that Al Green is available and that there's this vast music library that's a voice command away and then you've got the voice command itself, which makes it possible for him to request it personally. So there's a lot going on there.
I think when you bring up the privacy issue, that raises another question: before Alexa is making our doctor's appointments, I suspect that some pharmaceutical company will be offering us a cold medicine tablet, or noticing our allergies are acting up and offering a Zyrtec. I think that's almost an intermediate step we have to get through -- who's going to control all that data that is being captured, that we're giving away in this audio format?
We have to embrace it, but we also have to think about the regulation side and about how we're going to make people comfortable providing even more data than they currently do. I think Amazon right now says, "We are only listening when you say Alexa," but for these devices to do what they really can do, they have to be listening all the time.
There's a big trend right now of insurance companies, whether it's car insurance or health insurance, offering consumers a deal or a way of getting lower rates if they allow their data to be tracked -- if they give away their data. I think it's very powerful. I think it will be part of our future, there's no question about that, but the ramifications of sharing that data haven't even been defined yet, and they're hard to predict. So we really have to think about what that future looks like.
Also, I think there's a permanence issue, where people say, "I don't mind if I give my healthcare company or my insurance provider the number of steps I do every week," but that information doesn't go anywhere: the steps you do this week will be searchable and indexed 30 years from now. That idea of digital permanence, which we live with today, really never happened before in human history. When you add the fact that you could have a microphone on in your kitchen all the time, all that data will not go anywhere. Amazon will always have it.
That future is unwritten, and we don't know the ramifications. We haven't defined these regulations, but those can even change in the future because many things can change.
And here's another thing. Culturally, the EU has tried to enact much more consumer-protective regulatory legislation right now, but it's not clear what it means, because the data exists. We need to ensure interoperability when you're traveling. We need to assure security for small IoT devices. That's a really critical thing, and I think groups like NIST are very active in trying to solve it.
When you look at what Dolby does and all the technologies that they're working on, you start to see common themes, and one of them is that the company is really trying to give humans super sensory perception, superhuman powers. It sounds over the top. It sounds hyperbolic, but there's a bunch of examples of people getting superhuman powers using technology. Can we just talk about a couple of those?
Absolutely. So I come from a background as a neurophysiologist, thinking about how we integrate these things into technology, and there are a couple of things that are important. When we think about what augmented reality is, or what technology can do for us today ... when I first joined Dolby, we were in the process of working on Dolby Vision, which is a high-dynamic-range and wider-color-gamut imaging technology. Just to get an idea: the typical display you would have bought about three years ago would have been 300 to 400 candelas per square meter. The moon is about one to two thousand; sunlight on black pavement gets you up to 15,000 candelas per square meter.
So display technology was very far away from what our actual sensory system could handle. And in the process of development, we were working with some content and devices that allowed us to get up to 20,000 candelas per square meter in the brightness that was produced-
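To put those luminance figures in perspective -- a back-of-the-envelope sketch using only the numbers quoted in the interview -- the gap can be expressed in photographic stops, where each stop is a doubling of luminance:

```python
import math

# Luminance figures quoted above, in candelas per square meter.
typical_display = 400     # typical display from around three years prior
sunlit_pavement = 15000   # sunlight on black pavement
prototype = 20000         # brightness reached during Dolby Vision development

def stops(brighter, dimmer):
    """Brightness gap expressed in stops (doublings of luminance)."""
    return math.log2(brighter / dimmer)

print(f"display -> pavement:  {stops(sunlit_pavement, typical_display):.1f} stops")
print(f"display -> prototype: {stops(prototype, typical_display):.1f} stops")
```

By this measure, sunlit pavement is more than five stops (over 30x) brighter than what those displays could render, which is the gap the prototype was built to close.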