Thursday, November 16, 2017

Nick H....

This quote is so good....


As Saint Exupery said in his memoir “Wind, Sand and Stars”,


“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away”
cc snip...

In addition to the strong interest we’ve seen from companies addressing wearable and hearable designs, we’re also seeing growing interest from companies developing voice-enabled IoT products. The short story here is that voice has not only become the next interface; current trends suggest that always-on / always-listening voice is on pace to become the dominant interface for a wide variety of technology products – and, with that, give rise to the term: Internet of Voice.

Nick Hunn 1 yr ago... must read material

http://www.nickhunn.com/iov-the-internet-of-voice/


IoV – The Internet of Voice
January 26th, 2017 | Published in Usability & Design

Forget the Internet of Things – it’s a bubble. The majority of products currently claiming to be IoT devices are just the same, vertical M2M products we’ve always had, but taking the opportunity to benefit from a rebrand. Most of the rest of the IoT is the wet dream of Venture Capitalists and Makers who think that by overfunding and stimulating each other’s egos in a frenzy of technical masturbation, they can create a consumer market for the Internet of Things. As the IoT slips slowly backwards into the foothills of Gartner’s Hype Cycle you need to look elsewhere to find the real Internet device opportunity, which is only just emerging. It’s the IoV, or the Internet of Voice.

The problem that the current IoT paradigm has is that it’s mostly about collecting data and then applying algorithms to extract value from the data. That’s a difficult job. You need to make the devices, work out how to connect them and then hope you can find something valuable within the data to engage the customer. The trouble is that all of that takes time, not least the time to get a critical mass of products out into the field. The Catch-22 which most business plans ignore is that you need to deploy tens of thousands of devices to accumulate enough data before you can even see if there’s anything of value in it. But without an upfront value, people are loath to buy the devices. Everyone, from wearables manufacturers to smart cities, is discovering that it’s not a very compelling business case, not least because it needs fairly technical consumers to install everything in the first place.

The Internet of Voice takes a different route. Instead of expecting users to know anything about the IoT, they just get to ask questions and then get answers. No more buttons, no more keyboards, no more coding – just ask. But it has the power to control everything we come into contact with. It could mark the end of our love affair with smartphones and is probably the biggest threat that Apple faces today.

In many ways, the Internet of Voice is just the latest step in a constant journey of human enquiry. From the questions posed to the Delphic oracle, to the more recent fictional incarnations of HAL and Her, humanity has been captivated by asking questions and getting an apparently intelligent response. Today, we’re at the point where technology is moving that from fiction to fact and users are finding it remarkably addictive.

Rather surprisingly, given how oral our societies are, voice has often been the poor cousin of video. Telephone voice quality has frequently been terrible. Bluetooth headsets have performed a useful function in allowing phone calls whilst driving, but for most users, or rather recipients of a call from someone using a headset, the best one could hope for was that the voice was recognisable. The more upmarket section of the industry has worked hard to improve voice quality, but in general voice quality has been mediocre, with users content with old fashioned, telephony quality. Trying to do voice recognition through a headset often felt like an exercise largely dependent on chance.

The perception of voice has changed dramatically over the past few years, although bizarrely, it’s received limited recognition. The change started with Siri – Apple’s voice assistant, which was copied and improved on with Google’s Voice Search (now Now) and Microsoft’s Cortana. Users have taken to talking to their phones; last May, Sundar Pichai, Google’s CEO, reported that 20% of queries on its mobile app were now voice queries. However, the best indication of what voice could do came when Amazon launched Alexa on the Echo at the end of 2014.

Alexa introduced users to the concept of talking to the Internet whenever they wanted to know something, buy something or play music. It signalled a major change by removing the need to interact with any device; you no longer needed to take a phone out of your pocket or press a button – you just spoke a key word or phrase to the internet. It’s difficult to overestimate the importance of this change. Whilst some may find it creepy, just asking a question is so natural that it’s difficult to understand why it has taken so long to get there. The reason for that delay is that voice recognition is difficult. It’s needed a number of different technology enablers to come together: reliable, fast internet speeds for users, low cost, low latency cloud services and the machine learning for voice recognition to move it from novelty to everyday reality. Put them together and we’re now at that point where we can envisage a conversational internet.

Once you can talk to the Internet things start to change. Amazon, Google and Microsoft regularly present slides that show this as the natural evolution of user input, as we progress from keyboards to mice to smartphones to just talking. They refer to it as the new “conversational interface”, signifying that the internet is undergoing a hand to mouth evolution.


Why is this important? In five years, if voice recognition continues to improve at its current pace, then people may look back and wonder why they ever used a keyboard. But there’s another aspect to that evolution – people may also wonder why they ever tapped a smartphone. If all you need to do to get information is to vocalise your question, then it may not take long for people to fall out of love with their smartphones. In the same way that Apple destroyed the feature phone market, Amazon may equally well destroy the smartphone market.

The reason for that is that whilst Siri, Voice Search and Cortana have mainly been used as keyboard replacements, taking away the pain of typing on a smartphone, Alexa does something else. For many users it has become a companion. As in normal conversation, you don’t need to take anything out of your pocket or press a button – you just talk. In an interview with New Scientist, Daren Gill, director of product management for Alexa, says he has been surprised by how often people try to engage the assistant in purely social interaction. “Every day, hundreds of thousands of people say ‘good morning’ to Alexa,” he says. “Half a million people have professed their love. More than 250,000 have proposed. You could write these off as jokes, but one of the most popular interactions is ‘thank you’ – which means people are bothering to be polite to a piece of technology.”

There is little doubt that users find it appealing. From its initial application of ordering more things from Amazon, its use has expanded, thanks to Amazon’s approach to allowing anyone to add in skills, where additional keywords can direct the conversation to other companies. “Alexa, ask Meat Thermometer the temperature for pork?” will tell you how to cook a piece of pig. “Alexa, ask Tube Status about delays on the Victoria line” tells me about delays into the office. “Alexa, ask Wine Mate what goes with zebra?” tells Amazon something about my culinary experiments.
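The skill phrases above all follow the same fixed grammar: wake word, a linking verb, the skill’s invocation name, then the request passed on to the skill. A rough sketch of that routing idea in Python (illustrative only – this is not Amazon’s actual parser, and the one-or-two-word invocation-name limit is an assumption that happens to fit these examples):

```python
import re

# Illustrative invocation grammar: "<wake word>, <verb> <invocation name> <request>"
# Invocation names are assumed to be one or two words, as in the article's examples.
INVOCATION = re.compile(
    r"^alexa,?\s+(?:ask|tell|open)\s+"
    r"(?P<skill>\w+(?:\s\w+)?)\s+"   # invocation name (1-2 words; an assumption)
    r"(?P<utterance>.+)$",
    re.IGNORECASE,
)

def route(phrase: str):
    """Split a spoken phrase into (skill, utterance), or None if it doesn't match."""
    m = INVOCATION.match(phrase.strip())
    if not m:
        return None
    return m.group("skill"), m.group("utterance").rstrip("?.")
```

So “Alexa, ask Tube Status about delays on the Victoria line” would route the remainder of the utterance to the Tube Status skill; anything without the wake word is ignored.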

Voice recognition in the cloud, along with the AI to interpret and respond intelligently (which is a very different task to traditional voice to text) is highly disruptive. The last major disruption we saw in the mobile market was Apple’s introduction of the iPhone. That was disruptive because it changed the dominant skillset from RF expertise (the preserve of the previous cadre of suppliers – Nokia, Ericsson and Motorola) to User Experience. The iPhone won customers’ hearts because of its ease of use and what you could do with it. It can be argued that that change is what allowed Samsung to rise to its number one position in the handset market. The incumbents (Nokia, Ericsson and Motorola) were too arrogant to copy; Samsung wasn’t, and the rest is history.

In the same way that user experience changed the game in Apple’s favour, the AI behind voice recognition is poised to change the game again. This time, the companies which will succeed are those with the cloud AI expertise. Amazon has made the running, leveraging its AWS experience. Google is well placed to challenge, helped by its acquisition of DeepMind, who are already showing their capabilities with Google’s Neural Machine Translation. Microsoft’s recent acquisition of Maluuba shows that it intends to be one of the key players. However, this puts physical product companies like Apple and Samsung at a distinct disadvantage. Even with Siri and Viv, without the AI expertise to make the IoV compelling, they could quickly slip from market leaders to low margin followers.

Although Amazon, Google and Microsoft have played, or are playing, with mobile hardware, for them the device is secondary to the cloud service behind it. It’s why the keyword is so important – it becomes the brand, rather than the device which provides the route through to the cloud.




This is where Amazon has a clear advantage. Alexa is sufficiently divorced from the Amazon name that other brands are happy to use it – something which Amazon is actively encouraging, both through Alexa Services, which let hardware vendors build it into their products, and Alexa Skills, which allow applications to use its AI. At CES this year Alexa was generally considered to be the star of the show, even though Amazon weren’t present. More and more companies are jumping on the bandwagon. Ford lets you talk to your car, Huawei and LG let you talk to your phone and fridge; ADT lets you talk to your burglar alarm (“Alexa, can you tell the burglars they’re naughty people”), while Brinks Array have it in their door lock, so your burglars can ask to be let out whilst telling all of your other voice activated goodies that they’re about to get new owners. Some applications will be trivial and die, but with the proliferation of things to talk to and a growing range of Alexa Skills to provide answers, everything is in position for users to change the way they interact with the internet. A further advantage is that “Alexa” does not have the implicit brand baggage of “OK Google”, making Amazon a more attractive partner for many who don’t want to water down their own brand.

Ironically, Amazon’s Gill tells the story better. He is clear when he makes the point that “using speech in this way means the interface almost disappears”. Speech is so second nature that, as long as the AI and applications respond correctly, talking to the internet becomes natural. We will interact with it in the same way we do with friends. (Although anyone interested in how much we have already lost that conversational skill should read Sherry Turkle’s “Reclaiming Conversation”.)

This is why the IoV has the potential to be so disruptive. The history of computing and telephony has always involved touch – tapping a keyboard or holding a phone. The Internet of Voice removes that constraint – we just converse via a microphone which may exist on any number of household products. Futuresource Consulting reckon that 6.3 million voice assistants were shipped in 2016; Amazon admit that they had difficulty meeting demand for Echos in the run-up to Christmas. If we can believe CES, this is the year when we’ll start talking to (or through) tens of millions of devices.

Once we start talking rather than touching or tapping, it won’t take long to lose our connection with our smartphones. We’ll still need them for connectivity, but without a need to touch them to initiate a question, they may quickly become less relevant to our lives.


Apple’s decision to remove the 3.5mm jack is inadvertently driving this transition even faster, as it encourages manufacturers to put more functionality into their wireless headsets and earbuds.


Part of that functionality will be smart microphones which can listen for the key phrases. Knowles – a manufacturer of miniature microphones for phones and hearables – has already launched its VoiceIQ, a low power, always listening voice detector which connects to a voice DSP for key phrase detection.

Within the next year I expect to see these functions condensing onto a single chip and appearing as standard in most hearables. Makers have already demonstrated Echo functionality with Raspberry Pis and a $9 microcontroller board. For any device with a slightly better than minimum spec microcontroller and an internet connection, that’s just some additional code.
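That “additional code” amounts to a two-stage always-listening loop: a cheap energy gate runs constantly, and the costlier key-phrase detector runs only on frames that pass the gate. A toy sketch of that structure, with a made-up frame size and threshold (a real VoiceIQ-class detector uses proper DSP and trained models, not this):

```python
# Toy two-stage always-listening gate: a cheap energy check decides when to
# run the expensive key-phrase detector. Frame size and threshold are
# illustrative values, not taken from any real datasheet.
FRAME = 160            # samples per frame (10 ms at 16 kHz)
ENERGY_THRESHOLD = 0.02

def frame_energy(frame):
    """Mean squared amplitude of one frame of float samples in [-1, 1]."""
    return sum(s * s for s in frame) / len(frame)

def detect(samples, keyword_detector):
    """Scan audio frame by frame; run the detector only on active frames.

    Returns the sample offsets at which the detector fired.
    """
    hits = []
    for i in range(0, len(samples) - FRAME + 1, FRAME):
        frame = samples[i:i + FRAME]
        if frame_energy(frame) >= ENERGY_THRESHOLD:   # stage 1: cheap gate
            if keyword_detector(frame):               # stage 2: costly check
                hits.append(i)
    return hits
```

The point of the split is power: on silence the expensive stage never runs, which is what makes always-on listening feasible on battery-powered hearables.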

The IoV should be good news for a range of other largely unseen companies with expertise in analytics and voice processing. Lesser known audio processing specialists like Alango, who have been putting voice enhancement algorithms into cars for years, are looking at how they can leverage their IP in this new market. Other key enablers are also making their move, as demonstrated by ARM’s recent introduction of its Audio Analytic-based ai3 Artificial Audio Intelligence platform for Cortex chips and Mindmeld’s announcement of a deep-domain conversational AI platform.

All of this takes functionality and user ownership away from the smartphone. So how quickly will we fall out of love with them? That’s difficult to predict. A recent survey of users about their favourite phone features suggests it may be sooner rather than later. The top three applications were GPS directions, messaging and setting alarms without the need to touch your phone. The growth of hearable devices can take care of directions and alarms. That leaves messaging, and it will be interesting to see what voice does to that. Incorporating voice into its product may prove to be Facebook’s biggest challenge yet. If someone else gets voice right for social media it could be the chink in the armour which ends Facebook’s dominance and consigns it to becoming the next MySpace. Voice may also enable new services which attract our attention. We’re already seeing them emerging on Alexa Skills. I particularly like the Earplay skill on Alexa, which lets you take part in telling a story, heralding a new level of user interaction.

It’s clear that the battle between Google and Amazon is ramping up. Amazon is not just engaging with developers to integrate Alexa and develop skills, but has set up a $100 million Alexa Fund to invest in companies that want to innovate with voice. Google has launched its Assistant and Now and potentially has the better analytics engine, but needs to get users talking to it to build up its response AI database. Microsoft is keeping its powder dry, whilst Apple with Siri and Samsung with Viv are increasingly looking like hardware vendors whose voice roadmap doesn’t go much beyond voice to text. There is little to suggest they will be contenders. Phone vendors also have a difficult choice – do they support IoV applications like Alexa and Now, which direct the voice questions to a competitor, or do they try to block them in favour of their own options? If they block them, they risk alienating the consumer, speeding up the point at which users defect from that phone.

Amazon has an interesting advantage in terms of monetisation, as they get a cut of revenue when users place an order using Alexa. The investment firm Mizuho reckons that could be as much as $7 billion by 2020. In contrast, Google’s revenue model focuses on advertising and it’s not clear how that can be mapped onto voice. But they have the cash to ignore revenue in the short term and buy customers while they improve their user experience. They should also have the better infrastructure to support smart home devices, leveraging the work they’ve already done with Nest and Thread. Despite that, and recently signing up NVIDIA, who have smart home aspirations, Alexa seems to be making the running.

Although the battle is between Amazon and Google, it will not stop others trying to define their own niche in the Internet of Voice. Oakley’s Radar Pace is one example – a spectacle in all senses of the word. It uses AI as your personal trainer, allowing you to ask questions like: “OK Radar, what’s today’s workout?”, “OK Radar, what’s my power?”, but presumably not, “OK Radar, do I look like a dick when I wear this?” There are times when you realise companies have been seduced by the promise of too much technology. As Saint Exupery said in his memoir “Wind, Sand and Stars”, “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away” – a maxim which many of today’s manufacturers should try to understand. It’s why the smartphone has nowhere else to go – the next step in simplicity is the Internet of Voice. Amazon’s Echo shows they have taken that maxim to heart.

So, “Alexa – I think I want to divorce my smartphone, as I don’t love it anymore. Will you marry me? I want your babies.”

I have added his site to my favorites. QUIK knows this guy – read this blog when it was written. With the traction in hearables, they have for the first time used the phrase that Nick H probably coined: IoV.

Is it significant?

Very.

Really had fun this am.

Please, we need all the Qs and comments we can get.

Wednesday, November 15, 2017


  1. Commentary; BF spoke well on the shift from pushing a button to talk to a device....
    Samsung has the Bix button – they NEED to lose it ASAP.

    The snips of text show how Ssung is shifting at the top....


    http://www.nationmultimedia.com/detail/Startup_and_IT/30330391


    Competition heats up to take lead in AI initiative at Samsung

    Tech October 30, 2017 14:47





    Tech behemoth strives to catch up on AI specialists, database with AI-powered appliances, new services


    Currently, Bixby is led by the IT and mobile communications division and supported by the Samsung Advanced Institute of Technology and Digital Media & Communications R&D Center.

    SAIT is a center researching future technologies looking to the distant future -- five or 10 years -- while the DMC R&D Center aims to shift the paradigm of electronics within one or two years.

    SAIT is led by chip expert Chung Chil-hee, while the DMC R&D Center is headed by Kim Chang-yong specializing in 5G, 3-D technologies and digital media.

    And multiple divisions at Samsung are in competition to take initiative in the AI drive.






    “In the absence of the de facto leader (Vice Chairman Lee Jae-yong), the company doesn’t yet have one (a particular head or organization) to take the lead in AI,” the official said. “SAIT, DCM R&D Center and each business division are separately preparing for AI,” but they would need a clear control tower to create better synergy, the official said.

    In an apparent push to move forward, Rhee In-jong, the head of the mobile division who was responsible for the rollout of Bixby, was replaced earlier this month by Chung Eui-suk, formerly Samsung Research America’s vice chief, to lead the voice assistant, under Samsung’s mobile chief Koh Dong-jin.


    Ssung....get rid of the push to talk, i.e. the Bix button...

    thanks.


  2. jfieb



    Flows of data to enable innovation in autonomous car, healthcare sectors: Samsung president
    2017.10.18 09:36:14 | 2017.10.18 11:23:08

    Streams of data will bring innovation in self-driving car and healthcare technologies, Samsung Electronics Co. President and Chief Strategy Officer Sohn Young-kwon said, urging business and government to prepare for the era of the data economy by taking risks and reducing regulations.

    “In just 60 seconds, Google deals with nearly 380,000 queries, and 3.3 million posts are uploaded on Facebook,” Sohn said at a session titled “Driving Innovation in the Data Economy” at the 18th World Knowledge Forum on Tuesday. “Such flood of data is creating new opportunities in the healthcare and self-driving car industries.”

    Sohn added that the exponential rise in data with enhanced connectivity will finally allow artificial intelligence and the Internet of Things (IoT) to lead innovation in human life. Data accumulated over the past decades is ready to make big changes in human life as it is applied to various sectors such as healthcare, transportation, manufacturing, and agriculture. New devices such as smartphones have served as a catalyst in boosting new industries.

    The Samsung executive expected that the data economy would have the most profound impact on the healthcare and self-driving car industries. He projected it may cost only $100 to analyze a human genome 10 years from now, and that prevalent genome analysis would lead to new opportunities in the aging society by offering customized treatments and allowing disease prediction.

    Sohn cited strict regulations and a lack of entrepreneurship in Korea as obstacles in the era of data economy. The country’s so-called “pali-pali” or fast culture could help promote overall growth, but it also could defer advancement. Sohn urged companies to make long-term investment with patience looking ahead five to 10 years.

    QUIK veterans understand the long-term investment and patience Sohn speaks of.

  3. jfieb



    SAIT’s page on health



    The rise in healthcare costs as a result of population aging as well as the increasing prevalence of chronic illness has become a critical social issue. And so there is a rapidly growing need for Mobile Healthcare (MH) solutions, by which normal people can better manage their health status and prevent themselves from becoming chronically ill patients. MH is enabled via the merging of advanced sensor technologies along with Information and Communication technologies.


    MH solutions continuously sense, analyze, and transmit various clinical parameters (including not only vital signs but also a range of biomedical signals, glucose levels, physical activity levels, etc.) obtained in real time, anywhere, anytime, via ubiquitous wearable and/or implanted biomedical sensors, so that health specialists can remotely monitor user conditions, provide feedback, and offer appropriate health advice. This is a truly revolutionary system that helps to provide quality healthcare services at low cost.


    SAIT is performing world-class research on smart MH solutions for businesses in today’s rapidly evolving healthcare industry. We envision the creation of core technologies that will allow novel sensing of human health conditions in a non-invasive manner and novel sensing platforms to be networked with smart personal devices such as wearable devices, smart phones, tablet PCs, and smart TVs, combined with new healthcare services that can be easily integrated into our daily lives.


    that sentence....MH solutions are for businesses, as Rick has written about.
  4. jfieb



    Commentary; There were some real politics – Korean mobile division vs. US SAIT – so it’s good that some US folks are moving up the ladder and being spoken of as rising stars....




    I put this up again for the tag line,

    No, I don’t know what that sensor on the chest wall is.


    The advent of Ubiquitous healthcare.

Tuesday, November 14, 2017

INTERVIEW: Todd Mozer, CEO, Sensory
Posted on October 20, 2017


Peter O’Neill recently interviewed Todd Mozer, CEO of Sensory. The conversation begins with some highlights of Sensory’s exceptional year before diving into the details of a recent major milestone for the company: the integration of its TrulySecure biometric software into LG’s latest flagship smartphones. LG’s move came in anticipation of Apple’s recent Face ID announcement, and Mozer speaks about taking a leadership role in an era when Apple can no longer keep its secrets. The conversation goes on to talk about speech, voice, and face biometrics in the world of IoT and smart homes, how decentralized biometrics can prevent major data breaches like the recent Equifax fiasco, and Sensory’s impressive virtual banking assistant technology.

Read our full interview with Todd Mozer, CEO, Sensory:
Peter O’Neill, President, FindBiometrics (FB): Has this been a good year for your company? What have been some of the highlights?

Todd Mozer, CEO, Sensory: It’s been a great year for Sensory. We have a couple of product lines that are really doing well. Our fiscal year just ended in September, and we grew over the previous year by more than 50 percent, and we are expecting continued growth in that range for 2018.

Things are going well with TrulyHandsFree, our low-power speech recognition engine. The IoT market segment has really created a lot of demand for voice controlled products, and we are getting a lot of traction from companies using cloud based digital assistants that want embedded wake up words and commands. On the biometric side of things, we are seeing a lot of opportunity and growth in both the face and the voice side of our TrulySecure product line. Obviously, mobile phones are a big area, and vertical markets like banking are taking off for us, so overall things are doing quite well.

FB: You just recently announced that LG’s latest phones will feature your TrulySecure facial recognition. Can you tell us a little bit about that news?

Sensory: The whole mobile phone industry – led by LG, Apple, and Samsung – is moving to face authentication as part of its biometric stack. Apple is moving towards removing the fingerprint and Samsung seems to be doing a layered approach with a lot of biometric technologies. Most of the mobile players are following this trend. LG put Sensory’s TrulySecure in the G6, Q8, and V30 lines. I just bought a V30 and it’s a fantastic phone!

FB: Now your comment about Apple, we are in total agreement with you. When that news came out we felt that it would have the same impact as Touch ID several years back had on our industry, and I guess you are feeling that almost immediate impact it would seem?

Sensory: Well, everybody knew it was coming. I think one of the big things about Apple’s latest announcements was that there were no big surprises. They were so historically famous for keeping things secret and then surprising us with one more thing, and this time there were no surprises. The whole mobile industry knew that face was coming, and LG took a leadership role in setting the trend in this face authentication space.

But even on the voice biometric side we have released our higher end voice biometrics that we call TrulySecure Speaker Verification in both the Moto X and Moto Z, and it’s been used by Samsung, Huawei, and now LG as well, so we are getting a lot of traction for both voice and face across the mobile phone segments.

FB: You started to mention IoT and voice. It’s an area that we are very interested in – home automation etc. – and the two biometrics that you focus on, face and voice, are really considered explosive in terms of their growth right now. One, because of Apple, but on the voice side, what is driving that? It seems to be going through quite a growth period, what are the drivers there?

Sensory: Amazon’s Alexa did for the IoT space what Siri did for voice on mobile phones, and everyone and their brother is now shipping a speaker that is voice controlled. In fact, just today Sonos announced their new speaker, which I’m kind of excited about just because Sonos has really the highest quality stuff out there and I’m a big fan of theirs for audio quality and connectivity. The Alexa phenomenon is driving a lot of adoption across a wide range of home consumer type products. A lot of these IoT devices use TrulyHandsfree, which gives embedded voice wake-up. And as the products get more advanced they are using TrulyHandsfree for embedded commands (fast response and wifi not required), and many are layering TrulySecure speaker verification on top of our wake-up word technology so that when you talk to the product it knows who you are. They’re a little slower in adopting that than just the wake word, but I think over the next year what you are going to see is more and more vendors having wake words that do entail speaker verification. Amazon just announced it in their Echo line. In home robotics, JIBO, the home personal robot, recognizes your voice when you say, “Hey JIBO”. JIBO identifies who you are because it has Sensory’s speaker verification built into the wake-up.

FB: Now you also recently launched your bank teller app and I have had a chance to demo it – it’s so cool with the avatar – and there is a lot of cutting edge technology involved in this product, can you please describe it to us?

Sensory: Sure, let me first give a little background on some of the technologies involved, because you are absolutely right: we have really combined a whole lot of AI technologies, and I suspect it is the most advanced demonstration of AI that’s ever been done at this point, and I mean from anybody. We’ve combined not only text-to-speech and wake-up word speech recognition, but we put a natural language speech recognition engine into it too, and we have biometrics in it which combines face, or voice, or face and voice, so we can identify people. On top of all that we have an animated talking avatar so your assistant has a face; the lip synchronization is driven by the TTS engine, which uses a proprietary non-linear morphing scheme to change the mouth shape and image to match distinct visemes that accurately model the way we speak.

For the Finovate conference – it’s one of those demo-based shows where you get up on stage for seven minutes – we put together a bank teller app, so it’s like having a bank teller on your phone that you talk to and it talks back to you. So, you are basically interfacing with what looks like a person, and you talk to it and it talks back. We’ve designed the natural language interface to be specific to the domain of banking so I can say things like, “Transfer $134.17 from my checking to my savings account” or “Pay my phone bill from checking” or “What’s my savings account balance?” or “Where do I go to buy checks?”, those kinds of things. I can send money to other people that are in my contacts list. It is a very smart and intelligent interface where if I leave things out it can do what is known as ‘form filling’ to prompt me for the things I left out. For example, if I say, “Transfer $100 to Peter O’Neill” it will say, “From which account would you like to transfer?”
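The ‘form filling’ behaviour Mozer describes can be sketched as a simple check over required slots: the dialog manager prompts only for whatever the first utterance left out. This is a generic illustration of slot filling, not Sensory’s implementation; the intent, slot names, and prompts are invented for the example:

```python
# Minimal slot-filling sketch for a hypothetical "transfer" intent.
# Required slots: amount, source account, destination. The dialog manager
# prompts only for the first slot the user's utterance left unfilled.
REQUIRED = ("amount", "source", "destination")
PROMPTS = {
    "amount": "How much would you like to transfer?",
    "source": "From which account would you like to transfer?",
    "destination": "To which account or person?",
}

def next_prompt(slots):
    """Return the prompt for the first missing slot, or None when complete."""
    for name in REQUIRED:
        if slots.get(name) is None:
            return PROMPTS[name]
    return None
```

Parsing “Transfer $100 to Peter O’Neill” would fill amount and destination but not source, so the first prompt is exactly the follow-up question quoted above.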

FB: To me this is a major step forward. When I saw the demo, I was very impressed, I think mostly with the user interface and the comfort that is created with this avatar and lip synching technology; it is so natural. I think people will feel at ease using it. Are you finding that?

Sensory: The lip synch and the avatar is something that we had developed a long time ago and we just put on the back burner because we didn’t see a good business model. I do think with the rise of personal assistants we are going to see more use of avatars as personal assistants, but it hasn’t happened yet. From the demo that we gave there were a few banks that approached us that were interested, but it’s really not a big focus of ours today. We do think we want to get a few customers out there with this sort of technology to highlight that it is feasible but is not an area that we are putting a lot of development on.

FB: You mentioned one of the big news events that occurred last month and that was the Apple launch. The other was the Equifax breach, which really shook up our industry. What is your view on that and how can biometric technology help solve these breach issues?

Sensory: Well, it just highlights the philosophy Sensory went into biometrics with. And, in fact, it’s not just biometrics; it’s what we do with our speech recognition too. We do it all on-device, so it never gets sent out to a centralized database or cloud where a hacker can steal information from everybody at once. In fact, with the Yahoo breach it was just announced that they had totally underplayed the quantity; it turned out that all of Yahoo’s contact information was stolen, not just some percentage of it. If there’s a bunch of data sitting in the cloud, then the more information there is and the more personal it is, the more incentive hackers have to try to break in and steal it. If there’s enough data, the value gets so high that professionals and governments will try to take it. So keeping more on-device, in your home or in your pocket, is just a safer approach, and that is the tactic Sensory has taken with both speech recognition and biometrics.
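The on-device philosophy can be illustrated with a toy matcher: the enrolled template lives only in local storage, and verification returns just an accept/reject decision, so neither the template nor the fresh sample ever leaves the device. The cosine-similarity matcher and threshold here are illustrative assumptions, not Sensory's algorithm:

```python
import math

# Toy sketch of on-device biometric verification: the enrolled
# template is stored locally at enrollment time, and matching
# compares a fresh feature vector against it on the device itself.
# No network call is made; only a boolean decision is produced.

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(local_template, sample_features, threshold=0.9):
    """Local accept/reject decision; no data leaves the device."""
    return cosine(local_template, sample_features) >= threshold

template = [0.2, 0.8, 0.5]                    # stored on-device at enrollment
print(verify(template, [0.21, 0.79, 0.52]))   # True: close match
print(verify(template, [0.9, 0.1, 0.0]))      # False: different user
```

The contrast with a cloud design is that a breach of any one device exposes at most one template, rather than a centralized database of everyone's biometrics.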

FB: In several of our recent webinars and panels it appears that end-user education is becoming critically important as the industry starts to move faster and faster. We are trying very hard to make sure that biometric experts such as yourself are speaking about some of the challenges in the marketplace. How critical do you think this education aspect is, and what are some of the challenges you see as biometrics move ever more quickly into the marketplace?

Sensory: When you think about the taking of personal information, there are two areas to consider. One is the legal taking by companies that have you sign an agreement: you click a button and say, yes, take all my personal information and use it any way you want. Here the education is about how well the consumer understands the full consequences of what they are signing off on. The second area is illegal theft, the Yahoo and Equifax kind of breaches. They each require a different sort of education, and I think it is very hard to quantify how safe the different websites and services we trust with our personal information really are. I saw a really interesting documentary a while back, I think it was Terms and Conditions May Apply, and it talked about the ways we sign off on giving away all our data in return for free software. They make it so you don’t want to read the agreement; they put it in dense, six-point type, so people don’t read the EULAs. If there were legislation that forced them to summarize it in a few simple bullet points, like “we are going to use your data for X, Y, and Z and not for A, B, and C,” I think that would be really valuable for consumers.

FB: I’m aware of that push, and it is a long road to get there, but it is needed because nobody reads the 17 pages of fine print you talked about. It can probably be boiled down to the four or five points that are critical for end users like you and me when we sign up for anything. Privacy advocates are definitely heading in that direction and I hope it happens soon.

Sensory: Yes, it is a tough and challenging battle. And you know we get free stuff, right? Software has become essentially free by giving away our personal information.

FB: Well Todd it’s been an incredible year for your company, what’s next for Sensory?

Sensory: We’re just trying to keep up with our current customers. We’ve gone from having a small base of really big customers to a much broader base of customers, so we need to keep growing our support staff and our technology staff to deal with the increasing demand we are seeing. But we are also moving in some exciting new directions: we’re going to move toward 3D cameras with the face authentication; we think Apple’s move in that direction was the right one. And we’re working on sound and scene analysis: because we’re in so many always-on products, we want to be able to identify different sounds, whether it’s glass breaking, a baby crying, or an alarm going off, things that can help the owners of the homes where these devices are deployed.

FB: Thank you Todd for taking the time to speak with us today. Your products are fantastic and I think you are really in the sweet spot of where the industry is heading, so congratulations on that.


My investment in QUIK is 10% about Sensory, and has been for a while now. The IoT will start to show that value in '18.
I really liked this item and I want to bask in it...

Chief Strategy Officer Sohn Young-kwon, 
who is based in Silicon Valley.

:cool:

said it, the guy tasked with global innovation.

1. It’s no flash in the pan. In the tortoise and the hare, they can be the tortoise but still get there in a solid way.

2. A sense of the scale of the opportunity for them... it's global, and aging populations will need this.

3. They get that it's NOT just a nice device; it's the platform and the data, and how to make use of the data. That explains the delays and gives a very good reason for them. They want a LOT... not many can even consider doing this. A tier 1 has the resources to make it real.


Young Sohn
President and Chief Strategy Officer
Samsung Electronics

Location: San Francisco, California, United States
Gender: Male
Investor Type: Individual/Angel



Young Sohn is president and chief strategy officer for Samsung Electronics, and Chairman of the Board of Harman International. He leads development and strategy for global innovation, investment and new business creation. Young led the $8 billion acquisition of Harman International Industries.


wow, and he said it.

Monday, November 13, 2017

the snip all by itself, that says it all

are they committed?

One of the long-term growth plans of Samsung is to combine its current manufacturing capabilities for wearables with advanced digital health care platforms, according to the tech titan’s Chief Strategy Officer Sohn Young-kwon, who is based in Silicon Valley.




the rest


http://business.inquirer.net/240405/samsung-electronics-ventures-healthcare#ixzz4yN26Hfy8