Thursday, July 30, 2015

Sensory will change the game... here is some material to work through. Worth ALL the time it takes.
Nice material.  QUIK has done better than I had hoped for here.




  1. Nice, a lot of reading to do. I was very curious to find out where the CRUCIAL part of an SoC would come from.
    From Brian's comments it will be worth the time to get a better understanding. I'm glad it's not Audience.

    I will be reading all their blog entries tonight.

    Good Technology Exists – So Why Does Speech Recognition Still Fall Short?
    March 30, 2015

    At Mobile World Congress, I participated in ZTE’s Mobile Voice Alliance panel. ZTE presented data researched in China that basically said people want to use speech recognition on their phones, but they don’t use it because it doesn’t work well enough. I have seen similar data on US mobile phone users, and the automotive industry has also shown data supporting the high level of dissatisfaction with speech recognition.

    In fact, when I bought my new car last year I wanted the state of the art in speech recognition to make navigation easier… but sadly I’ve come to learn that the system used in my Lexus just doesn’t work well — even the voice dialing doesn’t work well.

    As an industry, I feel we must do better than this, so in this blog I'll provide my two cents as to why speech recognition isn't where it should be today, even when technology that works well exists:
    1. Many core algorithms, especially the ones provided to the automotive industry, are just not that good. It's kind of ironic, but the largest independent supplier of speech technologies actually has one of the worst performing speech engines. Sadly, it's this engine that gets used by many of the automotive companies, as well as some of the mobile companies.
    2. Even many of the good engines don't work well in noise. In many tests, Google's speech recognition comes out on top, but when the environment gets noisy even Google fails. I use my Moto X to voice dial while driving (at least I try to). I also listen to music while driving. The "OK Google Now" trigger works great (kudos to Sensory!), but everything I say after that gets lost and I see an "it's too noisy" message from Google. I end up turning down the radio to voice dial or use Sensory's VoiceDial app, because Sensory always works… even when it's noisy!
    3. Speech application designs are really bad. I was using the recognizer last week on a popular phone. The room was quiet, I had a great internet connection, and the recognizer was working great, but as a user I was totally confused. I said "set alarm for 4am" and it accurately transcribed "set alarm for 4am," but rather than confirm that the alarm was set for 4am, it asked me what I wanted to do with the alarm. I repeated the command; it accurately transcribed again and asked one more time what I wanted to do with the alarm. Even though it was recognizing correctly, it was interfacing so poorly with me that I couldn't tell what was happening, and it didn't appear to be doing what I asked it to do. Simple and clear application designs can make all the difference in the world.
    4. Wireless connections are unreliable. This is a HUGE issue. If the recognizer only works when there's a strong Internet connection, then the recognizer is going to fail A GREAT DEAL of the time. My prediction: over the next couple of years, the speech industry will come to realize that embedded speech recognition offers HUGE advantages over the common cloud-based approaches used today, and these advantages exist in not just accuracy and response time, but privacy too!

    Deep learning nets have enabled some amazing progress in speech recognition over the last five years. The next five years will see embedded recognition with high performance noise cancelling and beamforming coming to the forefront, and Sensory will be leading this charge… and just like how Sensory led the way with the “always on” low-power trigger, I expect to see Google, Apple, Microsoft, Amazon, Facebook and others follow suit.



  2. I will put this one up because it's Samsung.

    a snip from this year's MWC:

    I’d be remiss without mentioning the Galaxy S6. Samsung invited us to the launch and of course they continue to use Sensory in a relationship that has grown quite strong over the years. Samsung continues to innovate with the Edge, and other products that everyone is talking about. It’s amazing how far Apple took the mantle in the first iPhone and how companies like Samsung and the Android system seem to now be leading the charge on innovation!


    Samsung selects Sensory as Key Source for Embedded Speech Technologies
    Santa Clara, CA – September 5, 2014 …Sensory Speech Recognition Deployed by Samsung Across a Wide Range of Phones, Wearables, and Cameras.

    Sensory Inc., the industry leader in speech and vision technologies for consumer products, is pleased to announce that its pioneering TrulyHandsfree™ voice technology is deployed across an array of Samsung's iconic Galaxy products including smartphones, tablets, cameras, and wearables. TrulyHandsfree™ is the leading always-on, always-listening voice control solution that just works. It enables users to activate and access their phone with an ultra-low power voice trigger. The TrulyHandsfree™ voice control can also enable extremely high accuracy command sets that do not require close talking mics, quiet rooms, or even saying things exactly right. Samsung uses these features to answer calls, use the camera, or perform other functions where talking is easier and more convenient than touching the device. Samsung also uses TrulyHandsfree™ as the voice trigger for S-Voice. The technology is robust enough to work in noisy environments, has a low risk of false starts (won't make a call when you don't want it to), and has minimal impact on battery life, making it the ideal voice control solution for mobile and wearable devices. Since its inception, TrulyHandsfree™ trigger technology has become the most widely adopted keyword spotting technology in the speech industry.

    Among the Samsung devices implementing the TrulyHandsfree™ technology is the flagship Galaxy S line of phones. Sensory was first introduced in Galaxy S2 and has been a key part of GS3, GS4, and now GS5.

    Outside of smart phones, other Samsung products incorporating TrulyHandsfree™ include the Galaxy Note 1, 2, 3, and 4 devices and the Galaxy Gear wearables line including Gear 1, Gear 2, and Gear S.

    Sensory is also in cameras and tablets which deploy S-Voice.

    “Samsung continues to be a standard bearer and innovator in an array of dazzling and savvy technology devices which meet the needs of mass consumers,” noted Sensory’s CEO Todd Mozer. “We are very pleased that they have selected Sensory for all their embedded speech needs.”



    “Sensory has emerged as the clear leader in low-power high-accuracy speech recognition, and the widespread adoption across Samsung products is a testament to their success,” said William Meisel, President of TMA Associates, which provides insights and consulting support to companies that want to incorporate speech technologies into their products or services.

    For more information on TrulyHandsfree™ contact sales@sensory.com.

    About Sensory, Inc.
    Sensory, Inc. is the leader in UX technologies for consumer products, offering a complete line of IC and software-only solutions for speech recognition, speech synthesis, speaker verification, vision and more. Sensory’s products are widely deployed in consumer electronics applications including mobile, automotive, wearables, toys, and various home electronics. With its TrulyHandsfree™ voice control, Sensory has set the standard for mobile handset platforms’ ultra-low power “always listening” touchless control. To date, Sensory’s technologies have shipped in over half a billion units of leading consumer products.

    TrulyHandsfree is a trademark of Sensory, Inc.
  3. jfieb



    I read a lot of job offerings; Sensory has one like this:

    Senior Software Development Engineer - 5+ Yrs Exp (Loc: Santa Clara CA)
    Sensory, Inc - United States
    Job Code: 15-10
    *Location: Santa Clara CA

    Sensory offers an exciting opportunity to change the world of consumer electronics with best-in-class speech and image recognition technologies and chips. Sensory is a private, growing technology company and the leader in a rapidly expanding market for voice and vision user interfaces. Sensory's specialties are user interfaces for mobile phones, tablets and notebooks; home automation; automotive and entertainment robotics. Sensory has design wins and products shipping with major OEMs in all these fields, including the largest mobile OEMs. Sensory's technologies are deployed in hundreds of millions of units worldwide. Visit sensory.com for more details.


    It's important, as it's the UI for many devices QUIK needs to be in.

    Qualifications

    The ideal candidate combines a high level of creativity and analytic ability, with get-it-done practicality and excellent work quality. This individual will be part of the team that creates and deploys to the market best-in-class speech recognition and natural language solutions on smart phones, tablets, PCs and consumer electronics.

    Primary Duties/Responsibilities

    Software Programming

    • Architects and implements speech recognition, speech synthesis, voice processing and noise management algorithms and software on a variety of DSP-based and ARM-based platforms from the market leaders

    So QUIK has hardened some part of their algo IP.
    • Evaluates new processor platforms for feasibility of implementing Sensory technologies
    • Ports technology software to various processor platforms and validates performance
    • Develops and maintains in-house and customer tools to support product application of Sensory technologies
    • Defines and implements simulations and scripts for validating, evaluating, and improving the performance of Sensory technologies.
    Technology Algorithms

    • Learns and understands existing proprietary Sensory technology in depth
    • Collaborates with the theoretical algorithms development group in the development and improvement of speech recognition, synthesis, voice processing and noise management algorithms.
    Requirements

    • MS in EE or CS.
    • Five years programming experience in product development.
    • Proven ability to be very innovative and productive in teams and working solo
    • Proven experience writing DSP algorithms in C/assembly code in shipped products.
    • Working knowledge of a few commercial DSPs or DSP cores.
    • Solid signal processing knowledge
    • Solid knowledge of analytic techniques, statistics, mathematical modeling.
    • Proficiency in assembly language and embedded “C”, demonstrated by a past primary programming role in one or more fully-released products.
    • Ability to develop software from existing code, detailed specification, or general conceptual outline; equally adept at high-level algorithmic software design and low-level code optimization.
    • Good oral and written communication skills, ability to exchange and debate complex technical concepts face-to-face or remotely.
    • U.S. citizen or permanent resident
    Preferences

    • Experience with Perl, Tcl/TK, or similar scripting languages
    • Experience writing Matlab simulations of algorithms
    • Signal processing knowledge for speech
    • Knowledge of digital audio processing
    • Familiarity with embedded systems hardware, ADCs, DACs, ability to read schematics.
    • Experience with Acoustic Echo Canceller (AEC), Beamforming and Noise Reduction Algorithms– such as for mobile, auto, conf call, etc.
    When applying, please reference: Job Code: 15-10 - Senior Software Development Engineer on subject line.



    QUIK could not have an SoC without this, and now it may be an important offload from the AP... that way it's always on.
  4. jfieb



    17 July 2015
    Powerful forces, old and new, have come together to dramatically change the way humans interact with devices. The voices of Siri, Cortana, and Echo have heralded this change to consumers and electronics developers alike, potentially marking the end of an era when tap, pinch, slide, and swipe dominate user interfaces. Very soon, the most natural form of communication—speech—will dominate human-machine interactions, and the pace of this change is taking everyone’s breath away.

    “The velocity of the improvements we have made with voice is like nothing I have ever seen before,” says Kenneth Harper, senior director of mobile technical product management at Nuance Communications. “But what we have today is just the tip of the iceberg. This vision of ubiquitous speech will become a reality in the future. In the next year, we are going to see a lot of new interfaces come to market, where speech actually is the primary interface.”

    Again, this was very important for us, and QUIK has done very well here.

    The Need for Speech

    The shift to voice-enabled interfaces has been accelerated by the emergence of the Internet of Things (IoT) and broad adoption of mobile and wearable devices. As the IoT takes shape, promising to provide ubiquitous connectivity to almost limitless information, consumers increasingly expect easy and convenient access to data. Unfortunately, traditional device interfaces often hinder, rather than facilitate, such access.


    Something Old and Something New

    To make this leap forward, developers needed a technology that could process the complexities of language and information retrieval in much the same way that the human brain does. This translates into nonlinear, parallel processing that learns from data instead of relying on large sets of rules and programming.

    For this, developers have turned to neural networks, a branch of machine learning that models high-level abstractions using large pools of data. Although neural networks (also known as deep learning) have been sidelined as a computing curiosity for several years, researchers have begun harnessing their ability to improve speech-recognition systems.

    Neural nets use algorithms to process language via deeper and deeper layers of complexity, beginning by identifying phonemes (perceptually distinct units of sound), learning the meaning of key words, and progressing to the point where they understand the importance of context. Ultimately, the algorithms put words together to form sentences and paragraphs that conform to the rules of grammar.
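
    To make the layering idea concrete, here is a minimal C sketch of a single dense layer turning one frame of audio features into per-phoneme scores. It is my own illustration (the feature and phoneme counts are assumptions), not code from any recognizer discussed here; real acoustic models stack many such layers, which is where the deeper layers of complexity come from.

        /* One dense layer of an acoustic model: scores = softmax(W*feats + b).
           Sizes are assumptions for illustration only. */
        #include <math.h>
        #include <stddef.h>

        #define N_FEATS  40   /* e.g. 40 filterbank energies per 10 ms frame */
        #define N_PHONES 46   /* e.g. a 46-phoneme inventory */

        void phoneme_scores(const float feats[N_FEATS],
                            const float W[N_PHONES][N_FEATS],
                            const float b[N_PHONES],
                            float scores[N_PHONES])
        {
            float maxv = -1e30f, sum = 0.0f;
            for (size_t p = 0; p < N_PHONES; p++) {
                float acc = b[p];
                for (size_t f = 0; f < N_FEATS; f++)
                    acc += W[p][f] * feats[f];   /* weighted sum of features */
                scores[p] = acc;
                if (acc > maxv) maxv = acc;
            }
            /* Softmax turns raw scores into a probability per phoneme. */
            for (size_t p = 0; p < N_PHONES; p++)
                sum += (scores[p] = expf(scores[p] - maxv));
            for (size_t p = 0; p < N_PHONES; p++)
                scores[p] /= sum;
        }

    Higher layers then combine these frame-level phoneme probabilities into words and sentences, which is the progression described above.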

    What makes neural nets so relevant now? Increased use of speech recognition and information retrieval systems like Siri, Cortana, and Echo has created large pools of data that train neural nets. The appearance of this data coincides with the availability of affordable computer systems capable of handling very large data sets. These two resources have enabled the developers to build bigger, more sophisticated models to create more accurate algorithms.

    These new and improved models have increased the effectiveness of voice interfaces in two ways: they have improved speech-recognition systems' ability to transcribe audio into words, and they have enabled a technology called natural language understanding, which interprets the meaning and intent of words.

    “.......

    Processors Built for Voice

    While these software developments have greatly enhanced speech-recognition systems, hardware advances have also played a key role. Researchers credit graphics processing units (GPUs) with providing the computing power required to handle the large training data sets, which is essential in developing speech recognition and natural language understanding models. These processors possess qualities that make them ideal for voice systems.

    To begin, GPUs do not burn as much power or take up as much space as CPUs, two critical considerations when it comes to mobile and wearable devices. It is their capacity for parallel computing, however, that makes GPUs so well suited for neural network and voice processing applications. These highly efficient systems provide the bandwidth and power required to convert large training data sets into models. The graphics processors are not as powerful as CPUs, but developers can still divide larger calculations into small pieces and spread them across each GPU chip. As a result, GPUs routinely speed up common operations, such as large matrix computations, by factors of 5 to 50, outpacing CPUs.
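
    That parallelism is easy to see in code. Below is a plain serial C sketch of the matrix multiply at the heart of neural-net training; every element of C depends only on one row of A and one column of B, so all M*N outputs can be computed at the same time, which is exactly what a GPU exploits with its thousands of small cores. (Illustration only.)

        /* C = A (MxK) times B (KxN), row-major. Each (i, j) output is
           independent of every other one, so a GPU can assign one thread
           per element; this serial CPU version grinds through them in order. */
        void matmul(const float *A, const float *B, float *C,
                    int M, int K, int N)
        {
            for (int i = 0; i < M; i++)
                for (int j = 0; j < N; j++) {
                    float acc = 0.0f;
                    for (int k = 0; k < K; k++)
                        acc += A[i * K + k] * B[k * N + j];
                    C[i * N + j] = acc;
                }
        }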

    “As we have gotten more sophisticated GPUs, we have also gotten more sophisticated ways of interacting with products through voice,” says Todd Mozer, CEO of Sensory Inc.

    Cloud vs. Local…or Something in Between

    Speech-recognition systems come in three flavors: cloud-based implementations, locally residing systems, and hybrids. To determine the right design for an application, issues to consider are processing/memory requirements, connectivity, latency tolerance, and privacy.

    The size of a speech-recognition system’s vocabulary determines the RAM capacity requirements. The speech system functions faster if the entire vocabulary resides in the RAM. If the system has to search the hard drive for matches, it becomes sluggish. Processing speed also impacts how fast the system can search the RAM for word matches. The more sophisticated the system, the greater the processing and memory requirements, and the more likely it will be cloud-based. However, this may not be so in the future.
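
    A rough sketch of why this matters (the numbers and code are my own back-of-envelope illustration, not from the article): if each vocabulary entry carries, say, a 2 KB acoustic model, a 5,000-word command vocabulary needs only about 10 MB and fits comfortably in RAM, where lookups take microseconds.

        /* In-RAM vocabulary lookup, assuming a table sorted by word.
           Entry layout and the 2 KB-per-word figure are assumptions. */
        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char        word[32];
            const void *model;   /* per-word acoustic model data (~2 KB assumed) */
        } VocabEntry;

        static int cmp_entry(const void *key, const void *elem)
        {
            return strcmp((const char *)key,
                          ((const VocabEntry *)elem)->word);
        }

        const VocabEntry *find_word(const char *w,
                                    const VocabEntry *vocab, size_t n)
        {
            /* Binary search is fast only because the table is in RAM;
               paging a huge vocabulary from storage is what makes big
               systems sluggish or pushes them to the cloud. */
            return bsearch(w, vocab, n, sizeof *vocab, cmp_entry);
        }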

    “All of the best intelligent system technology is cloud-based today,” says Expect Labs’ Tuttle. “In the future, that is not necessarily going to be the case. In three to five years, it’s certainly possible that a large percentage of the computation done in the cloud today could conceivably be done locally on your device on the fly. Essentially, you could have an intelligent system that could understand you, provide answers on a wide range of topics, and fit into your pocket, without any connectivity at all.”

    Despite the advantages of cloud-based systems, a number of factors make speech systems residing locally on a device desirable. First and foremost, they do not require connectivity to function. If there is any chance that connectivity will be interrupted, local voice resources are preferable. In addition, local systems often offer significantly better performance because there is no network latency. This means that responses are almost instantaneous. Also, if all data remains on the device, there are no privacy concerns.

    Some companies, however, adopt hybrid configurations in an attempt to cover all contingencies. By combining cloud-based and local resources, the designer gets the best of both worlds. Cloud-based resources provide high accuracy in complex applications, and local resources ensure fast responses in simpler tasks. Hybrid designs also mitigate the issue of unreliable connectivity.

    Predictions of what voice systems will look like in the future indicate that there will be a place for each of these approaches. “The cloud will continue to play a big role for many years,” says Harper. “We will continue to push more advanced capability to the device, but as we do that, we will start inventing new things that we can do only in the cloud. But the cloud will always be a little bit ahead of what you can do on the device.”

    ..................
    “We will look back on this period we are in now, and the next five years, as the golden age of AI,” says Tuttle. “The numbers of advances we are seeing are remarkable, and they look like they will continue for the foreseeable future.”


    This is hard work; just consider that they did some important things in the time it took to get the S3. This was a worry for me, and what we have is going to keep us busy (learning more).
  5. jfieb



    QuickLogic and Sensory Partner to Provide Always-Listening, Deeply Embedded Voice Recognition at Ultra-Low Power


    SUNNYVALE, CA--(Marketwired - Jul 30, 2015) - QuickLogic Corporation (NASDAQ: QUIK)


    • Hardened system blocks specifically designed for voice processing applications provide extremely power efficient, always-listening voice capability

    What a lot of work to decide what part of their IP gets hardened, but in getting it right, what it allows...forget the how, but what it allows.

    • Less than 350 microAmps always-on voice trigger
    • Supports advanced voice processing, including voice recognition, without cloud connection requirement
    So this is incremental info....really, really nice.
    QuickLogic Corporation (NASDAQ: QUIK), the innovator of ultra-low power programmable sensor processing solutions, today announced that it is partnering with voice and vision technology industry leader Sensory Inc. to deliver TrulyHandsfree™ software, the world's most advanced voice recognition solution, deeply embedded in its new EOS™ S3 sensor processing platform. The hardened system blocks included in the EOS sensor processing SoC platform are designed to provide integrated voice trigger and command-and-control functionality at ultra-low power levels, enabling a vast array of voice-driven applications without the need for a connection to cloud services.

    Integrated logic allows digital input from Pulse Density Modulation (PDM) as well as Inter-IC Sound (I2S) microphones, and provides PDM to Pulse Code Modulation (PCM) conversion for processing with Sensory's TrulyHandsfree software. Also hard coded is Sensory's Low Power Sound Detector (LPSD) technology, which allows the speech recognizer to be suspended while an ultra-low power sound detector is running and listening for what could be speech.
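
    For the curious, the PDM-to-PCM step is conceptually simple even though production converters use multi-stage CIC and FIR filters. Here is a minimal single-stage decimator sketch in C; the clock rates are assumptions, and this is not QuickLogic's or Sensory's implementation.

        /* A PDM mic outputs a 1-bit stream at a few MHz; averaging the
           density of 1s over a window yields one PCM sample. Assumed
           rates: 3.072 MHz PDM in, 16 kHz PCM out (decimation 192). */
        #include <stdint.h>
        #include <stddef.h>

        #define DECIM 192   /* 3,072,000 / 16,000 */

        /* pdm: packed 1-bit samples, MSB first; produces n_pcm samples. */
        void pdm_to_pcm(const uint8_t *pdm, int16_t *pcm, size_t n_pcm)
        {
            for (size_t i = 0; i < n_pcm; i++) {
                int32_t ones = 0;
                for (size_t b = 0; b < DECIM; b++) {
                    size_t bit = i * DECIM + b;
                    ones += (pdm[bit >> 3] >> (7 - (bit & 7))) & 1;
                }
                /* Map the 0..DECIM density of 1s to a signed 16-bit sample. */
                pcm[i] = (int16_t)((ones * 2 - DECIM) * 32767 / DECIM);
            }
        }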

    The integrated system supports a wide range of features including highly noise robust always-on, always-listening fixed triggers, enrolled fixed triggers, user defined triggers and passphrases, and up to 20 phrase spotted commands that can be accurately detected in silent to extremely noisy environments. Embedding functionality in hardware dramatically reduces power consumption, enabling always-on voice triggering at a draw of less than 350 microAmps.
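
    The power win from that kind of gating is easy to illustrate: run a cheap energy check on every audio frame and wake the expensive recognizer only when a frame might contain speech. The sketch below is my simplification of the idea; Sensory's actual LPSD is proprietary and certainly more sophisticated.

        /* Energy-gated keyword spotting: the costly recognizer stays
           suspended until a cheap per-frame power check fires. Frame
           size and threshold are assumptions for illustration. */
        #include <stdint.h>
        #include <stddef.h>

        #define FRAME_LEN 160   /* 10 ms at 16 kHz */

        extern void run_keyword_spotter(const int16_t *frame, size_t n);

        static int frame_may_be_speech(const int16_t *frame, size_t n,
                                       uint32_t threshold)
        {
            uint64_t energy = 0;
            for (size_t i = 0; i < n; i++)
                energy += (int64_t)frame[i] * frame[i];
            return (energy / n) > threshold;   /* mean power vs. noise floor */
        }

        void process_frame(const int16_t *frame)
        {
            /* Average current draw is dominated by this cheap loop, which
               is how always-listening triggers stay in the microamp range. */
            if (frame_may_be_speech(frame, FRAME_LEN, 1000u))
                run_keyword_spotter(frame, FRAME_LEN);
        }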

    "QuickLogic's new EOS sensor platform is groundbreaking, and we are excited to have enhanced its capabilities by providing our TrulyHandsfree voice control technology complemented by our ultra-low power sound detector in the form of an embedded block," said Bernard Brafman, vice president of business development at Sensory.

    "Sensory is the industry leader in voice processing systems for mobile applications," said Dr. Frank A. Shemansky, Jr., senior director of product management at QuickLogic Corporation. "Integration of Sensory's TrulyHandsfree and LPSD technologies with the QuickLogic EOS sensor processing system provides unprecedented always-on voice capability, and will facilitate a new generation of voice-driven applications."


      Sensory's TrulyHandsfree firmware and hardware low power sound detector (LPSD) are included in QuickLogic's advanced EOS sensor processing SoC, which incorporates a revolutionary architecture that enables the industry's most advanced and compute intensive sensor processing capability at a fraction of the power consumption of competing technologies.

      Availability
      Initial samples of the EOS platform with integrated voice processing will be available in September 2015. For more information, please visit www.quicklogic.com/EOS.

      About QuickLogic
      QuickLogic Corporation is the leading provider of ultra-low power, customizable sensor processing platforms, Display, and Connectivity semiconductor solutions for smartphone, tablet, wearable, and mobile enterprise OEMs. Called Customer Specific Standard Products (CSSPs), these programmable 'silicon plus software' solutions enable our customers to bring hardware-differentiated products to market quickly and cost effectively. For more information about QuickLogic and CSSPs, visit www.quicklogic.com.
  6. jfieb



    want this on the same post as it's very, very important...

    QUIK/Sensory item of today...

    Supports advanced voice processing, including voice recognition, without cloud connection requirement

    Sensory blog.....

    Cloud vs. Local…or Something in Between

    Speech-recognition systems come in three flavors: cloud-based implementations, locally residing systems, and hybrids. To determine the right design for an application, issues to consider are processing/memory requirements, connectivity, latency tolerance, and privacy.

    The size of a speech-recognition system’s vocabulary determines the RAM capacity requirements. The speech system functions faster if the entire vocabulary resides in the RAM. If the system has to search the hard drive for matches, it becomes sluggish. Processing speed also impacts how fast the system can search the RAM for word matches. The more sophisticated the system, the greater the processing and memory requirements, and the more likely it will be cloud-based. However, this may not be so in the future.

    “All of the best intelligent system technology is cloud-based today,” says Expect Labs’ Tuttle. “In the future, that is not necessarily going to be the case. In three to five years, it’s certainly possible that a large percentage of the computation done in the cloud today could conceivably be done locally on your device on the fly. Essentially, you could have an intelligent system that could understand you, provide answers on a wide range of topics, and fit into your pocket, without any connectivity at all.”

    Despite the advantages of cloud-based systems, a number of factors make speech systems residing locally on a device desirable. First and foremost, they do not require connectivity to function. If there is any chance that connectivity will be interrupted, local voice resources are preferable. In addition, local systems often offer significantly better performance because there is no network latency. This means that responses are almost instantaneous. Also, if all data remains on the device, there are no privacy concerns.




    Really nice execution on this key part of the EOS. This is huge, in my opinion. What else is there to like....

    these snips......

    In the future

    In three to five years

    QUIK will have cool stuff to work on and add for the S4, S5... this stuff will be on the roadmap, and we don't want it to stop.


    Some companies, however, adopt hybrid configurations in an attempt to cover all contingencies. By combining cloud-based and local resources, the designer gets the best of both worlds. Cloud-based resources provide high accuracy in complex applications, and local resources ensure fast responses in simpler tasks. Hybrid designs also mitigate the issue of unreliable connectivity.

    Predictions of what voice systems will look like in the future indicate that there will be a place for each of these approaches. “The cloud will continue to play a big role for many years,” says Harper. “We will continue to push more advanced capability to the device, but as we do that, we will start inventing new things that we can do only in the cloud. But the cloud will always be a little bit ahead of what you can do on the device.”


  7. jfieb




    consider what K Morris has said...



    http://www.eejournal.com/archives/articles/20150405-customizability/

    By Kevin Morris…

    Instead of looking at what’s inside, we should be thinking about what jobs a chip is intended for. If we look at a device’s intended application, that gives us a much more realistic view of the “market” than if we look at the kinds of transistors and the type of architecture inside the chip that lets it accomplish its task. In fact, looking at the “how” can be a dangerous distraction from the “what” – which is where the real competition happens in semiconductors.

    So I will put up some interesting essays on the what of audio... the key thing to grasp is that it is THE USER INTERFACE of the future, BIG time, some say.

  8. jfieb



    use this one as a mental model of audio...

    Google Glass Needs A Full Audio UI
    July 30, 2014 — markwarren
    Google Glass’ UI requires the touchpad today, yet using it becomes painful after only 1-2 minutes! Glass needs a complete voice command UI to avoid “Glass shoulder” syndrome.

    Google Glass’ touchpad is OK when a verbal command might be awkward, but an audio UI becomes imperative when your hands are occupied (e.g. covered in dough while cooking, busy carrying things, or just relaxing). The Glass UI relies too heavily on the touchpad and this is, literally, painful. Tom Chi concisely explained why in his talk at “Mind the Product 2012″ (http://vimeo.com/55741515, 7m18s):

    [Tom describes the first test subjects trying a prototype gesture UI]
    “…and about a minute and a half in I started seeing them do something weird, they were going like this [Tom rolls his shoulders and kneads them], and I was like “What’s wrong with you?” and they responded “Well my shoulder sort of hurts.” and we learned from this set of experiments that if your hands are above your heart then the blood drains back and you get exhausted from doing this in about a minute or two and you can’t go more than five. “

    That’s why using Glass for more than a minute or two just isn’t practical right now; the touchpad is above your heart, yet much of the UI requires it.

    Hopefully future revisions of Glass will make the entire UI available via audio. A simple test for completeness is covering the display and then using Glass with just your voice and ears (and possibly head movements).

    So it will be like this for many devices of tomorrow.

  9. jfieb



    I want these snips together


    Sensory blog

    Wireless connections are unreliable. This is a HUGE issue. If the recognizer only works when there's a strong Internet connection, then the recognizer is going to fail A GREAT DEAL of the time. My prediction: over the next couple of years, the speech industry will come to realize that embedded speech recognition offers HUGE advantages over the common cloud-based approaches used today, and these advantages exist in not just accuracy and response time, but privacy too!



    QUIK/Sensory item of today...

    Supports advanced voice processing, including voice recognition, without cloud connection requirement

    This will help get the TAM that is possible. :)


    and the roadmap will go on, so plan on the S4 doing a whole lot more along the lines expressed in the Sensory blog. ;)

  10. jfieb



    the related part of the conference call...



    Robert West - Oak Grove Associates
    Okay. As relates to the presentation that you gave, Brian, I wanted to ask if you could give me a little more color on the QuickLogic/Sensory TrulyHandsfree program and partnership, and describe that in a little more detail if you don't mind?

    Brian Faith - VP of Worldwide Sales and Marketing
    Sure. So one thing I'll mention is that at the silicon level, I talked about the low power sound detector block; that's a Sensory piece of IP that we have developed in our device, with their permission, by licensing it. That's just the silicon level.

    If you look at the entire solution, what does that capability give customers? Basically, it allows people to do voice recognition for things like "OK Google, call Bob West": that entire phrase match of what I just said can be done deeply embedded, at very low power, in our device without waking up the apps processor.

    The result is that people can have more voice recognition enabled applications in their products, from phones to wearables, knowing that what is behind the silicon is actually a very reputable voice recognition company in Sensory.

    So it’s an ecosystem partner primarily, customers will continue to license that from them, knowing that it can run in very low power optimized hardware from QuickLogic.

    Robert West - Oak Grove Associates
    Has that been licensed by Sensory and – or any other customer?

    Brian Faith - VP of Worldwide Sales and Marketing
    Yes. The voice recognition software is called TrulyHandsfree. I think I had a footnote that said over 1 billion smartphones have already shipped with it, so they've definitely licensed that already for phones and watches. I believe the Moto 360 watch uses it for the "OK Google, what's my step count, what's the temperature today" type functionality, and some of the very large smartphone OEMs also have existing licenses.

    Robert West - Oak Grove Associates
    How is that implemented? Is it implemented in a unique part, or is it in the software of the SoC?

    Brian Faith - VP of Worldwide Sales and Marketing
    It's all based in software, and I think that's one of the beauties of the EOS platform: we're taking some of these elements that have already been running in production in real-world environments, and now we're optimizing them for even lower power consumption, which frees up more MIPS of computational capacity to do other algorithms that these OEMs are wanting to do.

    Robert West - Oak Grove Associates
    Okay. Very good. That sounds like a really unusually good program, and potentially a high-demand program.

    Brian Faith - VP of Worldwide Sales and Marketing
    I can tell you that's one of the most exciting elements of the platform for me personally, and also in the interactions with the press that I've been briefing.


    This is a game changer; it's on every smartphone, it's the UI. It's better than I had hoped for.
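
    To picture the offload Brian describes, here is a hypothetical sketch of the host-side flow; every function name is invented for illustration, and this is not QuickLogic's API.

        /* The apps processor sleeps while the sensor hub's trigger listens
           at sub-350-microamp levels; only a confirmed voice trigger wakes
           the AP. All names below are hypothetical. */
        #include <stdbool.h>

        extern void ap_enter_deep_sleep(void);
        extern bool wake_reason_is_voice_trigger(void);
        extern void launch_voice_assistant(void);

        void ap_main_loop(void)
        {
            for (;;) {
                ap_enter_deep_sleep();          /* AP is off; hub keeps listening */
                if (wake_reason_is_voice_trigger())
                    launch_voice_assistant();   /* e.g. handle "OK Google ..." */
            }
        }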

  11. jfieb




    + Brian Faith snip

    one of the most exciting elements of the platform for me personally, and also in the interactions with the press that I've been briefing.

    +

    Sensory has a roadmap

    The next five years will see embedded recognition with high performance noise cancelling and beamforming coming to the forefront, and Sensory will be leading this charge… and just like how Sensory led the way with the “always on” low-power trigger
