Saturday, March 11, 2017


  1. A nice read...

    NXP talking...


    To that end, NXP is working with Amazon and Google to speed up the types of calculations that happen locally, potentially allowing for multiple passes that isolate for different attributes.

    Analyzing the raw audio in the cloud is also an option, but this would also make responses take longer. Azevedo believes at least some of the speaker identification should happen on the device itself. “The more you can do locally, the better you can do [it] locally, the less time to send it to the cloud,” he says.
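
    To make the local-vs-cloud tradeoff concrete, here is a minimal sketch (my own illustration, not NXP's or Amazon's actual pipeline) of why a local first pass cuts response time: most audio is rejected cheaply on the device, and only confirmed wake events pay the cloud round-trip. The function names and latency figures are assumptions.

from typing import Optional
import time

CLOUD_ROUND_TRIP_S = 0.25   # assumed network + cloud-inference latency
LOCAL_CHECK_S = 0.02        # assumed on-device wake-word / speaker check

def local_wake_check(audio_frame: bytes) -> bool:
    """Stand-in for an on-device wake-word / speaker-ID pass
    (in a real system this would run on the MCU, DSP, or eFPGA)."""
    time.sleep(LOCAL_CHECK_S)
    return audio_frame.lower().startswith(b"alexa")   # toy classifier

def cloud_recognize(audio_frame: bytes) -> str:
    """Stand-in for full speech recognition in the cloud."""
    time.sleep(CLOUD_ROUND_TRIP_S)
    return "intent: turn_on_light"

def handle_audio(audio_frame: bytes) -> Optional[str]:
    # Local pass first: frames with no wake word never leave the device,
    # so they cost milliseconds instead of a full cloud round-trip.
    if not local_wake_check(audio_frame):
        return None
    return cloud_recognize(audio_frame)

if __name__ == "__main__":
    start = time.perf_counter()
    print(handle_audio(b"Alexa, turn on the light"))
    print(f"elapsed: {time.perf_counter() - start:.2f}s")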


    NXP should really, really want eFPGA in their MCU SoCs.


    More... it's kind of related to Dr. Saxe's talk.

    http://www.sensory.com/company/press-room/sensory-news/

    So it's a BIG picture item... a disparate source, BIG in MCU stuff, saying: more intelligence, more speed, locally...

    Nice.



  2. jfieb



    NXP says

    NXP is working with Amazon and Google to speed up the types of calculations that happen locally



    Speed up calculations = acceleration.

    At the cloud end, acceleration is done with FPGAs, like this snip...

    Intel offers customers solutions for accelerated computing with its data center leadership in Xeon, FPGAs...
  3. jfieb



    Things that go on the same page


    NXP talking...

    Azevedo believes at least some of the speaker identification should happen on the device itself. “The more you can do locally, the better you can do [it] locally, the less time to send it to the cloud,” he says.


    Brian Faith in the CC...

    The other thing I will mention is that the typical Alexa trigger, if you were to plug it in your house, like the light example: it goes up to the cloud to figure out what light you need to turn on and off, and there's a lag. And if you're walking through your house at night, you probably don't want to have a lag; you want to have the light turn on as you walk into the room. And so what we're doing is, we're running Alexa deeply embedded on our device; you don't have to go to the cloud, you can immediately do whatever you tell us to do, like turn the light on or off. And so, the fundamental difference here is that we're enabling more real-time, and we're enabling it to be done on battery.


    So for the IoT, let it sink in that QUIK has 2 entrants...

    1. eFPGA, which Dr. Saxe will give a keynote on at the IoT Summit

    2. LPSD and the EOS S3 to allow the smart home voice UX - LOCALLY, like NXP is speaking of.

    The adjacent possible of the rooms they have entered has a MUCH bigger perimeter for unexpected good things to happen, by a factor of 10x over what
    long-term holders are used to.

    So these big swings in price and all will become more and more common.

    L. Cohen said it best: "Like any dealer he was watching for the card that is so high and wild he'll never need to deal another."
    Some will glimpse them earlier than others... :)


    There are cards like that now in the adjacent possible that QUIK is sitting at the table of...
    And if eFPGA = inference on the edge? Local intelligence?

    It's a bingo for Leonard's phrase:

    "...so high and wild he'll never need to deal another."

    That's what the adjacent possible has to say.

    Will track along.


Friday, March 10, 2017


  1. Mar 10, 2017

    Dr. Timothy Saxe, QuickLogic's CTO, to Deliver Keynote at IoT Summit


    SUNNYVALE, CA -- (Marketwired) -- 03/10/17 -- QuickLogic Corporation (NASDAQ: QUIK), a developer of ultra-low power programmable sensor processing, display bridge and programmable logic solutions, announces that its Chief Technology Officer, Dr. Timothy Saxe, will be delivering a keynote address at the IoT Summit in Santa Clara. Dr. Saxe will speak on "Solving the SoC Design Dilemma for IoT Applications with Embedded FPGA (eFPGA)."

    Date/Time: March 16, 2017 at 10:45 a.m.
    Venue: Santa Clara Convention Center
    Exhibit Hours: March 16 and 17 from 11:00 a.m. to 4:00 p.m.
    Web: http://www.iot-summit.org/

    Dr. Saxe's presentation will discuss how SoC developers can deliver highly differentiated products for IoT applications quickly and cost-effectively through the use of eFPGA technology.

    In addition, QuickLogic will be demonstrating its latest ArcticPro™ eFPGA intellectual property technology that enables semiconductor companies and OEMs to reduce R&D costs, increase their revenue and gross margins, as well as address adjacent markets in a more scalable way.



    So this is the FIRST talk of its kind for QUIK: the CTO, who just KNOWS eFPGA, on the subject of its great IP.
    This is the ONE I'd like to see/hear.

    Anybody gonna be there?

    Thanks in advance.
    Daaang so exciting in many ways!

    Keep on using stuff like this to UP the $$ you put to their IP.

    And down the line IF eFPGA = inference on the device? WAAAY up.


Tuesday, March 7, 2017


    1. jfieb

      An important BIG picture item... it's about Amazon's smaller Tap unit... I will highlight the important part.

      Amazon Tap portable speaker gains always-listening option

      02/09/2017 at 3:00 PM by Brad Linder

      $130 Amazon Tap speaker. It also supports the Alexa voice service, but the Tap is a portable, battery-powered speaker that didn’t have an always-listening mode… until now.

      Previously you had to press a button before talking. But a new software update gives you the option of going totally hands-free.



      Of course, if you don’t like the idea of a speaker that’s always listening to you, it’s nice to have the option of using a model that only works when you press a button, and the Amazon Tap still has that. Hands-free mode can be turned on or off.

      Amazon’s Echo and Echo Dot were always designed to stay plugged into a wall outlet, but the Amazon Tap has a battery, which is one of the reasons it wasn’t always-listening.

      So it should come as no surprise that enabling the new feature will take a toll on battery life. But it’s not a very big toll: Amazon says you should get about 8 hours of battery life when using an Amazon Tap speaker with hands-free mode enabled, compared with 9 hours with the feature disabled.
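
      To put a number on that "toll," here is a quick back-of-the-envelope calculation using only Amazon's quoted runtimes (9 hours with the button, 8 hours hands-free); the arithmetic is mine, and no figures beyond those two are assumed.

# Back-of-the-envelope from Amazon's quoted Tap runtimes:
# 9 hours in button-press mode, 8 hours in hands-free mode, same battery.
hours_button_mode = 9.0
hours_hands_free = 8.0

# For a fixed battery, average power draw scales as 1 / runtime,
# so the relative increase from always-listening is:
extra_power_fraction = hours_button_mode / hours_hands_free - 1.0
print(f"hands-free mode adds ~{extra_power_fraction:.1%} to average power draw")
# -> about 12.5% more average power with always-listening enabled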

      And what would it do IF it switched to the QUIK EOS S3? ... 8 hrs to weeks or months? No, I don't know for sure, but WE want these things, as Brian Faith said...

      Sprinkled......

      QUIK, the adjacent possible is headed right your way!


    The Amazon move

    Previously you had to press a button before talking. But a new software update gives you the option of going totally hands-free.


    The CC with Rick's Q...

    Rick Neaton

    Okay. Can you explain how you and Sensory facilitate the deployment of Alexa, since it already is pretty useful in its current form? What – since I didn't get to Las Vegas – what was the demo that you used to prove your concept, and how does that make Alexa and other voice applications better?

    Brian Faith

    Yes, good question. So Sensory has voice recognition technology, and they're behind a lot of these trigger phrases, like 'Galaxy' and 'OK Google.' They also have the Alexa trigger, which is in Amazon's products. And the nice thing about what Amazon is doing from an ecosystem point of view is they're actually enabling other people to build products with their Alexa trigger, which is why we saw it all over CES, from fans to refrigerators to cars.

    So what we demonstrated at CES was a home automation example, where we had our EOS S3 on a very small board, running the Alexa trigger, and we actually hacked into the light in the hotel room. So when somebody came in, they could say, hey Alexa, turn the light on, and the light would come on, and say turn the light off, and it turns off. And so the reason why we're showing that demo is, yes,



    Alexa is out there today and it's functional. But what if it could be battery powered and you could sprinkle it around your house? You don't have to go buy a $50 Amazon battery [device] or a $250 Amazon Achelous [Echo] just to turn the light in your house.



    If the trend is going to be more computing at the edge, it's going to be sprinkling these things around with batteries, not line powered. That's the big difference of what we enable with EOS S3: not just the functionality from a voice point of view, it's the ability to do that in battery-powered applications that is the big difference.

    The other thing I will mention is that the typical Alexa trigger, if you were to plug it in your house, like the light example: it goes up to the cloud to figure out what light you need to turn on and off, and there's a lag. And if you're walking through your house at night, you probably don't want to have a lag; you want to have the light turn on as you walk into the room. And so what we're doing is, we're running Alexa deeply embedded on our device; you don't have to go to the cloud, you can immediately do whatever you tell us to do, like turn the light on or off. And so, the fundamental difference here is that we're enabling more real-time, and we're enabling it to be done on battery.
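
    To illustrate the difference Faith is describing, here is a minimal sketch of the local path: a small set of commands is recognized on the device and mapped straight to an action, so the light toggles with no network round-trip. The command strings, handler names, and print-based 'GPIO' are illustrative assumptions, not QuickLogic's actual demo code.

import time

# Commands recognized entirely on-device in this sketch.
LOCAL_COMMANDS = {
    "alexa turn the light on": ("light", True),
    "alexa turn the light off": ("light", False),
}

def set_light(on: bool) -> None:
    """Placeholder for driving a relay / GPIO on a real board."""
    print("light ->", "ON" if on else "OFF")

def handle_utterance(text: str) -> bool:
    """Return True if the command was handled locally (no cloud needed)."""
    action = LOCAL_COMMANDS.get(text.lower().strip(" .,!"))
    if action is None:
        return False   # unknown phrase: a real product might defer to the cloud
    device, state = action
    if device == "light":
        set_light(state)
    return True

if __name__ == "__main__":
    t0 = time.perf_counter()
    handle_utterance("Alexa turn the light on")
    print(f"local path: {time.perf_counter() - t0:.4f}s, no cloud round-trip")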


    Commentary


    Amazon, by taking a step toward losing the ON button for Tap, is foreshadowing things to come for Samsung... there will be a BIX button that is like the Tap's: use it to turn on listening mode.

    Amazon's Tap has a NO TAP mode. Samsung WILL lose the BIX button down the line.

    Subjective probabilities are shifting our way, and little things like this show that with this shift, which most will not notice, they are getting closer to
    wanting the EOS S3 in their NO TAP 2.

  1. jfieb




    It's nice to look for phrase matching from disparate sources.

    Here is one...

    Amazon talking about Tap

    Previously you had to press a button before talking. But a new software update gives you the option of going totally hands-free.


    Later this week in London, Dr. Saxe:

    smart home and IoT


    2:10pm  Break
    2:25pm  What if Your House Understood You?
    If your hands are busy, or dirty, you can ask a family member to turn on a light, or oven. What if you could talk to your house and appliances?
    Speaker:
    Dr. Timothy Saxe, CTO, QuickLogic

    So it's voice UX in focus: the voice-controlled connected home.


    Amazon's Tap needs the LPSD of the EOS S3.
    Samsung's Galaxy needs an LPSD.
    Waze needs an LPSD.
    Waze needs to go totally hands-free.
    etc.

  2. jfieb




    So S. Johnson says expect multiples... here is one attempt at it...

    Sensory, DSP Group, STMicroelectronics Produce New Voice Activation SiP
    Posted on March 2, 2017


    Semiconductor maker STMicroelectronics, wireless chipset provider DSP Group, and voice and speech technology specialist Sensory Inc. have announced a jointly-developed System-in-Package device that could enable sophisticated voice interaction in new products.

    Each company has brought technology to the table to help make the device a uniquely capable voice interaction solution. A microphone built on STMicroelectronics' MEMS technology packs considerable performance into a low-power, light package, aided by DSP Group's processor technology, which can detect certain instructions without activating the central processing system – a capability that both saves power and enables faster response times, the companies say. Sensory, meanwhile, offers the speech recognition technology enabling the system to 'understand' user commands.

    The development follows previous collaboration between Sensory and DSP Group, with the companies having also worked with Vesper to produce a low-power, rugged voice activation system unveiled at this year’s Consumer Electronics Show. Such technologies appear to be facing rising demand as voice interaction rises in prominence as a key interface of the Internet of Things.


    So it's perfect, S. Johnson. They have used what they have on the bench.
    For casual readers... it is NOT an SoC.

    It has NO MCU.

    The algos run in a DSP. Maybe 10x the power of the LPSD in the EOS S3.

    It won't do for the IoT(?)

    So for any DSP maker this is what they will do for a while...

    Nice. :)

  3. jfieb




    Sensory says.....


    STMicroelectronics, DSP Group and Sensory Create Keyword-Smart Microphone for Voice-Controlled Devices
    • Ultra-low power voice processing and MEMS microphone circuitry enables sound detection and keyword recognition in a miniaturized single package
    • Combines ST’s MEMS microphone and packaging know-how with DSP Group’s voice-processing expertise and Sensory’s voice-recognition firmware

    Barcelona, February 27, 2017 – STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications and a top MEMS supplier, DSP Group Inc. (NASDAQ: DSPG), a leading global provider of wireless chipset solutions for converged communications, and Sensory Inc., the leader in voice interface and keyword-detect algorithms, have revealed details for a highly power-efficient, voice-detecting and -processing microphone that delivers keyword-recognition capabilities in a compact package.

    The small System-in-Package (SiP) device integrates a low-power ST MEMS microphone enabled by DSP Group’s ultra-low power voice-processing chip and Sensory’s voice-recognition firmware. The solution leverages ST’s advanced packaging technology to achieve a powerful yet lightweight package, extremely long battery runtimes, and advanced functionality.

    Although typical wake-on-sound microphones eliminate the need for users to touch the device to wake it from sleep mode, they suffer from limited processing power and wake the main system processor to recognize the received instruction. Using the powerful computation capabilities from DSP Group, ST’s microphone detects and recognizes instructions without waking the main system, enabling energy-efficient, intuitive and seamless interactions for users speaking to voice-operated appliances like smart speakers, TV remotes and smart home systems.

    The new microphone solution taps DSP Group's HDClear ultra-low power audio processing chip to significantly reduce energy consumption, extending the lifetime of battery-operated equipment for several years without the need to recharge or replace the battery. Responses to voice commands are also faster because the system acts on the instruction immediately without first having to recognize it.
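
    For readers who want to see what recognizing the instruction "without waking the main system" means structurally, here is a schematic sketch of the staged pipeline the release describes: an always-on sound detector, keyword spotting on the microphone/DSP side, and the host processor woken only when there is something to act on. The energy threshold, keyword list, and function names are assumptions for illustration, not the actual HDClear or Sensory APIs.

from typing import Optional

ENERGY_THRESHOLD = 500              # assumed RMS level for "sound present"
KEYWORDS = ("alexa", "ok google")   # illustrative keyword set

def sound_detected(samples: list[int]) -> bool:
    """Stage 1: ultra-low-power wake-on-sound (simple energy check)."""
    if not samples:
        return False
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return rms > ENERGY_THRESHOLD

def spot_keyword(transcript_guess: str) -> Optional[str]:
    """Stage 2: keyword spotting on the mic/DSP side. A real implementation
    runs a small acoustic model; this toy version takes a pre-decoded string
    purely to show the control flow."""
    text = transcript_guess.lower()
    return next((kw for kw in KEYWORDS if kw in text), None)

def wake_host(keyword: str) -> None:
    """Stage 3: only now is the main application processor woken."""
    print(f"host processor woken by keyword: {keyword!r}")

def process(samples: list[int], transcript_guess: str) -> None:
    # Later stages cost more power, so they run only when the cheaper,
    # earlier stages fire; that is the point of the SiP design.
    if not sound_detected(samples):
        return          # silence: everything stays asleep
    keyword = spot_keyword(transcript_guess)
    if keyword is None:
        return          # sound but no keyword: host stays asleep
    wake_host(keyword)

if __name__ == "__main__":
    process([800, -750, 900, -820], "Alexa, turn the light on")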

    So maybe I'm wrong on the power of their LPSD?


    “Unlike previous existing solutions, this microphone doesn’t just listen to voices – it immediately understands the commands, too, without using the power and computation resources of the main processor,” commented Andrea Onetti, MEMS and Sensors Division General Manager, STMicroelectronics. “This smart integration step is a key enabler for the voice interfaces that are being added to IoT objects and applications, including those contributing to Industry 4.0.”

    “As voice becomes the default user interface, more and more innovative products embrace smart voice-processing technology. Our solution combines a small footprint, high integration, and the low-power consumption needed to enable seamless and effective voice user interfaces in battery-operated devices,“ said Ofer Elyakim, CEO of DSP Group. “Collaboration with industry leaders ST and Sensory on this smart microphone brings to market a powerful yet energy-efficient solution with best-in-class performance, which makes it a perfect match for any smart system that needs to incorporate high-quality voice capabilities.”

    Todd Mozer, CEO of Sensory, added, “Voice activation has the potential to transform the way people interact with all kinds of electronic equipment in the home, while on the go, or at work. ST’s new highly integrated solution, leveraging our latest-generation firmware, is an important enabler for OEMs seeking to deliver a natural and fluid user experience.”

    First prototypes of ST's new command-recognition microphone will be ... with volume production in early 2018.

    The timetable here makes QUIK look top-notch/nimble in time to market? Perhaps being small, intensely focused, and connected to the immersive experiences has been a factor here?
    Use this as a mental model of solid execution by QUIK: they listened in the coffee house of sensor fusion very well.

    This is at least the 2nd one like this. Knowles has this too: mic, DSP, Sensory. Expect any mic guy to do this.
