Saturday, March 11, 2017


  1. A nice read...

    NXP talking...


    To that end, NXP is working with Amazon and Google to speed up the types of calculations that happen locally, potentially allowing for multiple passes that isolate for different attributes.

    Analyzing the raw audio in the cloud is also an option, but this would also make responses take longer. Azevedo believes at least some of the speaker identification should happen on the device itself. “The more you can do locally, the better you can do [it] locally, the less time to send it to the cloud,” he says.
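    The split NXP describes (a few cheap passes locally, heavier work in the cloud) can be sketched roughly like this. All names here are hypothetical stand-ins for illustration, not NXP or Amazon APIs.

    ```python
    # Hypothetical sketch of a local-first voice pipeline: run the
    # latency-sensitive passes (wake word, speaker ID) on the device and
    # escalate to the cloud only when a pass cannot resolve locally.

    LOCAL_SPEAKERS = {"alice", "bob"}  # profiles enrolled on-device (assumed)

    def detect_wake_word(frame):
        """Stand-in for an on-device keyword spotter."""
        return frame.startswith(b"alexa")

    def identify_speaker(frame):
        """Stand-in for an on-device speaker-identification pass."""
        for name in LOCAL_SPEAKERS:
            if name.encode() in frame:
                return name
        return None

    def handle_audio(frame):
        """Multiple local passes, each isolating one attribute of the audio."""
        if not detect_wake_word(frame):
            return "ignored"           # nothing ever leaves the device
        speaker = identify_speaker(frame)
        if speaker is not None:
            return "local:" + speaker  # resolved locally, no cloud round trip
        return "cloud"                 # unknown voice: send the raw audio up
    ```

    The point of the sketch is the ordering: each local pass that succeeds removes a cloud round trip from the critical path.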


    NXP should really, really want eFPGA in its MCU SoCs.


    More... it's kind of related to Dr. Saxe's talk.

    http://www.sensory.com/company/press-room/sensory-news/

    So it's a BIG-picture item... a disparate source, BIG in MCU stuff, saying: more intelligence, more speed, locally...

    Nice.



  2. jfieb



    NXP says

    NXP is working with Amazon and Google to speed up the types of calculations that happen locally



    Speeding up calculations = acceleration.

    At the cloud end, acceleration is done with FPGAs, as in this snip...

    Intel offers customers solutions for accelerated computing with its data center leadership in Xeon, FPGAs...
  3. jfieb



    Things that go on the same page:


    NXP talking...

    Azevedo believes at least some of the speaker identification should happen on the device itself. “The more you can do locally, the better you can do [it] locally, the less time to send it to the cloud,” he says.


    Brian Faith on the conference call:

    The other thing I will mention is that the typical Alexa trigger, if you were to plug it in in your house, like a light example: it goes up to the cloud to figure out what light you need to turn on and off, and there's a lag. And if you're walking through your house at night, you probably don't want to have a lag; you want to have the light turn on in the room. And so what we're doing is, we're running Alexa deeply embedded on our device. You don't have to go to the cloud; we can immediately do whatever you tell us to do, like turn the light on or off. And so, the fundamental difference here is that we're enabling more real-time, and we're enabling it to be done on battery.
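    A toy way to see the lag Faith is talking about: put an assumed cloud round-trip time next to an assumed on-device inference time. Both numbers are illustrative assumptions for the sketch, not measured figures from NXP or QuickLogic.

    ```python
    # Illustrative latency budget for the "turn the light on" example.
    # The constants are assumptions, not measurements.

    CLOUD_ROUND_TRIP_MS = 250  # assumed network transit + cloud NLU time
    LOCAL_INFERENCE_MS = 15    # assumed deeply embedded recognizer time

    def command_latency_ms(local):
        """Utterance-to-action time for the light-switch example."""
        if local:
            return LOCAL_INFERENCE_MS
        return LOCAL_INFERENCE_MS + CLOUD_ROUND_TRIP_MS

    print(command_latency_ms(local=False))  # cloud path: 265 ms
    print(command_latency_ms(local=True))   # local path: 15 ms
    ```

    Whatever the real numbers are, the structure holds: the cloud path always pays the network round trip on top of the local work, which is the lag Faith says the deeply embedded approach removes.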


    So for the IoT, let it sink in that QUIK has two entrants...

    1. eFPGA, which Dr. Saxe will give a keynote on at the IoT summit.

    2. LPSD and the EOS S3, to enable the smart-home voice UX LOCALLY, as NXP is speaking of.

    The adjacent possible of the rooms they have entered has a MUCH bigger perimeter for unexpected good things to happen, by a factor of 10x over what long-term holders are used to.

    So these big swings in price and all will become more and more common.

    L. Cohen said it best: "Like any dealer he was watching for the card that is so high and wild he'll never need to deal another."
    Some will glimpse them earlier than others...:)


    There are cards like that now in the adjacent possible that QUIK is sitting at the table of.......
    And if eFPGA = inference on the edge? Local intelligence?

    It's a bingo for Leonard's phrase:

    "...so high and wild he'll never need to deal another."

    That's what the adjacent possible has to say.

    Will track along.

