Saturday, December 3, 2016

It goes like this....


    jfieb (Well-Known Member) wrote:



    Facebook web page...

    https://code.facebook.com/posts/196146247499076/delivering-real-time-ai-in-the-palm-of-your-hand/


    Delivering real-time AI in the palm of your hand
    We've developed a new deep learning platform on mobile so it can — for the first time — capture, analyze, and process pixels in real time, putting state-of-the-art technology in the palm of your hand. This is a full-fledged deep learning system called Caffe2Go, and the framework is now embedded into our mobile apps. By condensing the size of the AI model used to process images and videos by 100x, we're able to run various deep neural networks with high efficiency on both iOS and Android. Ultimately, we were able to provide AI inference on some mobile phones at less than 1/20th of a second, essentially 50 ms — a human eye blink happens at 1/3rd of a second or 300 ms.


    - Our Applied Machine Learning group had been working toward building an AI engine that would run on mobile devices.
    - The Camera team had a clear understanding of the user needs.

    Along with the contribution of many others, these teams produced a best-in-class solution that runs highly optimized neural networks live on mobile devices. We'll explain how we thought about and developed the applicable technologies, starting with Caffe2Go.
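    The passage above comes down to one technical move: shrink the trained model until the whole forward pass fits comfortably on a phone. As a rough illustration of the kind of technique involved, here is a minimal Python sketch of post-training int8 weight quantization. This is only an assumption about the general approach; Facebook does not say Caffe2Go works exactly this way, and quantization alone buys roughly 4x, so their 100x figure presumably stacks several tricks (pruning, smaller architectures, and so on).

# Minimal sketch of post-training weight quantization, one common way to
# shrink a neural network for on-device inference. Illustration only --
# not a description of Facebook's actual Caffe2Go pipeline.
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0        # largest value maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights, to check the error introduced."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)   # a toy weight matrix
    q, scale = quantize_int8(w)
    print("float32 bytes:", w.nbytes)                   # 262144
    print("int8 bytes:   ", q.nbytes)                   # 65536, i.e. 4x smaller
    print("max abs error:", np.abs(w - dequantize(q, scale)).max())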



    It's a parlor trick for now. Key points to consider...

    1. It's Facebook moving into AI, and they want it ON THE DEVICE.

    2. Regular readers of this thread will see Sensory Inc. in this, i.e., it HAS TO BE a neural network to get the small footprint. Facebook is the FIRST BIG DOG to say what Todd M (Sensory CEO) has been saying for a long, long time:
    intelligence ON THE DEVICE.


    IF anyone can build a ubiquitous NNLE (neural network learning engine) that can reside on a mobile device, it is Dr. Saxe; they have ALL the bits and pieces needed to do so. The use of an FPGA as a very good place to run inference is a new tidbit of information this year... someone just has to try it on their benches, that is, run the inference on an FPGA designed for low power so it can be done on the device. With the IP $$, I am happy that Dr. Saxe will get to make that Eos S4.
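    To make that concrete, the workload an FPGA would actually run is little more than fixed-point multiply-accumulate. Below is a hypothetical Python/NumPy sketch of the integer dense-layer kernel that low-power fabric handles well; nothing here describes the actual Eos part or any real design, it is only an illustration of why FPGAs and inference fit together.

# Hypothetical sketch of the integer multiply-accumulate kernel at the heart
# of low-power inference -- the kind of fixed-point arithmetic an FPGA fabric
# is good at. Not a description of any actual device.
import numpy as np

def int8_dense_layer(x_q, w_q, x_scale, w_scale):
    """One quantized dense layer: accumulate in int32 (as a hardware MAC
    array would), then apply a single rescale back to float."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)    # int32 accumulators
    return acc.astype(np.float32) * (x_scale * w_scale)  # one rescale per layer

if __name__ == "__main__":
    rng = np.default_rng(0) if hasattr(np, "default_rng") else np.random.default_rng(0)
    x_q = rng.integers(-127, 128, size=(1, 64), dtype=np.int8)   # quantized activations
    w_q = rng.integers(-127, 128, size=(64, 10), dtype=np.int8)  # quantized weights
    y = int8_dense_layer(x_q, w_q, x_scale=0.02, w_scale=0.01)
    print(y.shape)  # (1, 10): one forward pass through a toy dense layer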

    I have written in my journal that it goes like this...

    FPGA = inference (?)

    Facebook wants inference on the mobile device.

    The IP around an FPGA purposed for the mobile device just moved up substantially in value?

    Does anyone see a hole in this line of thinking?

    Thanks in advance.
