Friday, January 19, 2018

Consider that intelligence on the edge is discussed in so many places.  The original AI crystallization occurred when Google beat the best Go players.
What was neat is that almost no one saw it coming.


CNNs - convolutional neural networks - should be the way it happens on the edge.

Why?  Small footprint.
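How small? A back-of-the-envelope sketch in Python (the layer shapes below are my own illustrative assumptions, not any shipping network): even a several-layer CNN quantized to int8 needs only tens of kilobytes for its weights.

# Count the weights of a toy CNN and the bytes needed at int8 precision.
# Layer shapes are illustrative assumptions, not any shipping model.
layers = [
    # (in_channels, out_channels, kernel_h, kernel_w)
    (3, 16, 3, 3),
    (16, 32, 3, 3),
    (32, 64, 3, 3),
]
params = sum(cin * cout * kh * kw + cout   # weights + biases
             for cin, cout, kh, kw in layers)
print(params, "parameters")         # 23584 parameters
print(params / 1024, "KB at int8")  # ~23 KB: one byte per weight

That is a model that fits comfortably in on-chip memory, which is the whole point for the edge.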

Intelligence on the edge WILL be ubiquitous.  If it's ubiquitous, it will get hardened...

Some proof of that?

Yes, here is a snip of text...


Very, very important...

...Most notably, Apple’s neural engine and HiSilicon’s neural processing unit lead the pack with already-shipping silicon. These new IP blocks are hardware accelerators for convolutional neural network inferencing. As opposed to what we call “deep learning,” which is the training aspect of CNNs, inferencing is the execution of already-trained models.

Use-cases such as image classification are very latency- and performance-sensitive, so the industry has evolved towards edge-device inferencing, meaning a device such as a smartphone locally has a trained neural network model and does the inferencing and classification locally on the device, without having to upload images or other content to the cloud. This vastly improves latency and makes use-cases such as instantaneous camera scene recognition viable (as used on Huawei’s Mate 10 camera).


As with other IP blocks (image and video encoders/decoders) which accelerate and offload workloads from more general-purpose blocks such as the CPU and GPU in a much more specialized, and thus faster and more efficient, way, neural network inference can also be offloaded. This is what we call neural network accelerators, such as CEVA’s NeuPro.
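To make the training-versus-inference distinction concrete, here is a minimal sketch in plain NumPy (not any vendor's API): inference is just the forward pass of a network whose weights were already fixed by training. No gradients, no backpropagation.

import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution (really cross-correlation,
    as in most CNN frameworks); for illustration only."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# In practice the trained weights would be loaded from disk;
# these random values are stand-ins.
weights = np.random.randn(3, 3)
image = np.random.randn(28, 28)

feature_map = np.maximum(conv2d(image, weights), 0)  # conv + ReLU
print(feature_map.shape)  # (26, 26)

An accelerator block does exactly this forward pass, just in dedicated hardware instead of on the CPU.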



If you are still with the technology here... make the jump; you can do it.
Nobody has been talking about running inference on the edge on eFPGA.


They already run inference in the cloud on FPGAs, so why not on the edge?

One reason is that the CEVAs of the world DO NOT have it on their bench to even try it.
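There is no conceptual barrier, though. A quantized CNN layer reduces to regular integer multiply-accumulate (MAC) loops, which is exactly the kind of structure FPGA fabric lays out well in parallel. A hedged sketch of the idea, in NumPy (my own illustration; nothing here is QUIK's actual S4 or eFPGA toolflow):

import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a simple symmetric scheme."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def conv2d_int8(image_q, kernel_q):
    """Integer-only valid-mode convolution; accumulate in int32,
    as fixed-point hardware would."""
    kh, kw = kernel_q.shape
    oh = image_q.shape[0] - kh + 1
    ow = image_q.shape[1] - kw + 1
    acc = np.zeros((oh, ow), dtype=np.int32)
    for i in range(oh):
        for j in range(ow):
            window = image_q[i:i+kh, j:j+kw].astype(np.int32)
            acc[i, j] = np.sum(window * kernel_q.astype(np.int32))
    return acc

img_scale, w_scale = 0.05, 0.02
image_q = quantize(np.random.randn(28, 28), img_scale)
kernel_q = quantize(np.random.randn(3, 3), w_scale)

acc = conv2d_int8(image_q, kernel_q)
feature_map = acc.astype(np.float32) * (img_scale * w_scale)  # dequantize
print(feature_map.shape)  # (26, 26)

Every inner step is an int8 multiply feeding an int32 accumulator - no floating point needed until the very end, which is why this maps so naturally onto programmable logic.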

Someone WILL try it.  Maybe QUIK is having some fun on their bench, as that is exactly the hardware they have.
Is it significant that QUIK has NOT said one word about this?

Did they say anything while they were making the FFE?
They can keep a secret very well.



Inference on the edge puts QUIK's IP at a much higher valuation than we usually think of.  It is a multiple of the current market value.
I have my subjective multiplier, but it's my own and won't help anyone else. :)


QUIK, can you run inference on eFPGA on the S4?

Thanks in advance for the consideration. jfieb.
