Sunday, December 4, 2016

A snip from a VERY geeky read....

http://www.nallatech.com/fpga-acceleration-convolutional-neural-networks/

Conclusion
The unique flexibility of FPGA fabric allows the logic precision to be adjusted to the minimum that a particular network design requires. By limiting the bit precision of the CNN (Convolutional Neural Network) calculation, the number of images processed per second can be significantly increased, improving performance and reducing power.

The non-batching approach of an FPGA implementation allows for object recognition in 9 milliseconds (a single frame period), ideal for situations where low latency is crucial, e.g. object avoidance. This permits images to be categorized at a frame rate greater than 100 Hz.
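The two numbers in that paragraph are consistent: because frames are processed one at a time rather than batched, latency per frame is the reciprocal of throughput, and 9 ms per frame works out to about 111 frames per second:

```python
latency_s = 0.009                      # 9 ms per frame, from the article
frames_per_second = 1 / latency_s      # reciprocal, since there is no batching
# about 111 Hz, which matches the "greater than 100 Hz" claim
```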

The intrinsic scalability demonstrated by our FPGA implementation can be utilized to implement complex CNNs on increasingly smaller and lower-power FPGAs at the expense of some performance. This allows less demanding applications to be implemented on extremely low-power FPGA devices, particularly useful for embedded solutions, e.g. near-sensor computing.

Somebody will take that FPGA IP from GloFo and run various CNNs on it and see what they get.

IF FPGAs can become the solution for on-device inference, the value of the related IP goes up (A LOT).
