Thursday, December 8, 2016

snips

As companies like Facebook and Google continue to push neural networks onto smartphones, phone makers will start building hardware into these devices that can run neural networks with even greater speed.


Facebook
Ultimately, we were able to provide AI inference on some mobile phones at less than 1/20th of a second.

Apple

By slimming down the neural network, iPhones and iPads can identify faces and locations in photos, or understand changes in a user’s heart rate, without needing to rely on remote servers. Keeping these processes on the phone makes the features available anywhere, and also ensures data doesn’t need to be encrypted and sent over wireless networks.


In the cloud, this stuff is done with FPGA coprocessors.

Wednesday, December 7, 2016

eFPGA

  1. jfieb (Well-Known Member):




    This thread is for working out and sorting info. It can be skipped as a digression by those pressed for time.

    I have added this link to my favorites...and am reading there today.

    http://iotdesign.embedded-computing.com/search/#quicklogic

    The web page title? iotdesign.embedded

    iotdesign.embedded

    I will track items along and see if QUIK eFPGA material starts to show up there.





    http://embedded-computing.com/pdfs/Altera.Fall03.pdf
    Code profiling can help identify the functions that consume the majority of the MIPS and can provide options, for example, such as accelerating the code by use of a hardware coprocessor. Not all functions are appropriate for off-loading to a coprocessor. First, the goal is to identify a group of algorithms that together occupy more than half of the processing load. Second, the identified group of algorithms should be clustered together so that once data reaches the coprocessor, there is no processor dependency in the calculation until the processing is complete and the coprocessor can return the result to the DSP. A third criterion is that the processing be straightforward to implement in hardware. A simple definition is that the algorithm be heavily looped, thus implying a very repetitive computational structure. 
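The profiling workflow that snippet describes can be sketched in plain Python with the standard-library profiler. The FIR filter and control function below are hypothetical stand-ins for DSP code, not anything from the Altera article:

```python
import cProfile
import io
import pstats

# Hypothetical DSP-style workload. The heavily looped FIR filter is the
# kind of repetitive, self-contained function the snippet says is a good
# coprocessor candidate; the branchy control code is a poor one.
def fir_filter(samples, taps):
    out = []
    for i in range(len(taps) - 1, len(samples)):
        acc = 0.0
        for j, t in enumerate(taps):
            acc += t * samples[i - j]
        out.append(acc)
    return out

def control_logic(n):
    return sum(i % 3 for i in range(n))

def main():
    samples = [float(i % 17) for i in range(20000)]
    taps = [0.1] * 32
    fir_filter(samples, taps)
    control_logic(20000)

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Rank functions by cumulative time; fir_filter should dominate,
# flagging it as the candidate to off-load to a coprocessor.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

The same triage applies whatever the target fabric is: find the looped hot spots, check they can run without round-trips to the host, then decide if they map cleanly to hardware.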


    So consider that while QUIK is at the other end of it all, the process should be the same.

    One other thing...

    I had not considered that a BIG DSP maker may have motivation to put in an FPGA to offload the DSP.

    While much of the material that might get placed here is for the cloud end, as we go forward look for key tidbits that say they want the same thing on the device....Facebook is exploring that already and says so loud and clear.

    Consider discarding the nagging thought that QUIK doesn't have FPGA IP worth a dime.

    That IP is worth more each day that links FPGA to AI. There will be no fire sale.

Tuesday, December 6, 2016

Don't talk about the S4?

My mention of the Eos S4 is not to detract from the Eos S3 at all, or from the potential of the IP effort, or to suggest that there is no hope save the S4.

I am considering that there are some shifts in the subjective probabilities, not at all dreams, that will allow the Eos S4 to happen.
So as a result of these shifts in the subjective probabilities (Bayesian analysis), I will put some related info onto this thread, and it can be skipped by those who consider it a
distraction from the here and now that you focus on. A lot of the material I put up is just a way of passing the time.


Who here wrote about any IP angle before we read about it?

PMCW has suggested there may be other things that come to pass....

There is material on this forum that says FPGA IP has moved into the limelight, i.e., that it does have real value. QUIK has been down so long that many think it has NO IP worth a dime.

That is not an accurate conclusion. Recent unfolding events suggest that FPGA IP is moving up in value........
and moving up fast in ways NOBODY saw coming. Not Altera, not Xilinx, not QUIK.

One basic key to the discipline of Bayesian analysis is to realize when a shift in the subjective probabilities occurs. It is subjective because not everyone sees it the same.
For me there has been a very real shift in those probabilities, and I do act on them; to each their own. :)
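For readers unfamiliar with the term, the "shift in subjective probabilities" is just Bayes' rule applied with personal priors. A minimal sketch, where the 20% prior and the likelihoods are purely illustrative numbers of mine, not a claim about QUIK:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# All numbers below are illustrative priors/likelihoods, not data.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Hypothesis H: "the FPGA IP has real strategic value."
# Start at a 20% prior; assume the evidence (industry news tying FPGAs
# to AI inference) is 3x as likely to appear if H is true.
posterior = bayes_update(0.20, 0.60, 0.20)
print(round(posterior, 3))  # 0.429 -- one piece of evidence more than doubles the belief
```

The point of the discipline is only this: when new evidence arrives, update the number rather than the story.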

I put some of the things I find into a blog. Not well read, but for the first time China shows a light green on the daily visitor map; never in the past has it had any color.
Thanks for taking the time from way over there in China. :-)

Eos S4


  1. jfieb (Well-Known Member):

    Always keep something to look forward to.

    We should expect this for QUIK's next Eos...


    Also, note that this report forecasts the first 22nm FD-SOI products in 2H-17.

    The Eos S4 will be one of them, and Brian F.'s blog will apply to QUIK's own effort also.
    "Everywhere" will mean the Eos S4 also.

    http://blog.quicklogic.com/corporate/the-everywhere-fpga/#sthash.DlNWDYQ6.dpbs



    Will '17 become a blur of events, each of which changes the subjective probabilities?

Sunday, December 4, 2016

A snip from a VERY GEEKY read....

http://www.nallatech.com/fpga-acceleration-convolutional-neural-networks/

Conclusion
The unique flexibility of FPGA fabric allows the logic precision to be adjusted to the minimum that a particular network design requires. By limiting the bit precision of the CNN (Convolutional Neural Network) calculation, the number of images that can be processed per second can be significantly increased, improving performance and reducing power.

The non-batching approach of an FPGA implementation allows for object recognition in 9 milliseconds (a single frame period), ideal for situations where low latency is crucial, e.g. object avoidance. This permits images to be categorized at a frame rate greater than 100 Hz.

The intrinsic scalability demonstrated by our FPGA implementation can be utilized to implement complex CNNs on increasingly smaller and lower-power FPGAs at the expense of some performance. This allows less demanding applications to be implemented on extremely low-power FPGA devices, particularly useful for embedded solutions, e.g. near-sensor computing.
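The bit-precision and frame-rate claims in that conclusion can be illustrated with a toy sketch. The symmetric quantizer below is a generic example of reduced-precision weights, not Nallatech's implementation:

```python
import numpy as np

# Symmetric linear quantizer (illustrative only). Fewer bits per weight
# means smaller multipliers, so an FPGA can pack more parallel MAC units
# into the same fabric -- the snippet's performance/power argument.
def quantize(weights, bits):
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(weights))
    return np.round(weights * scale) / scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
w8 = quantize(w, 8)
print("max abs error at 8 bits:", float(np.max(np.abs(w - w8))))

# The latency figure quoted above: 9 ms per frame implies
# 1 / 0.009 ≈ 111 frames per second, i.e. the "> 100 Hz" claim.
print(f"{1.0 / 0.009:.0f} frames/s")
```

Dropping from 8 bits to 4 or fewer shrinks each multiplier further, which is exactly the precision/throughput trade the article is exploiting.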

Somebody will take that FPGA IP from GloFo and run various CNNs on it and see what they get.

IF FPGAs can become the solution for on-device inference.....the value of the related IP goes up (A LOT).

A nice read:

http://semimd.com/blog/tag/globalfoundries/

some snips


NXP uses 28nm FD-SOI for its i.MX 7 and i.MX 8 processors.


Itow said she has talked to more companies that are looking at FD-SOI, and some of them have teams designing products. “So we are seeing more serious activity than before,” Itow said. “I don’t see it being the main Qualcomm process for high-volume products like the applications processors in smartphones. But I do see it being looked at for IoT applications that will come on line in a couple of years. And these things always seem to take longer than you think,” she said.


GlobalFoundries claims it has more than 50 companies in various stages of development on its 22FDX process, which enters risk production early next year, and the company plans a 12nm FDX offering in several years.


That may open up companies “with a lower cost engineering team” in India, China, Taiwan, and elsewhere to “go off in a different direction” and experiment with FD-SOI, Gwennap said.


“If you believe the future is about mobility, about more communications and low power consumption and cost sensitive IoT chips where analog and RF is about 50 percent of the chip, then FD-SOI has a good future.

NXP says...



“NXP's next generation of i.MX multimedia applications processors are leveraging the benefits of FD-SOI to achieve both leadership in power efficiency and scaling performance-on-demand for automotive, industrial and consumer applications,” said Ron Martino, vice president, i.MX applications processor product line at NXP Semiconductors. “GLOBALFOUNDRIES’ 12FDX technology is a great addition to the industry because it provides a next generation node for FD-SOI that will further extend planar device capability to deliver lower risk, wider dynamic range, and compelling cost-performance for smart, connected and secure systems of tomorrow.”

On going to QCOM:

"...will allow us to further enhance our leadership positions, and expand the already strong partnerships with our broad customer base, especially in automotive, consumer and industrial IoT and device level security," said Rick Clemmer, NXP Chief Executive.