Thursday, December 29, 2016

Some app numbers here....

Facebook & Google dominate the list of 2016’s top apps
Posted 19 hours ago by Sarah Perez (@sarahintampa)


Mobile applications from Facebook and Google dominated the new list of the year’s top apps released today by Nielsen. Not surprisingly, Facebook again grabbed the number one spot on the list, with more than 146 million average unique users per month, and 14 percent growth over last year. In fact, Facebook scored several spots on the top 10 chart, thanks to Messenger (#2) and Instagram (#8) – the latter of which also showed some of the highest year-over-year growth, up 36 percent from 2015.

Messenger came in second place this year, with over 129 million average unique monthly users, followed by YouTube with over 113 million monthly uniques.

However, it was Google, not Facebook, that grabbed the most spots on the year-end chart.

According to Nielsen, Google’s apps YouTube (#3), Google Maps (#4), Google Search (#5), Google Play (#6) and Gmail (#7) were among those people used the most throughout the year. Given that several of these are considered the basic utilities you need on any device – search, maps, email – it’s also not surprising to find them so highly ranked.


However, one notable change Nielsen discovered is Amazon’s surge in 2016.

Facebook again grabbed the number one spot on the list, with more than 146 million average unique users per month,


Would Facebook want a LPSD?

A snip of text from a Facebook job posting:



They should just buy Sensory Inc?


Applied Research Scientist, Speech Recognition Language Modeling
(Menlo Park, CA)
Facebook's mission is to give people the power to share, and make the world more open and connected. Through our growing family of apps and services, we're building a different kind of company that helps billions of people around the world connect and share what matters most to them. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to make the world more open and accessible. Connecting the world takes every one of us—and we're just getting started.
Facebook is seeking a Speech Recognition Research Scientist to join our research team in Menlo Park. The ideal candidate will have research experience developing speech recognition systems in different languages. Individuals in this role should be experts in language modeling and have experience working on vast quantities of data, advanced recognition systems, and rigorous performance evaluations. The candidate will help Facebook conduct research that supports naturally spoken input in more than 70 languages.
Responsibilities

  • Create language models from large corpora of text data in different languages using Hadoop/Hive
  • Work closely with other speech recognition experts and researchers on implementing algorithms that power user and developer-facing products
  • Be responsible for measuring and optimizing the quality of your algorithms
Minimum Qualifications
  • Strong desire to build beautiful, expressive products that delight users in any language
  • M.S. or Ph.D. in Computer Science, Electrical Engineering, Speech/Language Processing or Machine Learning
  • Experience with scripting languages such as Perl, Python, PHP, and shell scripts
  • Experience in C/C++
  • Experience with Hadoop/Hbase/Pig or Mapreduce/Sawzall/Bigtable a plus
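
For the casual reader, here is roughly what that first responsibility (building language models from large corpora of text) boils down to. A minimal sketch in Python, my own toy and NOT Facebook's code; at their scale the counting would run as Hadoop/Hive jobs instead of in memory.

from collections import Counter

def train_bigram_lm(sentences):
    # Count unigrams and bigrams over a corpus; a toy stand-in for
    # what a Hadoop/Hive job would do over billions of lines of text.
    unigrams, bigrams = Counter(), Counter()
    for sentence in sentences:
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word, vocab_size, alpha=1.0):
    # Add-alpha smoothing: word pairs never seen in training still
    # get a small, non-zero probability.
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

corpus = ["the cat sat", "the cat ran", "a dog ran"]
uni, bi = train_bigram_lm(corpus)
print(bigram_prob(uni, bi, "the", "cat", vocab_size=len(uni)))  # seen pair: likely
print(bigram_prob(uni, bi, "dog", "sat", vocab_size=len(uni)))  # unseen pair: small

Now do that per language, for 70+ languages, and you see why the posting asks for Hadoop/Hive.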

They need to get started or Alexa will take over. The adjacent possible is NOT a 5 x 10 broom closet anymore.

Wednesday, December 28, 2016

The Horror of the Hardware..

A great snip from ARM's CTO:

http://semiengineering.com/iot-architectures-and-security/


IoT, Architectures, And Security
ARM CTO Mike Muller discusses how markets and technology are changing in a very candid one-on-one interview.


...because by the time you get to the apps developer, you have to abstract away the horror of the hardware.

So there is a reason for PEEL and others to work with QUIK...

QUIK's role is to abstract away the horror of the hardware.
Altera, Xilinx, and now QUIK have ARM cores paired with eFPGA :)


Really popular idea now ;-)

Xilinx puts ARM core into its FPGAs | Embedded
www.embedded.com/electronics-products/.../Xilinx-puts-ARM-core-into-its-FPGAs
Apr 27, 2010 - New embedded systems architecture employs ARM core in processor-centric FPGAs

Altera's User-Customizable ARM®-Based SoC
https://www.altera.com/content/dam/altera-www/global/en_US/.../br-soc-fpga.pdf
Our SoCs integrate an ARM-based hard processor system (HPS) consisting of a multi-core ... peripherals, and memory controllers with the FPGA fabric using a ...


But ARM has a noticeable weak spot... ARM = NO eFPGA. (?)

QUIK has been down a long time and it's natural to conclude they ain't got no FPGA IP worth one thin dime.
Not true.

Correct anything that is wrong with this line of thought. 

THanks in advance.

Yes, they can only use what they have on their bench. Even MIGHTY ARM has limits: they do NOT have FPGA IP on their bench. They probably wish they did; maybe they will. So in the coral reef of sensor fusion, you use what you have on your bench. ARM = NO eFPGA. :-(.
They should fill that gap somehow.


For the casual reader on the IoT: it's 80-90% sensors that create the huge data files. Wearables are a segment of the IoT, though NOT the only one in the current material.
The backbone of the IoT, and its data, is sensors of all types. IoT backbone = sensors, sensors, sensors.
The IoT is SoC-based, so eFPGA is a good move.

The ability to have tiered silicon to handle it all is what QUIK has on its bench.


What is most interesting to me is the strong move of AI to the front. I am going to spend most of my time reading on this. Inference on the device.....

I reread this item from the archives...
ARM buys British computer vision and imaging startup Apical for $350M


Why did ARM spend so much.....



The IP and processor giant ARM announced on Wednesday their acquisition of fellow British firm Apical, a startup known for their imaging technology that has been widely used in devices ranging from smartphones to security cameras.

Founded in 2002, the semiconductor IP developer Apical is reported to have sold for $350 million in cash.

ARM does the IP development for nearly all chips found in mobile and tablet devices, including the processors for the iPhone and Samsung phones.

In acquiring Apical, whose tech has made its way to over 1.5 billion mobile devices, ARM has made a jump to new fields in the processor sector, giving them an edge in imaging technology for the Internet of Things market. While not a producer of devices themselves, ARM’s IP is found in billions of devices worldwide, which according to the company, is used by 80% of the global population.

“Apical has led the way with new imaging technologies based on extensive research into human vision and visual processing,” said Michael Tusch, CEO and Founder of Apical, to the press.


“The products developed by Apical already enable cameras to understand their environment and to act on the most relevant information by employing intelligent processing. These technologies will advance as part of ARM, driving value for its partners as they push deeper into markets where visual computing will deliver a transformation in device capabilities and the way humans interact with machines.”


This is compute-intensive stuff.

Some of the practical uses for the imaging capabilities will be for products like Mobileye, which tracks nearby vehicles, analyzing distances, speed, and other factors to provide the driver with actionable information and alerts.

Even before the buyout, ARM had already begun work on beefing up their imaging capabilities through their ARM Mali graphics, display, and video processor line of products.

“Computer vision is in the early stages of development and the world of devices powered by this exciting technology can only grow from here,” said Simon Segars, CEO at ARM in his release to the press. “Apical is at the forefront of embedded computer vision technology, building on its leadership in imaging products that already enable intelligent devices to deliver amazing new user experiences.

Segars continued, adding that, “The ARM partnership is solving the technical challenges of next generation products such as driverless cars and sophisticated security systems. These solutions rely on the creation of dedicated image computing solutions and Apical’s technologies will play a crucial role in their delivery.”

Imaging processing capabilities are expected to play an increasingly important role in the development of smart cities and IoT in general.


How much is Sensory Inc. worth....
A key thing that most do not know: the footprint size of the audio UX.
Sensory has had a focus (like QUIK) on On-device. Amazon and Google are good, but
not for On-device at this time.

Sensory, with its background in NN, has the audio, AND it also has the visual perception stuff.

Nice read. Interview on the IoT with ARM's CTO

http://semiengineering.com/iot-architectures-and-security/


IoT, Architectures, And Security
ARM CTO Mike Muller discusses how markets and technology are changing in a very candid one-on-one interview.


One snip


SE: This is also the part of the market that is not following Moore’s Law, right?

Muller: Yes, but the world doesn’t need 100 million different microcontrollers. It can get by with hundreds or thousands of microcontrollers that enable 100 million different products. The economics become a question of whether you can take standard hardware product, write software and apps, and create the system you want to deploy. It’s not a question of whether you can build a custom SoC. Most of the applications out there don’t need a custom SoC.............

Brian Faith on eFPGA


Third, the additional design flexibility engendered by SoCs with eFPGA technology allows one device to serve many different applications and markets. That fact allows the significant development cost of advanced SoCs to be amortized over greater volumes (reducing per unit development costs) and, by driving per-design wafer volumes up substantially, can reduce individual device unit costs. - See more at: http://blog.quicklogic.com/mobile/q...ies-brings-big-benefits/
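
To make Brian Faith's amortization point concrete, a back-of-the-envelope sketch. The dollar figures here are invented for illustration, NOT QUIK's (or anyone's) actual numbers.

def per_unit_cost(nre, unit_cost, volume):
    # One-time development (NRE) cost spread across shipped units,
    # plus the recurring silicon cost per device.
    return nre / volume + unit_cost

NRE = 20_000_000          # hypothetical advanced-node SoC development cost
UNIT = 2.00               # hypothetical silicon cost per device
per_market_volume = 5_000_000

# Three fixed-function SoCs, one per market, each paying its own NRE:
print(per_unit_cost(NRE, UNIT, per_market_volume))      # $6.00 per unit

# One eFPGA-flexible SoC serving all three markets on shared NRE:
print(per_unit_cost(NRE, UNIT, 3 * per_market_volume))  # ~$3.33 per unit

Same silicon, nearly half the per-unit cost, just from spreading the development bill over more wafers. That is the whole argument in two lines of arithmetic.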


This snip is great:

...because by the time you get to the apps developer, you have to abstract away the horror of the hardware.

So there is a reason for PEEL and others to work with QUIK...


another good snip

SE: This is one of the big shifts. You have distributed processing everywhere, and when you think about edge devices you need to think about security and power and throughput and how to process all of the data.

Muller: And from ARM’s perspective, we’re interested in how you do machine learning in the cloud and in the device. It’s not just about really fast accelerators you can put into the cloud. You have to develop architectures to allow that to scale all the way down to your microcontroller.


QUIK has all the bits and pieces to make the very best NNLE (neural network learning engine)......


Sunday, December 25, 2016


  1. I consider my QUIK stock 10% Sensory. Have for a while. Makes me feel diversified :)

    Because of the hardcoded LPSD, etc. They can do well together.
    How many people have a Sensory Inc. LPSD hardwired with an MCU above it on an SoC?

    there will be more.....
  2. jfieb



    A Blast from the past....'11

    Shows just how far ahead of its time Sensory was. Just amazing. Use this for what we are about to see at CES!

    THE HOLY GRAIL IN SPEECH IS ALMOST HERE!
    May 6, 2011

    For far too long, speech recognition just hasn’t worked well enough to be usable for everyday purposes. Even simple command and control by voice had been barely functional and unreliable…but times, they are a changing! Today speech recognition works quite well and is widely used in computer and smart phone applications…and I believe we are rapidly converging on the Holy Grail of Speech – making a recognition and response system that can be virtually indistinguishable from a human (a really smart human with immaculate spelling skills and fluency in many languages!)

    I think there are 4 important components to what I’d call the Holy Grail in Speech:

    1. No Buttons Necessary. OK here I’m tooting my own whistle, but Sensory has really done something amazing in this area. For the first time in history there is a technology that can be always-on and always-listening, and it consistently works when you call out to it and VERY rarely false-fires in noise and conversation! This just didn’t exist before Sensory introduced the Truly Handsfree™ Voice Control, and it is a critical part of a human-like system. Users don’t want to have to learn how to use a device, Open Apps, and hold talk buttons to use! People just want to talk naturally, like we do to each other! This technology is HERE NOW and gaining traction VERY rapidly.
    2. Natural Language Interactions. This is a bit tricky, because it goes way beyond just speech recognition; there has to be “meaning recognition”. Today, many of the applications running on smart phones allow you to just say what you want. I use SIRI (Nuance), Google and Vlingo pretty regularly, and they are all very good. But what’s impressive to me isn’t just how good they are, it’s the rate at which they seem to be improving. Both the recognition accuracy and the understanding of intent seem to be gaining ground very rapidly.
      I just did a fun test…I asked each engine (in my nice quiet office) “How many legs does an insect have?”…and all three interpreted my request perfectly. Google and Vlingo called up the right website with the question and answer…and SIRI came back with the answer – six! Pretty nice! My guess is the speech recognition is still a bit ahead of the “meaning recognition”…
      Just tried another experiment. I asked “Where can I celebrate Cinco de Mayo?” SIRI was smart enough to know I wanted a location, but tried to send me off to Sacramento (sorry – too far away for a margarita!) Vlingo and Google both rely on Google search, and did a general search which didn’t seem to associate my location… (one of them mis-recognized, but not so badly that they didn’t spit out identical results!) Anyways, I’d say we are close in this category, but this is where the biggest challenge lies.
    3. Accurate Translation and Transcription. I suppose this is ultimately important in achieving the Holy Grail. I don’t do much of this myself, but it’s an important component to Item 2 above, and also necessary for dictating emails and text messages. When I last tested Nuance’s Dragon Dictate I was blown away by how well it performed. It’s probably the Nuance engine used in Apple’s Siri (you know, Nuance has a lot of engines to choose from!), and it’s really quite good. I think Nuance is a step ahead in this area.
    4. Human Sounding TTS. The TTS (text-to-speech) technology in use today is quite remarkable. There are really good sounding engines from AT&T, Nuance, Acapela, Neospeech, SVOX, Ivona, Loquendo and probably others! They are not quite “human”, but come very close. As more data gets thrown at unit selection (yes, size will not matter in the future!), they will essentially become intelligently spliced-together recordings that are indistinguishable from live performance.
    Anyways, reputable companies are starting to combine and market these kinds of functions today, and I’d guess it’s a just a matter of five to ten years until you can have a conversation with a computer or smartphone that’s so good, it is difficult to tell whether it’s a live person or not!
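
    Point 1 is the part QUIK's LPSD plays into, so here is the generic shape of an always-on trigger as I understand it. My own sketch in Python, NOT Sensory's TrulyHandsfree algorithm (that is proprietary); score_fn is a stand-in for a tiny classifier, e.g. a small neural net posterior.

    import numpy as np

    def frame_energy(frame):
        # Cheap first-stage gate: do not even run the classifier
        # unless the frame is audibly above the noise floor.
        return float(np.mean(frame.astype(np.float64) ** 2))

    def keyword_spotter(frames, score_fn, energy_gate=1e-4, threshold=0.9, patience=3):
        # Fire only after `patience` consecutive high-confidence frames;
        # demanding a streak is one generic way to keep false-fires rare.
        streak = 0
        for i, frame in enumerate(frames):
            if frame_energy(frame) < energy_gate:
                streak = 0      # silence: stay in the low-power state
                continue
            streak = streak + 1 if score_fn(frame) > threshold else 0
            if streak >= patience:
                return i        # trigger word detected at frame i
        return None

    The real trick, and what Sensory got right years early, is running the always-on stages in almost no power, which is exactly where a hardwired LPSD under an MCU earns its keep.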
  3. jfieb



    To my good friend RC, who is new here ( and happy).
    Thanks for today. :-).
    BFAST after CES?

    Here is what I want to put up for ALL to review. Expect that PEEL will want to highlight its new voice capability.
    Voice will BE THE key thing at CES (Huffington Post item)
    Many of the players are GLOBAL.


    Sensory Inc. Speech Recognition Solutions for Consumer Products Support Language Capabilities Across the Globe
    Santa Clara, CA – July 16, 2014
    World’s Most Highly Spoken Language Mandarin Chinese Among Languages Also Supported by Sensory’s Speaker Verification Technology.

    Sensory Inc., the industry leader in speech and vision technologies for consumer products, is pleased to announce that it supports a wide range of languages across 41 countries all over the world with its innovative speech recognition solutions.

    Compare to Facebook, which wants 70 languages in that job opening.
    So PEEL can roll across its global footprint. America, Asia, South America.


    Languages currently supported by Sensory’s speaker recognition technologies include the world’s three most highly spoken: Mandarin (2 billion speakers), Spanish (406 million speakers), and English (335 million speakers). Other languages developed for Sensory’s platforms include: French, German, Italian, Japanese, Korean, Dutch, Russian, Arabic, Turkish, Swedish, and Portuguese. Nearly all of the languages available in Sensory’s speech recognition solutions are also supported in its speaker verification technologies, including Mandarin Chinese.

    Among its products in multiple languages is Sensory’s landmark TrulyHandsfree™, the leading always-on, always-listening voice control solution for consumer electronics. The introduction of the TrulyHandsfree™ Voice Control technology revolutionized the speech technology industry for a wide variety of hands-free consumer applications. With an extremely noise robust and accurate solution that responds quickly and at ultra-low power consumption, the TrulyHandsfree™ trigger technology has become the most widely adopted keyword spotting technology in the speech industry.

    Sensory’s staff of world-renowned speech experts and linguists is continuing to expand the company’s language and country support. Languages currently in development include Indian English, Polish, Greek, and Cantonese, with others soon to be added.


    So this is from '14; expect that they have added many of the above, but NOT Scottish. (a joke for the real GEEKs)


    “We are committed to providing the most innovative global speech and speaker solutions for deployment in consumer electronic applications,” stated Sensory CEO Todd Mozer. “The term ‘world-class’ truly defines our technology for the diversity of languages and international regions that it supports, and our continued investment in resources to develop and expand these language offerings.”



    So as you think over the LPSD and the QUIK Eos, understand that a major player that has an APP across all the continents...will have it all covered with the QUIK/Sensory solution. So this is SO very important. If they use the Eos they can have a global menu of phrases.
    In this case the adjacent possible is just what a global brand needs.

    Facebook wants a team for 70 languages for its platform, as TM wrote about in '11.

    The adjacent possible does not have to be a 5 x 10 broom closet; it can also be a well-lit stage where great performances will occur.

    The Holy Grail is almost here....

    1. No Buttons Necessary. OK here I’m tooting my own whistle, but Sensory has really done something amazing in this area. For the first time in history there is a technology that can be always-on and always-listening, and it consistently works when you call out to it and VERY rarely false-fires in noise and conversation! This just didn’t exist before Sensory introduced the Truly Handsfree™ Voice Control, and it is a critical part of a human-like system. Users don’t want to have to learn how to use a device, Open Apps, and hold talk buttons to use! People just want to talk naturally, like we do to each other! This technology is HERE NOW and gaining traction VERY rapidly.

  4. jfieb



    Sensory Inc. in the Journal of Phonetics, on Indian English



    The effects of native language on Indian English sounds and timing patterns

    Article in Journal of Phonetics 41(6):393-406, November 2013
    DOI: 10.1016/j.wocn.2013.07.004

    Abstract
    This study explored whether the sound structure of Indian English (IE) varies with the divergent native languages of its speakers or whether it is similar regardless of speakers' native languages. Native Hindi (Indo-Aryan) and Telugu (Dravidian) speakers produced comparable phrases in IE and in their native languages. Naïve and experienced IE listeners were then asked to judge whether different sentences had been spoken by speakers with the same or different native language backgrounds. The findings were an interaction between listener experience and speaker background such that only experienced listeners appropriately distinguished IE sentences produced by speakers with different native language backgrounds. Naïve listeners were nonetheless very good at distinguishing between Hindi and Telugu phrases. Acoustic measurements on monophthongal vowels, select obstruent consonants, and suprasegmental temporal patterns all differentiated between Hindi and Telugu, but only 3 of the measures distinguished between IE produced by speakers of the different native languages. The overall results are largely consistent with the idea that IE has a target phonology that is distinct from the phonology of native Indian languages. The subtle L1 effects on IE may reflect either the incomplete acquisition of the target phonology or, more plausibly, the influence of sociolinguistic factors on the use and evolution of IE.

    Sensory Inc. and its CEO, who stuck to his vision for 20 yrs, deserve ALL the limelight they got in '16, and I expect TM will just be TOO busy to write many blogs anymore. Just too busy to write; he does NOT have to evangelize any more, he is writing contracts as fast as they can work out the details.......

    Maybe they will finally get to Scottish in '17? ;-)

    And here is an important TM snip on the BIG dogs...

    ...but they have grown their products to a very usable accuracy level, through deep learning, but lost much of the advantages of small footprint and low power in the process.

    Facebook needs Sensory Inc.?

Friday, December 23, 2016

Commentary for the casual reader...
This is important stuff as it articulates what a BIG DOG sees:
Intelligence ON THE DEVICE.
Todd Mozer (Sensory Inc. CEO) has seen the same thing for decades.
QUIK's CTO (Dr. Saxe) must be very happy to hear this stuff from Facebook, as he too has spoken of more intelligence ON THE DEVICE.



It's a big-picture item of some urgency now for everybody.

FPGA IP has much greater value from this new adjacent possible.






Applied Research Scientist, Machine Translation
(Menlo Park, CA)
Facebook's mission is to give people the power to share, and make the world more open and connected. Through our growing family of apps and services, we're building a different kind of company that helps billions of people around the world connect and share what matters most to them. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to make the world more open and accessible. Connecting the world takes every one of us—and we're just getting started.
Facebook is seeking an Applied Research Scientist to join our Machine Translation Team in Menlo Park. This team is part of the Applied Machine Learning organization within Facebook, working to break language barriers to create a world where everyone on Facebook can understand everyone else, no matter what languages they speak. In pursuit of this mission, the team has created machine translation systems that serve over 2 billion translations every day across 2000 different language pairs. However, this journey is still only 1% finished, and the team won't rest until they can produce human quality translations for all language directions. The ideal candidate will have a strong background in developing machine translation systems, a strong knowledge of neural networks, and have experience working with massive amounts of data. They should also have strong software engineering skills and the ability to build systems that reach Facebook scale.
Responsibilities
  • Conduct research to advance the state of the art in neural networks and machine translation.
  • Utilize that research to develop and deploy scalable neural network models into production to impact billions of people using Facebook.
  • Contribute to helping the team develop machine learning models that help solve other language related problems, such as language identification and user language modeling.
  • Collaborate with team members from other research and applied research teams working on a variety of deep learning and NLP problems.
Minimum Qualifications
  • PhD/MS with relevant experience in the fields of machine translation, machine learning, deep learning, parsing or language modeling.
  • 3+ years of experience in building large-scale machine translation systems, from the level of researching a prototype to the level of production.
  • Experience with developing and training neural network models.
  • Experience in C++ and Python.

Facebook wants the spoken word and the written word. As always with stuff like this, there is the sad part....those who can speak many languages are such great people to know.

There are MANY more such interesting jobs at Facebook.

On FPGA IP value...

The intrinsic scalability demonstrated by our FPGA implementation can be utilized to implement complex CNNs (Convolutional Neural Networks) on increasingly smaller and lower power FPGAs at the expense of some performance.
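
Here is the knob that quote is describing, in toy form. My own sketch, not from the paper being quoted: fabric size buys you parallel multiply-accumulate (MAC) units, and fewer of them simply means more clock cycles per layer.

def conv_layer_cycles(out_h, out_w, out_ch, in_ch, k, parallel_macs):
    # Total multiply-accumulates in one conv layer, divided by how many
    # MAC units the FPGA fabric instantiates. Fewer MACs = a smaller,
    # cheaper, lower-power fabric, at proportionally more cycles.
    total_macs = out_h * out_w * out_ch * in_ch * k * k
    return total_macs // parallel_macs

# The same 3x3, 64-in/64-out conv layer on big vs. tiny fabric:
for macs in (512, 64, 8):
    print(macs, "parallel MACs ->", conv_layer_cycles(56, 56, 64, 64, 3, macs), "cycles")

Scale the MAC count down far enough and you are in QUIK's ultra-low-power neighborhood, still running the same CNN, just slower.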



Give up the notion that QUIK don't got no IP worth a dime.
It is worth much more than you realize. :)

Thursday, December 22, 2016

Facebook, hmmm

  1. jfieb



    I will store some important stuff here.

    They should just buy Sensory Inc?


    Applied Research Scientist, Speech Recognition Language Modeling
    (Menlo Park, CA)
    Facebook's mission is to give people the power to share, and make the world more open and connected. Through our growing family of apps and services, we're building a different kind of company that helps billions of people around the world connect and share what matters most to them. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to make the world more open and accessible. Connecting the world takes every one of us—and we're just getting started.
    Facebook is seeking a Speech Recognition Research Scientist to join our research team in Menlo Park. The ideal candidate will have research experience developing speech recognition systems in different languages. Individuals in this role should be experts in language modeling and have experience working on vast quantities of data, advanced recognition systems, and rigorous performance evaluations. The candidate will help Facebook conduct research that supports naturally spoken input in more than 70 languages.
    Responsibilities
    • Create language models from large corpora of text data in different languages using Hadoop/Hive
    • Work closely with other speech recognition experts and researchers on implementing algorithms that power user and developer-facing products
    • Be responsible for measuring and optimizing the quality of your algorithms
    Minimum Qualifications
    • Strong desire to build beautiful, expressive products that delight users in any language
    • M.S. or Ph.D. in Computer Science, Electrical Engineering, Speech/Language Processing or Machine Learning
    • Experience with scripting languages such as Perl, Python, PHP, and shell scripts
    • Experience in C/C++
    • Experience with Hadoop/Hbase/Pig or Mapreduce/Sawzall/Bigtable a plus
    • Fluency in at least 2 natural languages is a plus
  2. jfieb



    and this one




    Typically, neural networks run on large numbers of computer servers packed into data centers on the other side of the Internet—they don’t work unless your phone is online—but with its new app, Facebook takes a different approach. The Picasso filter is driven by a neural network efficient enough to run on the phone itself. “We perceive the world in real-time,” Mehanna says. “Why wouldn’t you want the same thing from your AI?”

    the link

    https://www.wired.com/2016/11/fb-3/

    Really, I said to myself, holy smokes, YES! This will be better than 10-axis fusion for indoor location..


    Here is some more...


    Already available in Ireland and due soon here in the States, this new Facebook app is another sign that deep neural networks will push beyond the data center and onto phones, cameras, and various other devices spread across the so-called Internet of Things. As Rick points out, it's just like INTC/Altera, only on the other end.....


    Commentary.... Already we have the BIG dogs of the FPGA world talking about cloud neural networks...and Facebook talking up Todd Mozer's vision...we need more intelligence ON THE DEVICE. And the article only gets better....

    Yes, these tools can operate without an Internet connection. And that points to a future where our smartphone apps can perform a much wider range of tasks while offline. But it also shows we’re moving towards technology that can handle more complex AI tasks with less delay. Ultimately, if you can complete a task without sending a bunch of data across the wire, it will happen quicker.

    Training in the cloud and inference ON THE DEVICE.....

    Facebook’s app doesn’t train its neural networks on your smartphone. That still happens on servers in the data center. But the phone does execute the neural net—without calling back to the data center. That may seem like a small thing, but building a deep neural net that can so quickly execute on a phone—which offers limited processing power and memory—is no simple task.
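
    How do you squeeze a net trained in the data center into a phone's memory budget? One standard trick is quantizing the trained weights to 8-bit integers. A minimal sketch, my own illustration and NOT how Facebook's Caffe2Go actually does it:

    import numpy as np

    def quantize_weights(w):
        # Map float32 weights to int8 plus one scale factor:
        # roughly 4x smaller to ship and to hold in memory.
        scale = np.abs(w).max() / 127.0
        return np.round(w / scale).astype(np.int8), scale

    def dense_forward(x, w_q, scale, bias=0.0):
        # On-device forward pass: dequantize (or, in hardware, use
        # integer MACs directly) and rescale back to real units.
        return x @ (w_q.astype(np.float32) * scale) + bias

    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 10)).astype(np.float32)  # trained in the data center
    w_q, s = quantize_weights(w)                       # this is what ships to the phone
    x = rng.normal(size=(1, 256)).astype(np.float32)
    print(np.abs(dense_forward(x, w_q, s) - x @ w).max())  # small output error

    Low-precision integer multiply-accumulate is also exactly the kind of workload an eFPGA fabric handles cheaply, which is why this article has me thinking about QUIK.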


    Am blown away this am....best find in a loooong time for 

    As companies like Facebook and Google continue to push neural networks onto smartphones, phone makers will start building hardware into these devices that can run neural networks with even greater speed. 



    For QUIK, with its multi-year focus on low power and its FPGA IP, this starts to flesh it out. QUIK has every bit and piece needed to create a ubiquitous NNLE (neural network learning engine).

    A Picasso photo filter won’t change your life. But this one points to big changes in the years to come.


    QUIK's IP just took a jump in value, like Xilinx's.


    From the other recent reading, GPUs can do the training very well; FPGAs are vying for the inference part of AI.

    There should be a LOT of interesting talk with Dr. Saxe and team.

    Please read this one over every couple days for incremental understanding of what it means....

    I have waited a LOOOOONG time to read something so profound that is good.

    jfieb, happy today



    am so blown away by implications I have to log off and do other stuff and come back l8er to look at it again and see if I got it right.


    Please make comments, ask questions, etc. This IS HUGE!


    Facebook has a LOT of jobs related to this...........

  3. jfieb


    https://techcrunch.com/2016/11/08/shining-light-on-facebooks-ai-strategy/

    Shining light on Facebook’s AI strategy

    a few snips

    Caffe2Go won’t remain limited to Style Transfer — it holds the key to deploying convolutional neural nets across Facebook’s suite of mobile apps.

    “For anything we build on the server, we now have a vehicle to ship it to mobile devices,” noted Schroepfer.



    Bringing machine intelligence to the smartphone is just step one. For the time being, the world still doesn’t have an answer for training neural networks on mobile devices. The prospect coupled with Facebook’s long-term roadmap make a compelling argument for a future where the average person could design and train a custom neural net on their own smartphone for daily use.


    Regardless of the barriers, Facebook has little choice in prioritizing AI as its competitors pour billions into beating the company to the next great breakthrough — though the mere fact that everyone is all in on the race is perhaps what makes it so interesting.

    While Google first popularized algorithmic search, and Snapchat is now making augmented reality mainstream, it’s Facebook delivering artificial intelligence to the masses. AI isn’t necessarily about communicating with a conscious computer. It’s about sifting through mountains of data to make sense of the chaos. And no one has more data about us than Facebook.


    Consider that in the cloud, inference is done on FPGAs. Some SoC makers will have to try it out for themselves.

    Facebook should just buy Sensory and be where it needs to be much, much sooner, and with really good stuff?

    Dr. Saxe, you got this covered for us? :) Thanks in advance