A very nice thing to listen to is here...
So who makes the Fitbits?
Flex does. And what do they have to say?
http://www.cnbc.com/2016/04/18/flex...owth-opportunity-in-wearables-technology.html
give it a listen- especially about what they are learning making the Fitbits....
"We work very, very hard for wearable technology," he said. "We try to work on the underlying core process technologies that enables wearables to happen."
QUIK, can you get the Eos into Flex's parts bin? And feel free to start with the BIG bins that hold the Fitbit parts
"One of the primary reasons for McNamara’s optimism is the potential for new and interesting applications for wearable technology. We are already seeing professional sports organizations talking to wearable manufacturers about adding wearable technology to their players’ uniforms in order to better track and evaluate performance during the game.
Hospitals and other health care facilities are also looking to wearables as a step forward in patient care.
With all this in mind, we can be certain that McNamara isn’t the only CEO placing a bet on the wearables market."
Monday, April 25, 2016
Rick's snip
VUI = Voice User Interface
I wanted to reread this Sensory Inc item....
November 12, 2015
A really smart guy told me years ago that neural networks would prove to be the second best solution to many problems. While he was right about lots of stuff, he missed that one! Out of favor for years, neural networks have enjoyed a resurgence fueled by advances in deep machine learning techniques and the processing power to implement them. Neural networks are now seen to be the leading solution to a host of challenges around mimicking how the brain recognizes patterns.
Google’s Monday announcement that it was releasing its TensorFlow machine learning system on an open-source basis underscores the significance of these advances, and further validates Sensory’s 22-year commitment to machine learning and neural networks. TensorFlow is intended to be used broadly by researchers and students “wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data”. The initial release of TensorFlow runs on a single machine; a version for many machines will follow in the months ahead, Google said.
Microsoft also had cloud-based machine learning news on Monday: an upgrade to Project Oxford’s facial recognition API, launched in May, built specifically for the Movember Foundation’s no-shave November fundraising effort. The new facial hair recognition API can recognize moustache and beard growth and assign it a rating (as well as add a moustache “sticker” to the faces of facial hair posers).
Project Oxford’s cloud-based services are based on the same technology used in Microsoft’s Cortana personal assistant and the Skype Translator service, and also offer emotion recognition, spell check, video processing for facial and movement detection, speaker recognition and custom speech recognition services.
While Google and Microsoft have announced some impressive machine-learning capabilities in the cloud, Sensory uniquely combines voice and face for authentication and improved intent interpretation on device, complementing what the big boys are doing.
Sensory continues to be the leader in applying state-of-the-art machine learning to embedded solutions, all on device and without requiring any cloud component. Its offerings range from small-footprint neural networks for noise-robust voice triggers and phrase-spotted commands, to large-vocabulary recognition leveraging a unique deep-learning neural network that achieves acoustic models an order of magnitude smaller than the present state of the art, to convolutional neural networks deployed in the biometric fusion of face and voice modalities for authentication.
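To make "small-footprint neural network" concrete: Sensory's actual algorithms are proprietary, but a phrase spotter of the kind described is, in spirit, a tiny feed-forward classifier over acoustic features. The sketch below trains one on a toy stand-in task in plain NumPy; the data, dimensions, and learning rate are all illustrative assumptions, not Sensory's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for acoustic features: 4-dim inputs, label 1 when a
# "trigger" pattern (first two features both high) is present.
X = rng.random((200, 4))
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(float)

# Tiny 4 -> 8 -> 1 network: a deliberately small weight footprint,
# in the spirit of an embedded phrase spotter.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):                  # batch gradient descent on cross-entropy
    h = np.tanh(X @ W1 + b1)           # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()   # trigger probability per example
    g = (p - y)[:, None] / len(X)      # dLoss/dlogit for cross-entropy
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    gh = (g @ W2.T) * (1 - h ** 2)     # backprop through tanh
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

A real spotter would run on streaming acoustic features (e.g., MFCC frames) with quantized weights; the point here is only the scale: a network this size fits comfortably in an embedded device's memory budget.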
Not bad company to keep!
Since that item was written a LOT has changed.
Goog's machine-learning system beat the world's best Go player for the first time.
Amazon's Echo, an audio-focused device, is now recognized to be a whole lot more than an IoT portal. It has the VUI that Rick spoke of.
These big-picture events suggest that QUIK's hard-coding of the Sensory algorithms was the right call a year ago, when they had to design the Eos.
The subjective probabilities have shifted because of this: more evals, more serious interest. The tipping point could become a much more obvious event?
Sensory has more algos than we have implemented, so far.
The value of a well-designed NNLE (neural network learning engine) is substantial?