Tuesday, January 12, 2016


Senior Staff Algorithm Architect: Machine Learning

Discussion in 'Main Forum' started by jfieb, Today at 6:11 AM.
  1. jfieb



    As a big-picture item we really want this layer on the device… so this folder will be used to track things along.
    I do really like my mental model of tiers of intelligence and the rice terraces, but
    I will stick to the way the GEEKS present the same concept.

    In the news… a lot for 2016. Much of it is big data in the cloud, and I won't post much of that…

    Here is one such item, and this snip is a good one on why there should be intelligence on the device…

    http://insidebigdata.com/2016/01/11/human-in-the-loop-is-the-future-of-machine-learning/

    In other words, machines learn from the data humans create. Whether it’s you tagging your friends in images on Facebook, filling out a CAPTCHA online, or keying in a check amount at the ATM, those all end up in a dataset that a machine learning algorithm will be trained on. Machine learning simply can’t exist without this data.

    The other major issue with machine learning is accuracy. Generally, it’s not too difficult to train a machine learning algorithm to get you to about 80% accuracy. Of course, what business is going to make big, important decisions with that 20% looming?

    Getting to high certainty with your data (think something like 98% or 99%) is incredibly difficult. That’s because there are always outliers and hard cases a machine simply can’t figure out.
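
    To make that hand-off concrete for myself, here is a minimal sketch of a human-in-the-loop routing step: low-confidence predictions go to a person, and the human's answer is fed back as training data. The threshold, model interface and queue are my own assumptions, not anything from the article.

```python
# Illustrative only: the model interface, threshold and review queue are my assumptions,
# not anything from the article.

CONFIDENCE_THRESHOLD = 0.95   # roughly the "98% or 99%" certainty bar mentioned above


def classify_with_human_fallback(predict, ask_human, item, training_set):
    """Use the model when it is confident; otherwise escalate to a person."""
    label, confidence = predict(item)          # predict() is assumed to return (label, probability)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "machine"
    human_label = ask_human(item)              # e.g. a review queue or CAPTCHA-style task
    training_set.append((item, human_label))   # the human answer becomes new training data
    return human_label, "human"
```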



    The same is true of data generated by a mobile device, and from Dr. Saxe's talks we know they have focused on this a fair bit.

  2. jfieb



    Wired on health… a trend they see for 2016…

    Machine learning will keep us healthy longer
    06 JANUARY 16
    This article was taken from The WIRED World in 2016 -- our fourth annual trends report, a standalone magazine in which our network of expert writers and influencers predicts what's coming next.

    When assessing a patient, medics look at snapshots of physiological data that are manually taken by doctors or nurses, and make decisions against patient history, family background and test results, as well as their own knowledge and experience. But what if this data was constantly being taken, every second of every day? And what if a system was clever enough to compare these readings to thousands of patients worldwide with a similar history and disorder, as well as all the current clinical guidelines and studies, and make clinical suggestions to doctors?

    In 2016, this kind of data-led decision-making will come ever closer. Sentrian, a California-based early-stage machine learning and biosensor analytics company for remote patient management, has created a system that does just that, and it's currently being trialled on patients. "We actually don't monitor people very frequently," says Jack Kreindler, Sentrian's founder and chief medical officer. "If I see a patient once a year, I may spend one hour listening to them, and the rest of the year's 8,759 hours not listening to them. We are trying to build a system that will enable us to listen to the lives and bodies of patients all the time, so we can make better, earlier and more personalised decisions."

    Currently, wireless biosensors can collect simple data such as body temperature and heart rate as well as more complex information like oxygen saturation of the blood and potassium levels. Remote patient monitoring is typically done with one or two sensors at a time and the data is usually assessed by clinicians. But if a patient could constantly wear several sensors at a time, the amount of data produced would be enormous.

    Sentrian's approach collects data streams from biosensors and uses machine learning algorithms to detect subtle patterns based on general information within the system on chronic conditions. These can include heart disease, diabetes and chronic obstructive pulmonary disease (COPD). Data such as heart rate, blood pressure and oxygen saturation from wireless biosensors on the patient are pushed to a cloud-based engine that analyses this data and notifies doctors when needed.

    Martin Kohn, chief medical scientist at Sentrian, who practised emergency medicine for 30 years, explains the value in this approach. "It's based on the premise that for many patients with diseases such as congestive heart failure and COPD, the processes that lead to severe illness start days before the patient actually becomes acutely ill," he says.

    The system is currently being tested in clinical trials in the US and UK in patients with chronic heart failure, COPD, high risk of falls and cancer. Early unpublished evidence has already shown the possibility of being able to spot congestive heart failure exacerbations up to ten days in advance. "That is quite extraordinary - before you maybe only had hours," Kreindler says. "We are seeing subtle, personalised patterns and data where odd things, which we didn't really expect before, may end up having strong statistical significance in predicting whether someone is going to fall over many days in advance." Very early research is showing that, in some people, factors such as heart rate variability, sleep duration and body temperature may be indicators of an impending crisis. These differ from the currently accepted warning signs and evidence-based triggers for treatment.

    But are we ready to hand over all decision making to a black box, particularly when it comes to healthcare? "At the moment, there is a barrier, even from the profession themselves, to trust the kind of outputs that machine learning can deliver," Kreindler says. Sentrian has tried to account for this mistrust by giving some control back to humans - for example, by allowing doctors to specify rules for their patients. So, a scenario may run: "If Mr Smith's heart rate rises significantly, but his activity is going down and his breathing rate is going up, send a text message to the patient and a family member. If there is no response from the caregiver or patient after a text message, one email, and one phone call, then make a call to their doctor."
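
    Just to picture the kind of rule Kreindler describes, a toy version might look like the sketch below. The field names, thresholds and the notify() helper are all invented for illustration; this is not Sentrian's actual system.

```python
# Hypothetical escalation rule in the spirit of the "Mr Smith" example above.
# Field names, thresholds and the notify() helper are all made up for illustration.

def check_patient(readings, baseline, notify):
    """Escalate when heart rate rises while activity falls and breathing rate climbs."""
    heart_rate_up = readings["heart_rate"] > 1.2 * baseline["heart_rate"]
    activity_down = readings["activity"] < 0.8 * baseline["activity"]
    breathing_up = readings["breathing_rate"] > 1.15 * baseline["breathing_rate"]

    if heart_rate_up and activity_down and breathing_up:
        # notify() is assumed to return True if someone responded
        responded = notify("text", to=["patient", "family_member"])
        if not responded:
            responded = notify("email", to=["caregiver"]) or notify("call", to=["patient"])
        if not responded:
            notify("call", to=["doctor"])
```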

    These rules, which can be general or personalised, are the kind of complex event-processing and subtle pattern-recognition that goes on in an experienced team of clinicians' heads when they are monitoring a patient in intensive care, explains Kreindler.

    And Sentrian's system will continue to learn exactly which rules and interventions work best for which patients: if a false alarm is indicated, the doctor can report this. The more patients the system "sees" and the more feedback it gets, the more the system learns. "Normally, the human brain remembers the last 30 or so patients it looked at," says Kreindler. "With this we may have more than 300,000 patients in memory."

    Another issue that machine learning is being applied to is the volume and rate at which new medical information is growing. Knowledge is expanding faster than doctors are able to assimilate and apply. It is estimated that in 1950, the time to double the volume of the world's medical knowledge was 50 years; in 2010 it took 3.5 years; and in 2020, it will take 73 days. In another study, researchers projected that it took, on average, 17 years for new evidence-based findings to find their way to the clinic.

    What if a doctor was able to bring up every single case study, clinical study and national guideline worldwide on a particular disorder to the forefront of their mind? The second part of Sentrian's project aims to do just this, and augment the system with the ability to read and learn from all current clinical evidence.

    IBM Watson, the supercomputer that won a game of Jeopardy! against humans in 2011, has already demonstrated that this sort of learning is possible. Kohn was chief medical scientist at IBM research, where he led the company's Watson supercomputer initiative in healthcare. Watson has "read" 204 oncology textbooks, medical databases of journals (one of which, PubMed, has 24 million citations of biomedical literature), thousands of patient records, and had 14,700 hours of "training" from clinicians. In a study published in 2014, scientists from Baylor College of Medicine in Houston, Texas, and IBM used Watson technology to analyse more than 70,000 scientific articles to identify proteins that can modify a tumour-suppressing protein. Over the past 30 years scientists have identified 28 similar target proteins - Watson identified six in a month.

    "Watson will assist your clinician by providing timely insights into the specific condition by analysing the patient's detailed medical records including genomic considerations," says Robert S Merkel of Watson Health. "Watson will then suggest potential treatment recommendations from a very large repository of knowledge spanning millions of pages of medical literature, research articles and 180,000 clinical trial protocols."

    Merkel explains that Watson does not have to be used as just a tool for suggesting clinical management options to doctors - it could also be a benefit in clinical trials. "Consider clinical trial matching," he says. "A clinical trial for an experimental breast cancer treatment may require a hundred patients who meet a variety of criteria, like a specific genetic marker, age, current stage of the tumour, history of interventions and a response to treatments and medications. Today, physicians and nurses spend hours manually reviewing patient records and comparing patient data to the criteria for a trial. This process introduces the possibility of errors, delays and missed matches." Watson's computing power is able to help doctors by accurately matching their patients with clinical trials that could benefit their care.
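
    A rough sketch of that kind of criteria matching, with made-up fields and criteria (not Watson's actual method), might be:

```python
# Toy trial-matching filter: every criterion must be satisfied by the patient record.
# All field names and criteria here are hypothetical, not real trial rules.

def matches_trial(patient, criteria):
    """Return True if the patient record satisfies every trial criterion."""
    return all(check(patient) for check in criteria)


breast_cancer_trial = [
    lambda p: "HER2+" in p["genetic_markers"],
    lambda p: 40 <= p["age"] <= 75,
    lambda p: p["tumour_stage"] in {"II", "III"},
    lambda p: "anthracycline" not in p["prior_treatments"],
]

patients = [
    {"genetic_markers": {"HER2+"}, "age": 58, "tumour_stage": "II", "prior_treatments": set()},
    {"genetic_markers": set(), "age": 63, "tumour_stage": "IV", "prior_treatments": {"anthracycline"}},
]

eligible = [p for p in patients if matches_trial(p, breast_cancer_trial)]
print(len(eligible))   # -> 1
```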

    Sentrian's system is currently undergoing several randomised controlled trials, which means that the data of thousands of patients will be added to the platform. While we wait on clinical evidence, it is just a matter of time before this sort of artificial intelligence becomes a regular occurrence in the doctor's office.
  3. jfieb



    In retail..

    8 January, 2016
    How machine learning is changing online retail for good


    By Arie Shpanya

    William Faulkner once wrote, "Always dream and shoot higher than you know you can do. Do not bother just to be better than your contemporaries or predecessors. Try to be better than yourself."

    Becoming content with yourself or your business is incredibly dangerous.

    You should continuously strive for self improvement, but it can be hard to resist the temptation of settling down with what you have.

    There's so much that goes into running and improving an ecommerce business.

    You want to have the most appealing website with the most competitive prices, but you also want to reach your desired margins and grow faster than your competitors.

    Luckily, there's a new tool retailers can use to improve their business without losing sight of other aspects of their business.

    The tool is merely three words long, but packs a powerful punch: machine learning algorithms.


    Machine learning algorithms are lines of code that retailers can use to their advantage to improve their competitiveness in different aspects of their business.

    These areas include pricing, inventory forecasting, cost reduction, and more. They're remarkably technical, so I'll break them down.

    Pricing
    Remember the scientific method in school? It's a six-step process that helps answer a looming question.

    In the frame of ecommerce, it can answer "what's the right price for my products?"

    Building an algorithm is a process that varies greatly depending on its application, but it boils down to a few steps: measuring different variables, applying various statistical methods to get the best fit, testing it, and then distributing it across your entire product catalog.


    A few of the variables a pricing algorithm takes into consideration are product seasonality, elasticity to competitor prices, and desired margin.

    Using these factors, you can then build a demand estimation engine (which is exactly what it sounds like).

    You use this demand estimation engine to hypothesize what would happen with different price changes in your product catalog. You then test this hypothesis on a sample of SKUs, and measure the results.

    To validate these tests, you can expand the algorithm to the rest of your catalog and analyze results.

    Constantly changing these algorithms until you establish a high confidence level is where the fun really begins.

    Machine learning algorithms are, well, machine learning, meaning they will learn and understand how different factors influence a consumer’s purchasing decision.

    For example, let's say you introduce a new catalog of products that you've never sold before. Algorithms are capable of taking that new assortment, identifying similar characteristics of products you have sold and estimating demand.

    Over time, your algorithm learns how to optimize these products for revenue or profit, avoiding the common pitfalls of a more manual approach.
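
    To make the pricing loop concrete for myself, here is a toy demand-estimation-and-price-test sketch. The constant-elasticity model, the elasticity value and the SKU numbers are all illustrative assumptions of mine, not anything from the article.

```python
# Toy constant-elasticity demand model: demand scales with (price / reference_price) ** elasticity.
# The elasticity, costs and prices below are made-up numbers, not recommendations.

def estimate_demand(base_demand, reference_price, price, elasticity=-3.0):
    return base_demand * (price / reference_price) ** elasticity


def best_price(product, candidate_prices):
    """Pick the candidate price that maximises expected profit for one SKU."""
    def expected_profit(price):
        demand = estimate_demand(product["base_demand"], product["reference_price"], price)
        return (price - product["unit_cost"]) * demand
    return max(candidate_prices, key=expected_profit)


sku = {"base_demand": 100, "reference_price": 20.0, "unit_cost": 12.0}
print(best_price(sku, [16.0, 18.0, 20.0, 22.0, 24.0]))   # -> 18.0 with these toy numbers
```

    In practice you would test a candidate price like this on a sample of SKUs, measure the results, and only then roll the algorithm out across the catalog, as described above.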


    Inventory forecasting
    But what about other aspects of your store that are directly related to demand levels? Like inventory, for example.

    You don't want to have the right price for your product and win the shopper's interest, only to lead them to a page that says you're out of stock.

    Machine learning algorithms can act as retail meteorologists, giving you a forecast for your inventory levels using their demand estimation engines.

    Forecasting demand can help you order the right amount of stock to last you through any rises or dips in traffic to your site.

    Using a set of factors similar to pricing algorithms, you can estimate demand and order your store's items accordingly.

    Then, after building confidence, your algorithm can learn what works well for your inventory levels and what hurts your bottom line.
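
    Again just to make it concrete, a minimal forecasting sketch using simple exponential smoothing of weekly demand to set a reorder quantity; the smoothing factor and the safety buffer are arbitrary choices of mine, not anything from the article.

```python
# Exponential smoothing of weekly demand, plus a crude safety buffer.
# The smoothing factor and the 20% buffer are illustrative, not tuned values.

def smoothed_forecast(history, alpha=0.3):
    """One-step-ahead demand forecast via simple exponential smoothing."""
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast


def reorder_quantity(history, on_hand, safety_factor=1.2):
    """Order enough to cover next period's forecast plus a safety buffer."""
    need = smoothed_forecast(history) * safety_factor
    return max(0, round(need - on_hand))


weekly_sales = [120, 135, 128, 150, 160, 155]
print(reorder_quantity(weekly_sales, on_hand=40))   # -> 134 with these toy numbers
```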

    So what are these algorithms doing for the retail industry? In a nutshell, they're giving retailers a more cost-friendly approach to improving their competitive levels and earning a profit.

    They are giving retailers the ability to reprice like the "king" of the ecommerce jungle, Amazon.


    Cost reduction
    The simplicity of machine learning all translates into reduced costs for your business. Instead of paying the salaries of 10 workers to do tedious work, you could automatically monitor and implement algorithms that continuously optimize your ecommerce store and stock levels.

    In place of doing hours of manual work, you can have an algorithm do all the heavy lifting. This automation gives you the opportunity to improve other aspects of your business.

    This means more flexibility to improve the shopping experience, which in turn can strengthen your brand value in the eyes of your shoppers.

    In summary...
    There's a lot in store for retail in 2016, and I think machine learning algorithms are one of the most powerful tools a retailer should use to get ahead in the growing industry.

    Shoppers are expecting a more personalized shopping experience free of speed bumps that can run rampant in online retail.

    Algorithms can do the heavy pricing work for your business, and let you add a human touch to improve your store's shopping experience.

    Machine learning algorithms can break the glass ceiling that's been hurting your business' chances of excelling past your previously established benchmarks.

    They unlock what may have been a previously unknown potential to become a leader. That way you're never settling for a mediocre store; you're improving every aspect of it past the point you may have thought possible.

  4. jfieb



    In retail, indoor location will be important, and we have Project Tango to move the ecosystems forward.



    Neural computing is one form of machine learning of keen interest to us. As we know (and very, very few other people or investors know), neural computing allows intelligence at 20x less power.


    QUIK has such a difference in its genes, and this will have resonated with them.
    The adjacent possible says they will have on their bench a prototype generic NNLE (neural network learning engine).

    Will it work?

    Stay tuned, I think so.

    Such a thing will allow a layer of intelligence to reside on the device with amazing power figures.
    We have the mental model of what Sensory Inc has done for an audio OS at an order of magnitude less power
    than others have on their bench. We really want to see an extension of their initial work.

    It's not a difference of 10-30%; it's as much as 20x less. IF an NN engine can be built and hardened, QUIK is one who will do just that.

    QUIK is NOT majoring in minor things anymore.

    Audio on the device is an OS

    Gesture is an OS

    An NNLE is real intelligence on the device.
  5. jfieb



    A mental model is useful for me; I find an article that fills in all the details on how something will go.

    China develops a neuromorphic chip based on Spiking Neural Networks, by Staff Writers, Beijing, China (SPX), Dec 25, 2015

    Photos of the chip and the demonstration board. Image courtesy Science China Press.


    An Artificial Neural Network (ANN) is a type of information processing system based on mimicking the principles of biological brains, and has been broadly applied in application domains such as pattern recognition, automatic control, signal processing, decision support systems and artificial intelligence. A Spiking Neural Network (SNN) is a type of biologically-inspired ANN that performs information processing based on discrete-time spikes.

    It is more biologically realistic than classic ANNs, and can potentially achieve much better performance-power ratio. Recently, researchers from Zhejiang University and Hangzhou Dianzi University in Hangzhou, China successfully developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated by standard CMOS technology.

    With the rapid development of the Internet-of-Things and intelligent hardware systems, a variety of intelligent devices are pervasive in today's society, providing many services and convenience to people's lives, but they also raise challenges of running complex intelligent algorithms on small devices.

    Sponsored by the College of Computer Science of Zhejiang University, the research group led by Dr. De Ma from Hangzhou Dianzi University and Dr. Xiaolei Zhu from Zhejiang University has developed a co-processor named Darwin. The Darwin NPU aims to provide hardware acceleration of intelligent algorithms, with a target application domain of resource-constrained, low-power small embedded devices.

    It has been fabricated in a 180 nm standard CMOS process, supporting a maximum of 2048 neurons, more than 4 million synapses and 15 different possible synaptic delays. It is highly configurable, supporting reconfiguration of the SNN topology and many parameters of neurons and synapses. Figure 1 shows photos of the die and the prototype development board, which supports input/output in the form of neural spike trains via a USB port.

    The successful development of Darwin demonstrates the feasibility of real-time execution of Spiking Neural Networks in resource-constrained embedded systems. It supports flexible configuration of a multitude of parameters of the neural network, hence it can be used to implement different functionalities as configured by the user.

    Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and building brain-computer interface systems by interfacing with animal or human brains.

    As a prototype application in Brain-Computer Interfaces, Figure 2 describes an application example of recognizing the user's motor imagery intention via real-time decoding of EEG signals, i.e., whether he is thinking of left or right, and using it to control the movement direction of a basketball in the virtual environment.

    Different from conventional EEG signal analysis algorithms, the input and output to Darwin are both neural spikes: the input is spike trains that encode EEG signals; after processing by the neural network, the output neuron with the highest firing rate is chosen as the classification result.
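
    To make the "highest firing rate wins" readout concrete, a toy spike-count classifier in that spirit might look like this. The spike trains and labels are fabricated; Darwin's actual neuron model is not shown.

```python
# Toy readout for a spiking classifier: each output neuron accumulates spikes over a time
# window and the neuron with the highest count gives the class. The spike trains below
# are fabricated; real input would be spike-encoded EEG signals.

def classify_by_firing_rate(output_spike_trains):
    """Return the label of the output neuron that fired most in the window."""
    counts = {label: sum(train) for label, train in output_spike_trains.items()}
    return max(counts, key=counts.get)


window = {
    "left":  [0, 1, 0, 1, 1, 0, 1, 1],   # 5 spikes
    "right": [1, 0, 0, 0, 1, 0, 0, 0],   # 2 spikes
}
print(classify_by_firing_rate(window))   # -> "left"
```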


    Darwin Neural Processing Unit:


    neuromorphic hardware co-processor

    target application domain of resource-constrained, low-power small embedded devices.


    How is yours coming, QUIK? :)

    It is their adjacent possible, and do NOT expect to hear even a whisper about this; it's as important as the initial S1, and they did a good job for 2 yrs
    without sifters like me finding it.

    Intelligence = margin

    neuromorphic hardware co-processor?

    The adjacent possible says they have the ability to build one of the best.
    There will be multiples (others with their approach), and QUIK's skill in partitioning between hardware and
    software will be very useful as they work on this?

    QUIK is different now, as they are not majoring in minor things.
    Understand that IF they can build an NNLE, it will be an order of magnitude change that allows a jump in intelligence at less power, ON the device.

    Well worth the effort?

  6. jfieb



    nn-X:

    one more take on how these will look


    Deep networks are state-of-the-art models used for understanding the content of images, videos, audio and raw input data. Current computing systems are not able to run deep network models in real-time with low power consumption. In this paper we present nn-X: a scalable, low-power coprocessor for enabling real-time execution of deep neural networks. nn-X is implemented on programmable logic devices and comprises an array of configurable processing elements called collections. These collections perform the most common operations in deep networks: convolution, subsampling and non-linear functions. The nn-X system includes 4 high-speed direct memory access interfaces to DDR3 memory and two ARM Cortex-A9 processors. Each port is capable of a sustained throughput of 950 MB/s in full duplex. nn-X is able to achieve a peak performance of 227 G-ops/s, a measured performance in deep learning applications of up to 200 G-ops/s while consuming less than 4 watts of power. This translates to a performance per power improvement of 10 to 100 times that of conventional mobile and desktop processors.

    Published in:
    2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
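
    The "collections" do the same three operations you would write in software: convolution, subsampling (pooling) and a non-linearity. A minimal NumPy sketch of one such stage, with arbitrary shapes and values (obviously not the FPGA implementation itself):

```python
# One convolution -> subsampling -> non-linearity stage, the operation pattern nn-X
# accelerates in hardware. Shapes and values are arbitrary; software illustration only.
import numpy as np


def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (really cross-correlation, as in most deep nets)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out


def max_pool(feature_map, size=2):
    """Non-overlapping max pooling (the 'subsampling' step)."""
    h = (feature_map.shape[0] // size) * size
    w = (feature_map.shape[1] // size) * size
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))


image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
stage_output = np.maximum(max_pool(conv2d_valid(image, kernel)), 0.0)   # ReLU non-linearity
print(stage_output.shape)   # -> (3, 3)
```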


    Seems neuromorphic stuff is coming of age.

    Carver Mead must be happy to see these things unfold.

    (PS, our son starts his neuromorphic computing course from a C. Mead disciple today; he is a real GEEK, not a vicarious one like me ;-)



    I just learned of this order of magnitude difference in power, but QUIK has known for a looooong time.
    It is QUIK's adjacent possible to build a deep neural coprocessor for learning, hence NNLE: neural network learning engine.

    Sensory Inc is the first great use of this stuff, and they should get credit for their 20 yr belief in it; it has started to pay off for them, and the coins they will mint
    for a singular vision may be substantial. (Not so many multiples here; as far as I know they are the ONLY audio player who stayed on this sort of course.)

    Others will move to catch up?

    Open question… will QUIK move into vision IP or leave that to DSPs, as Sensory does have vision IP to go with their audio stuff? I have no opinion, and am aware that they can only do so much; they may be using 100% of their resources on getting the revenue flowing in '16.
