Saturday, December 16, 2017


Medical Grade Wearable

Discussion in 'Main Forum' started by jfieb, Today at 6:10 AM.
  1. jfieb (Well-Known Member)


    This folder will be kept up to date as we move forward.



    Analysts expect that by 2020, almost half a billion smart wearable devices will have been sold. For today's health-conscious consumer, a Fitbit, Garmin or Jawbone activity tracker is a must-have accessory. Despite the fast-growing market, only 10% of those consumers use their device daily. This is an opportunity for innovative life science companies to tap into the market and create value-added services for consumers.

    To shine a light on personal health management for users, life sciences companies are working to provide solutions that combine treatment and technology by leveraging existing products and introducing medical-grade wearables.

    Life science companies need to consider these factors as they deliver medical-grade wearables to patients:

    The potential for medical-grade wearables

    Medical-grade wearables offer a range of potential applications, from real-time disease diagnosis and monitoring, to delivering insights on patient experiences, to improving product quality and innovation. Developing medical-grade wearables that will appeal to consumers while providing health benefits begins with identifying and defining unmet patient needs. The next step is to develop a wearable that effectively meets those needs – whether as a drug-device combination associated with a medical app or as part of a broader patient-engagement initiative.


    Philips has this sort of vision: a broader patient-engagement initiative


    The data

    Part of the development process for creating a medical-grade wearable is to examine how clinical trial data integrate with real-world evidence, to determine the patient transition from controlled testing to everyday use. With the understanding of how technologies can be implemented as treatment options for conditions like diabetes and obesity, device creators can integrate feedback and validate the outcomes. The insights gathered from trial data can be used to improve product characteristics, design, and implementation.


    Tier 1 is at this stage?


    The regulations 

    The development of regulations around security and confidentiality has become an obstacle to the creation of medical-grade wearables. Currently, the regulatory path is in flux as global agencies, like the FDA, develop risk-based frameworks for health information technologies, which will include wearables.

    However, regulations are evolving as both medical and wearable technologies change at a rapid pace, particularly where these devices provide healthcare professionals with data in real time. Positioning health wearables as clinical-grade medical devices can ensure data integrity and compliance.

    However, regardless of the device's classification, data collection and transmission must always be accomplished in a way that ensures the security of the data and informed consent from the patient. The resulting data output from medical-grade wearables can be routed through secure cloud-based environments, which can address privacy, security and confidentiality concerns such as anonymity.
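    As a concrete illustration of that last point, here is a minimal Python sketch of device-side pseudonymization and consent checking before upload. The key, field names and consent flag are all assumptions for illustration, not any vendor's actual API.

        import hashlib
        import hmac
        import json
        from typing import Optional

        # Hypothetical secret, held in a managed key store, never shipped with the data.
        PSEUDONYM_KEY = b"replace-with-managed-secret"

        def pseudonymize(patient_id: str) -> str:
            """Replace a real patient ID with a stable, non-reversible token."""
            return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

        def prepare_upload(reading: dict, consent_given: bool) -> Optional[str]:
            """Strip identity and enforce informed consent before anything leaves the device."""
            if not consent_given:
                return None  # no informed consent, no upload
            return json.dumps({
                "subject": pseudonymize(reading["patient_id"]),
                "metric": reading["metric"],
                "value": reading["value"],
                "ts": reading["ts"],
            })

    A keyed hash keeps the token stable across uploads, so records from the same patient can still be linked in the cloud, while staying non-reversible to anyone without the key.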

    The customer

    For the health wearables market to have a successful and profitable future, life science manufacturers need to create and deliver customer-centric products. Creating and extending pharmaceutical solutions to encompass a more holistic approach to healthcare will ultimately deliver sustainable value to patients and healthcare systems. Building mutually-beneficial relationships between patients, physicians, providers and producers allows life science companies to improve therapeutic outcomes while reducing the cost of care for the patient and the healthcare system.


    Will big pharma make wearables?


    Beyond the hype

    Wearables can enable a more holistic approach to healthcare, one with the potential to revolutionise the health sciences industries. By embracing such a holistic approach to patient care, sciences like genomics, personalised medicine and molecular biology can integrate with emerging technologies powered by the Internet of Things (IoT), creating what could become the next wave of life-saving drugs and devices.

    To enable this transformation, life science companies need to leverage the power of these devices when utilised in a disease-related context. This will not be simple or easy, given the magnitude of legal, regulatory and technical issues, but the opportunity to positively impact healthcare outcomes around the globe has never been greater.


    Philips has Asia as a KEY geography; most will not have any idea of that.
  2. jfieb (Well-Known Member)


    Artificial Intelligence and the Move Towards Preventive Healthcare
    December 13, 2017 by Editorial Team
    In this special guest feature, Waqaas Al-Siddiq, Founder and CEO of Biotricity, discusses how AI's ability to crunch Big Data will play a key role in the healthcare industry's shift toward preventative care. A physician's ability to find the relevant data they need to make a diagnosis will be augmented by new AI-enhanced technologies. Waqaas, the founder of Biotricity, is a serial entrepreneur, a former investment advisor and an expert in wireless communication technology. Academically, he was distinguished for his various innovative designs in digital, analog, embedded, and micro-electro-mechanical products. His work was published in various conferences such as IEEE and the National Communication Council. Waqaas has a dual Bachelor's degree in Computer Engineering and Economics, a Master's in Computer Engineering from Rochester Institute of Technology, and a Master's in Business Administration from Henley Business School. He is completing his Doctorate in Business Administration at Henley, with a focus on Transformative Innovations and Billion Dollar Markets.

    In October 2000, Google co-founder Larry Page made a luminary prediction: "Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the Web. It would understand exactly what you wanted, and it would give you the right thing." Fast forward seventeen years, and artificial intelligence (AI) and Big Data are the new buzzwords in healthcare. According to a new Market Study report, the healthcare artificial intelligence segment is projected to see a staggering 40 percent compound annual growth rate (CAGR) between 2017 and 2024, resulting in a $10 billion market focused on medical imaging, diagnostics, robotic personal AI assistants, drug discovery, and genomics. When Larry Page compared artificial intelligence to the "ultimate search engine," he was essentially speaking of AI's ability to crunch massive amounts of data. AI's deep learning algorithms are designed to detect features in huge, disparate datasets that are not discernible to entire teams of data scientists. Second, these deep learning algorithms can be trained to provide specific information, or in Page's words, "[AI] would understand exactly what you wanted, and it would give you the right thing."

    Clinical AI implementation promises a healthcare system that is preventive rather than reactive. Today, patients are often diagnosed with a chronic condition, such as cancer or diabetes, when it's too late to reverse the progression of the condition. Treatment plans for late-stage diseases are expensive and debilitating. Patients are poorly equipped with feedback and insights into their own health conditions, and so are less proactive about making healthy lifestyle choices and adhering to physician advice. Consequently, preventing the onset of chronic disease and managing the disease post-diagnosis has become the focus of preemptive measures in healthcare. Here, AI offers a promising solution. A preventive healthcare system will capitalize on AI's ability to collect, compile, and analyze data to facilitate three progressive and ultimately integrated stages of learning. First, AI will enable a broad scope of learning that will aid in more effective and efficient disease diagnostics based on historical data. The insights gleaned from this massive survey of Big Data will be utilized by physicians to further train AI. Second, AI will harness historical data and augment it with real-time patient data to provide feedback to patients. Finally, as AI begins to learn how patients react differently based on real-time data, it will create personalized and predictive feedback for each patient.

    Broad Learning for Effective and Efficient Diagnostics

    AI is distinguished by its analysis capabilities and its deep learning algorithms. These capabilities can be deployed to traverse massive amounts of data and detect a few variables across hundreds of thousands of data points that are specific to certain conditions and diseases. In the context of broad learning, AI holds the potential to aid in the diagnostic process and identify problems before they become serious.

    Researchers from Sutter Health and the Georgia Institute of Technology are using deep learning to analyze electronic health records to predict heart failure before it happens. Initial results have empirically demonstrated that this AI application can accurately predict heart failure one to two years early. Philadelphia’s Thomas Jefferson University Hospital has researchers training AI to identify tuberculosis on chest X-rays, an initiative which may help screening and evaluation efforts in TB-prevalent areas with limited access to radiologists.

    By leveraging public historical data sets licensed from research groups such as the Mayo Clinic or the American Heart Association, with patient-specific data such as medical history, individual symptoms, and prescribed medications, AI will enable physicians to identify a specific condition while ruling out others. Then, they’ll be able to recommend the best course of treatment based on the individual patient.

    Augmenting Broad Learning with Real-Time Patient Data

    Once AI's broad learning can identify and assist in the diagnosis of a specific condition, it can leverage historic data to develop treatment plans that are interactive, driving patient engagement. Doc.ai is using Blockchain technology to collect masses of medical data globally and generate insights from that information. Then, through machine learning, the data collected will be analyzed and processed to provide personalized feedback to users about their own medical issues. Studies have shown that ongoing feedback is a key factor in driving patient engagement. A 2012 trial found that when remote patient monitoring devices were given to patients with chronic conditions, the number of emergency room visits, hospital admissions, and one-year mortality rates decreased. The devices used in this study provided ongoing feedback for patients by reminding them when tests were due, offering educational videos, and creating a graphic chart detailing their recent clinical results.

    It is just as easy to envision a heart disease patient equipped with a medical-grade wearable device that provides real-time metrics detailing the effectiveness of an exercise regime or medication based on the prior week’s metrics. This demonstrable, measurable feedback could encourage the patient to adhere better to a treatment plan or to consult with physicians between appointments to improve regimens and future results.

    For AI to be truly “intelligent,” it needs to become more effective with experience, and this experience cannot occur with information pulled from historic datasets alone. AI requires copious amounts of data for optimization, and medical-grade remote monitoring technologies that continuously stream patient data are the ideal mechanism for this purpose.

    Philips has this vision



    This is because these devices provide constant connectivity (through expanded broadband) combined with the capability to collect clinically accurate, medically verifiable data.

    Specific Learning for Personalized and Predictive Feedback

    Perhaps the most valued quality of AI is its ability to dynamically learn and improve over time. As AI collects individual patient data, and begins to learn how patients react differently to feedback, it can begin tailoring feedback so that it’s personalized and predictive. Such feedback is the foundation upon which a preventive healthcare system is built. Medtronic’s new IBM Watson-powered Sugar.IQ diabetes app uses real-time continuous glucose monitoring and insulin information from Medtronic pumps and glucose sensors to provide diabetes patients with personalized insights. The AI-based app is designed to learn from a patient’s own information input; its Glycemic Assist feature enables users to inquire about how specific foods or therapy-related actions and events impact their personal glucose levels. By following trends, Sugar.IQ can then help users discover the impact that these items have on their glucose levels. Patient inputs also enable Sugar.IQ to learn and issue blood glucose level predictions by assessing the patient’s current situation and the risk of glucose levels falling outside safe thresholds.

    Medical-grade wearables with AI could create predictions based on a patient's daily biometrics. If a heart disease patient is prone to developing a rapid heartbeat after X minutes of walking, then the medical device would make a prediction and alert the patient to avoid exceeding the recommended minutes of walking. AI could also exercise predictive capabilities by learning what kinds of feedback instigate adherence for a patient and then applying that feedback to improve the patient's disease management, almost like a personal health coach. When patients can follow their own progress, and see how certain choices have a direct impact on their health, they are more likely to adhere to treatment plans, engage in their healthcare, and change their behavior.
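    Read as an algorithm, that walking example is just a per-patient baseline plus a threshold. Here is a minimal Python sketch with made-up numbers and a made-up class name; a real device would learn the baseline and tolerance from the patient's own history.

        from collections import deque
        from statistics import mean

        class HeartRateAlert:
            """Toy per-patient alert: flag samples that jump well above
            the patient's own recent walking baseline."""

            def __init__(self, window=60, tolerance=1.25):
                self.history = deque(maxlen=window)  # recent bpm samples
                self.tolerance = tolerance           # alert at 25% above baseline

            def update(self, bpm):
                """Return True if this sample should trigger an alert."""
                alert = bool(self.history) and bpm > mean(self.history) * self.tolerance
                self.history.append(bpm)
                return alert

        monitor = HeartRateAlert()
        for minute, bpm in enumerate([72, 75, 78, 80, 118]):
            if monitor.update(bpm):
                print("minute", minute, "- rapid heartbeat, ease off the walk")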

    The Future of AI in Healthcare

    Ultimately, the effectiveness of AI in healthcare will be directly predicated on its access to Big Data—both to historical data sets and EHRs as well as to real-time, continuous, patient-specific data from remote monitoring technologies. Training AI to reach its maximum potential is an interactive process in which physicians and patients are key players. AI applications must become fully integrated into existing healthcare systems, and must function within a “residency program” of sorts, in which they perform analytics on real-time patient data while being overseen by a physician. In this way, the algorithms will learn simultaneously from the data and from the physician’s oversight to hone their capabilities. AI’s ability to learn from experience and offer personalized and predictive feedback to patients and physicians is its greatest value proposition for preventive healthcare systems which improve diagnostics while catalyzing patient adherence through engagement. The integration of both broad and specific AI learning applications, the latter implemented in remote patient monitoring devices, represents the tantalizing future of preventive healthcare that beckons on the horizon.


    The Tier 1 device + The European b2b are both aimed here?
    What have I learned?

    Philips' efforts here are NOT just "let's make a wearable"; it's a whole, well-integrated platform from edge device to the cloud.
    100s or 1000s of man-yrs of effort to put it all together.


    It does explain time going by.

    Security of this info? A lot of work will go into this, especially in the USA.

    For a company like Philips it is NOT just the west, it is a global effort.

Friday, December 15, 2017


  1. Why read this item?

    We are in the voice devices of 2 app companies... this is Amzn Alexa on voice apps. I want to get a feel for it: voice apps.



    Amazon's Alexa now lets you pay for premium voice apps

    Today, it's a few select games -- tomorrow, maybe Alexa's top personalities could quit their day jobs.

    DECEMBER 14, 2017 1:53 PM PST

    Here's Amazon's description of the first in-skill purchases available today:

    • The popular Heads Up! game from "The Ellen DeGeneres Show": In a first for a voice service, you can say, "Alexa, open Heads Up!" and start playing the wildly popular game from Ellen DeGeneres. Customers try to guess the word on Alexa's "card" based on clues from Alexa. The skill has 3 free decks to get you started and offers a collection of 5 decks for $0.99 for Prime members and $2.99 without a Prime membership. This collection includes the decks: Millennials; It's, Like, the 2000's; That's So '90s; Family Fun; and As Seen on TV.
    • Teen Jeopardy! and Sports Jeopardy! skills let you guess free weekly clues from Alexa and now you can also purchase themed premium packs of 50 clues for $0.99 for Prime members and $1.99 without a Prime membership.
    • The popular Match Game show has been adapted into a new skill for Alexa. Just say, "Alexa, open the Match Game" to get started. Fill in the missing blanks as you play Match Game's Super Match featuring the Audience Match and Head-to-Head rounds of the classic game show -- then see how you rank against other players. Premium packs of 50 more rounds of game plays are available for $0.99 for Prime members and $1.99 without a Prime membership.
    • History buffs can play the Ultimate History Quiz for Alexa, a brand-new skill from History. Each day, you can play the Free Daily Three with questions on topics like ancient civilizations, World Wars, US Presidents and more. You can also see how you rank against other players. Premium packs of 50 questions are available for $0.99 for Prime members and $1.99 without a Prime membership.
    If you're a prospective Alexa developer, you can read more about the new monetization options here. And if you're a parent worried about accidental purchases, Amazon tells us that skills with in-app purchases will be clearly marked, they'll require your voice code before authorizing a purchase (if you've set one up) and the company will refund accidental purchases if you ask within seven days.
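    To make those safeguards concrete, here is a toy Python sketch of the flow described above: a voice-code gate plus a seven-day refund window. This is purely illustrative logic; it is not Amazon's actual in-skill purchasing API, and every name in it is made up.

        from datetime import datetime, timedelta

        VOICE_CODE = "1234"                 # hypothetical parent-configured code
        REFUND_WINDOW = timedelta(days=7)   # refunds honored within seven days

        purchases = []  # (deck_name, price, purchase_time)

        def buy(deck, price, spoken_code):
            """Authorize an in-skill purchase only if the voice code matches."""
            if spoken_code != VOICE_CODE:
                return False  # wrong or missing code: purchase blocked
            purchases.append((deck, price, datetime.now()))
            return True

        def refund(deck):
            """Honor a refund request made within the seven-day window."""
            for item in purchases:
                if item[0] == deck and datetime.now() - item[2] <= REFUND_WINDOW:
                    purchases.remove(item)
                    return True
            return False

        buy("Heads Up! decks", 0.99, "1234")  # succeeds
        print(refund("Heads Up! decks"))      # True: asked within seven days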


    So use this to see how voice is moving into everything; early adopters will maybe play these games over the holidays.
    Have a blast. Get used to voice more and more..... come to really LIKE it and expect it.

    What did I learn?

    I sort of look at voice as a UI, a way to get stuff done, but with this item I sort of get how voice could be a BLAST and make people happy?

    So as we learn about our pipeline and it grows, the chance increases that ONE of these devices turns into a HIT, hugely popular?

    The QUIK affective disorder would disappear just like that. Not necessary, but it is possible now?

  2. jfieb (Well-Known Member)


    A snip from Nick Hunn's great essay of 1 yr ago, on the IoV


    Forget the Internet of Things – it's a bubble. The majority of products currently claiming to be IoT devices are just the same, vertical M2M products we've always had, but taking the opportunity to benefit from a rebrand. Most of the rest of the IoT is the wet dream of Venture Capitalists and Makers who think that by overfunding and stimulating each other's egos in a frenzy of technical masturbation, they can create a consumer market for the Internet of Things. As the IoT slips slowly backwards into the foothills of Gartner's Hype curve, you need to look elsewhere to find the real Internet device opportunity, which is only just emerging. It's the IoV, or the Internet of Voice.



  1. grt read here......
    really like their focus and execution.....

    • News & Analysis

      ST, Ams Sense 3D Trend
      Ams CEO talks to EE Times

      Junko Yoshida

      12/13/2017 00:01 AM EST

      • MADISON, Wis. — The comeback of STMicroelectronics (Geneva) has been widely reported in recent months. Testimony to the company’s revival lies in an upbeat set of recent financials and product portfolios. ST's first nine months revenue this year grew to $5,880 million from $5,113 million in the same period a year ago.

        Ams (Premstaetten, Austria), a vendor of analog, mixed signal and sensor products, is also doing well. In fact, the company, although much smaller than ST in size, is closer to spectacular. In the first nine months this year, Ams’ revenue jumped by 42.5 percent to 593 million euros compared to the same period of 2016. Ams CEO insists this is not just a short-term spike. He is forecasting a compound annual revenue growth rate of “more than 40 percent” through 2019.





        Ams latest financial results (Source: Ams)




        In iPhone X, Apple adopted leading-edge NIR imagers from STMicroelectronics, while deploying dot illuminators from Ams. Undoubtedly, both companies are profiting from these big design wins. More to the point, to frame the success of these two European companies as just a lucky break induced by the iPhone X factor would be an understatement.

        Consider three big changes quietly brewing in Europe.

        First, European microelectronics companies are finding their voice again. They have brought much-needed focus to their business with a big emphasis on such growth areas as “sensing” and “automotive.” That’s a big part of it.

        Second, European chip vendors are applying a much finer level of analysis to strategy. Neither ST nor Ams is focusing broadly on the general sensor market. Each has identified “3D sensing” as the biggest growth segment of the sensor market, for which they are boldly taking on the complex technology challenges.

        Third, Europe has never given up its passion for technology development. Neither ST nor Ams has totally abandoned manufacturing. Their in-house manufacturing capabilities are paying off now.

        In his recent report, Pierre Cambou, imaging activity leader at Yole Développement, made the strongest case for ST.

        Looking back, he said ST’s imaging division should have disappeared when Nokia lost its way [in the then newly emerging smartphone battle, and sold its handset business to Microsoft.] But it did not. He asked why.

        Cambou credits the engineering teams’ resilience and creativity at ST. In his opinion, “the craziness of investing in Single Photon Avalanche Diode (SPAD) technology became the first step in maintaining life support to [ST’s] endangered [imaging] business.” He said, “Passion for science and technology is not always logical, nor overly business-efficient, but it creates a positive environment for trying new things and going beyond traditional barriers, where no competitor even exists.”

        Cambou added, “STMicroelectronics has shown that innovation is not just a game for startups; large companies can also make the right moves.”

        3D Sensing: ‘Mega trend’
        The story of Ams is much different. In a recent one-on-one interview with EE Times, Ams CEO Alexander Everke discussed his company's aggressive but tightly targeted strategy, which has led to Ams' successful entry into high-end segments of certain sensor markets such as light sensing.





        Alexander Everke, CEO at Ams (photo: Ams)




        Unlike valuation-driven M&A activities in recent years, particularly prevalent among Silicon Valley-based chip companies, Everke stressed, "Ams is focused on acquiring technologies, not the revenue."


      • After Everke took the Ams helm in January 2016, he acquired a string of companies with specific sensor technologies. They include: Cambridge CMOS Sensors Ltd (CCMOSS), a technology leader in micro hotplate structures for gas sensing and infrared applications; MAZeT, a color and spectral sensing systems specialist; Incus Laboratories, an IP provider of digital active noise cancellation in headphones and earphones; high-end optical packaging leader Heptagon; and Princeton Optronics, a provider of Vertical Cavity Surface-Emitting Lasers (VCSELs).









        Everke laid out the “four pillars” of sensing technologies targeted by Ams: Optical, Imaging, Environmental and Audio. The company isn’t interested in traditional sensor markets such as motion sensing. “Those technology innovations are slowing down and they are getting commoditized,” he told EE Times.

        Everke is enthusiastic about Ams’ 3D adventure. He called 3D sensing “one of the mega trends of our industry that will drive the market over the next 10 years.” In smartphones, industry 4.0, automotive and emerging medical applications, the imaging world is rapidly transitioning from capturing 2D information to 3D, said the Ams CEO.

        ST, actively engaged in technologies for augmented reality and virtual reality, is also highly aware of the growing demand for 3D sensing. It also sees adding 3D sensing to the UI as the key to improving interactivity.

        Yole’s Cambou agreed. “Augmented reality has been the buzzword for the last two years, but part of market momentum still lies in the notion of interaction.” He explained that adding sensors to a smartphone expands its user’s capabilities. “With the development of new computing capabilities and ‘artificial intelligence,’ electronic devices’ perception is going further. The devices are not just recording inputs and outputs but — some would say autonomously — managing an ‘interaction’ with the world,” he said.

        Leadership in optical sensor
        Today, Ams is an undisputed world leader in optical sensors. It commands a broad portfolio in the optical sensing field, thanks to a host of acquisitions over the last 24 months. Its optical product portfolio includes ambient light, IR proximity, RGB and all associated combo sensors, according to Jérémie Bouchaud, director and senior principal analyst for MEMS and sensors at IHS Markit.





        Light Sensor Market Share by Revenue in 2016 (Source: IHS Markit)




        Ams has gotten where it is today partly because of a dominant supplier position in the leading smartphone markets — especially Apple and Samsung. Bouchaud told EE Times, “In particular, Ams is the only supplier of IR proximity sensors designed into iPhones until the iPhone 6 generation. Ams is also the sole supplier of the VCSEL assembly under the dot projector in the iPhone X as part of the True Depth system. Further, Ams is also the main supplier for the RGB/proximity sensors into Samsung’s flagship smartphones.”

        IHS Markit estimates that Apple and Samsung accounted for 65 percent of the combined light sensor business of Ams and Heptagon in 2016.

        So, who’s number two? That would be ST. Bouchaud observed that ST became the second largest light sensor maker in 2016 when it started to ship ToF sensors in the iPhone 7. ST displaced Heptagon’s IR proximity sensor slot, he added.

        Ams is frank about its strategy to pursue the high-end market where the company can command higher prices. Consider digital ALS (ambient light sensors). Bouchaud noted, "Ams is shipping it to Apple at significantly higher prices than the market average." Ams has been able to do this because "the performance and the quality of its ALS is valued" by customers, he added.





        Ams is focused on a complete optical sensing solution
        (Source: Ams)




        The acquisitions of MAZeT and Heptagon are also helping Ams pursue its high-end market strategy. Ams has been able to develop, for example, a spectrometer based on MAZeT technologies. With Heptagon, Ams is adding ToF sensors. "Both the ToF and on-chip spectrometer are selling for significantly higher prices than other light sensors," Bouchaud observed.

        Ams' Heptagon acquisition is considered pivotal for the company's future growth. Heptagon assets are helping to turn Ams into "a very interesting wafer-level optical packaging company," Bouchaud noted. This gives Ams "a real advantage" when it builds complex assemblies such as the True Depth system in the iPhone, or a future on-chip spectrometer with both the emitter and the detector within the same component, he explained.

        Next page: Combo sensors, sensor fusion
      more.......

  2. jfieb (Well-Known Member)


    This statement resonated with me....

    • Unlike valuation-driven M&A activities in recent years, particularly prevalent among Silicon Valley-based chip companies, Everke stressed, "Ams is focused on acquiring technologies, not the revenue."
    • I use them as a mental model of how well it can work. Such great execution on their part and the rest takes care of itself.
    • Onward to CES for QUIK.
    • I was looking backward at that QUIK CES listing, and how waaay back then they had that slide on compute-intense stuff that burns batteries in MCUs..... they had voice on it always, along with other stuff (10-axis location etc). They were spot on to look at the equation like that.
    • That slide IS STILL there as the deck changes.
    • They were spot on, weren't they? With limited resources they did NOT go down a dead-end street. They made the right choice, and that allows what we are about to see at CES '18 and MWC: that voice-over-BLE reference design... gonna allow the pipeline to expand a LOT.

Thursday, December 14, 2017


  1. I will put related info here.

    GOOG, they will still make buggy whips.

    Obsolete?


    Google Home Show rumours: Is Google planning a touchscreen smart speaker?
    Google could be prepping a rival to the Amazon Echo Show. Read the latest rumours on a touchscreen Google Home.

    By Marie Black | 12 Dec 2017


    [​IMG]

    Google might have the better device, but Amazon's Echo smart speaker family leads the market. While Google has recently announced its Google Home Mini and Google Home Max (the latter available only in the US for now), Amazon still has more devices. One of them is the Echo Show, a smart speaker with a touchscreen display, and it's the one Google could well have its eyes on next.

    It makes sense that if Google wants to take on Amazon Echo head to head it needs to have fingers in all the same pies, and therefore that it would look to produce an Echo Show-style version of Google Home.

    There's very little evidence to go on here, however, which suggests the company is still in the early planning stages.

    What little there is comes via a job listing spotted by Trusted Reviews:

    “In this role, you’ll work on the next generation of Google Hardware to enable the best multi-touch user experience. You will lead the touch module development and integration for Google Hardware from concept to mass production,” states the job description.

    The eventual hire will “work to define complete touch solutions” and make “our user’s interaction with computing faster, more powerful, and seamless”, it continues.


    Come on GOOG, get hands free, "ambient computing". By the time you get this done it will be a buggy whip....:)
    S Johnson says they use what they have on the bench. A lot of touch screen stuff on various benches, but GOOG had better do BETTER than this or they will get their butt kicked. You have a sensor hub in your Pixel phones, make MORE use of it and make it always listening?

    Thanks in advance.

  2. jfieb (Well-Known Member)

    Here is just part of the trouble between Amazon and Goog

    Dear Amazon and Google: Enough.
    There comes a time when companies need to put corporate greed aside and put their users' interests first. For Amazon and Google, that time is now.



    • Gang, we need to talk. Here in the land o' tech (no relation to the Land o' Lakes, aside from a shared love of butter), things are starting to get silly.

      Google and Amazon, if you haven't heard, are in the midst of a very public schoolyard spat. And their little game of corporate one-upmanship shows no sign of slowing anytime soon.

      Here's the 30-second version, in case you haven't been following along: For years, Amazon has refused to offer Google products like Chromecast and Google Home in its online store. It also neglected to offer a readily available Prime Video app for Android up until just a few months ago (previously, you had to go out of your way to sideload the entire notification-spam-spewing Amazon storefront app just to play a lousy movie). Oh, and it still doesn't provide a way to cast videos from Prime to Google-Cast-compatible devices, which is a real thorn in the side for its many Cast-using subscribers.

      A couple months ago, though, the tables turned: Amazon launched a voice control device with a screen, the Echo Show, and wanted to offer YouTube on it. Rather than working with Google on a custom version of the app, Amazon evidently came up with its own "hack" to make YouTube available. Google balked and blocked the device's access. Soon after, Amazon — completely coincidentally, of course — stopped selling all Nest products in its store.

      Amazon later brought YouTube back to the Echo Show by pointing users to an awkwardly formatted web version of the site from the device. Google again said "naw" and pulled the plug for YouTube on both the Echo Show and on Amazon's various Fire TV streaming devices.

      In a statement sent out widely to tech news publications last week, Google left little question about the driving reason for its retaliation:

      We’ve been trying to reach agreement with Amazon to give consumers access to each other's products and services. But Amazon doesn't carry Google products like Chromecast and Google Home, doesn't make Prime Video available for Google Cast users, and last month stopped selling some of Nest's latest products. Given this lack of reciprocity, we are no longer supporting YouTube on Echo Show and Fire TV. We hope we can reach an agreement to resolve these issues soon.

      Amazon, for its part, told anyone who asked that Google was "setting a disappointing precedent by selectively blocking customer access to an open website" and said it hoped to resolve the matter "as soon as possible" (by "duking it out at the tetherball pole after school," the company probably should have added).

      Look, I get it: Business is business, and to some extent, you have to look out for your own bottom line. But at a certain point, the most significant harm you're inflicting is upon your own users and their experiences with your products. At a certain point, you start to make Steve Jobs' infamous "thermonuclear war on Android" look like the work of a well-adjusted sane man. At a certain point, you become the corporate equivalent of a bratty little kid shouting at another 10-year-old on the playground.

      [​IMG]


      And here's the real rub: Given that both Amazon and Google create products that are primarily about services — and about getting users to embrace an ecosystem regardless of what type of hardware they own — locking down the gates just to make a point is not only petty but also ultimately self-defeating on both sides of the spectrum.

      To stick with the third-grade mentality at which the companies are currently operating, Amazon, by all counts, appears to be the one who, like, totally started it. Sources who are "familiar with Google's thinking on the matter" (but, you know, totally aren't Google PR reps who want to make a point without going on the record) tell Engadget that Amazon implementing its own "hack" to get YouTube working on the Echo Show and Fire TV — rather than working with Google to create fully featured versions of the apps designed specifically for those devices — was essentially the last straw.

      But, no big surprise, the frustration apparently dates back years — to when Amazon started its quiet policy against selling Google's products.

      We don't want to be caught in the middle of your self-serving little squabble

    So use this as a backdrop for Alexa and voice and IoT etc. ......

    We have been working with an IoT cloud guy, like these 2 or the China equivalents.

Wednesday, December 13, 2017




  1. Google to open artificial intelligence centre in China


    Image caption: The new facility joins other Google AI research centres in London and New York
    Google is deepening its push into artificial intelligence (AI) by opening a research centre in China, even though its search services remain blocked in the country.

    Google said the facility would be the first of its kind in Asia and would aim to employ local talent.

    Silicon Valley is focusing heavily on the future applications for AI.

    China has also indicated strong support for AI development and for catching up with the US.

    Research into artificial intelligence has the potential to improve a range of technologies, from self-driving cars and automated factories to translation products and facial recognition software.
    In a blog post on the company's website, Google said the new research centre was an important part of its mission as an "AI first company".

    "Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, [AI] has the potential to make everyone's life better for the entire world," said Fei-Fei Li, chief scientist at Google Cloud AI and Machine Learning.

    The research centre, which joins similar facilities in London, New York, Toronto and Zurich, will be run by a small team from its existing office in Beijing.

    Strict rules
    The tech giant operates two offices in China, with roughly half of its 600 employees working on global products, company spokesperson Taj Meadows told the AFP news agency.

    But Google's search engine and a number of other services are banned in China. The country has imposed increasingly strict rules on foreign companies over the past year, including new censorship restrictions.

    China has for many years censored content it sees as politically sensitive, using an increasingly sophisticated set of filters that critics have called the "great firewall".

    At the same time, China has been expanding its push into artificial intelligence.

    Last week, the country's President, Xi Jinping, urged senior officials at a key Communist Party meeting to "accelerate implementation of big data".

    In July, China announced its national plan for AI, calling for the country to catch up with the US.

    But its advances in this area have sparked concerns. Human rights groups are among those troubled by China's use of artificial intelligence to monitor its own citizens.

    Addressing the meeting of Communist Party officials late last week, President Xi reportedly emphasised "the necessity of using big data to improve governance".
  2. jfieb (Well-Known Member)



    GOOGLE IN ASIA

    Opening the Google AI China Center


    Fei-Fei Li
    Chief Scientist AI/ML, Google Cloud
    Published Dec 13, 2017


    Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world's top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.

    I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make everyone's life better for the entire world. As an AI first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.

    That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

    Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.

    Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us.

    The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems.

    Once again, the science of AI has no borders, and neither do its benefits.


    Commentary:

    From a distillation of all I have read, which includes the Nick Hunn IoV essay of one yr ago.

    Amazon's Alexa? Only Todd M. knew what it was about the day he saw it. About the only one.

    AMZN's Alexa is the biggest threat to Apple?

    Always on: use CES to gain more insight into how it will sweep across all the edge devices and, to use GOOG's phrase....

    And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home.

    Voice...
    Most won't get how voice is so closely tied to AI.
    It HAS to listen to give something back.
    The more it listens the more it knows
    and can give a better experience.
    If you don't have voice you can't do good AI.


    Facebook is actually doing a Babbage thing and trying to SKIP voice
    and read thoughts..... 50 floors up.
    A colossal waste of $$, but they've got them to burn.

    I used a mental model of the tiers of sensor intelligence
    as the rice terraces


    So AI on the device will be interesting to see happen. To explain it simply:

    You take the result from one algo and plug it into another one, take that result and plug it into another one...... a hundred or more layers deep.
    Dr. Saxe said he sees a few more layers that can reside ON the device.
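    In code, that chaining is just function composition. Below is a minimal numpy sketch of a few stacked layers with random, made-up weights; nothing here reflects QUIK's actual NNLE or Eos hardware.

        import numpy as np

        rng = np.random.default_rng(0)

        def layer(x, w, b):
            """One 'algo' in the chain: a linear transform plus a nonlinearity (ReLU)."""
            return np.maximum(0.0, w @ x + b)

        x = rng.normal(size=8)  # e.g. a window of sensor features from the hub
        weights = [rng.normal(size=(8, 8)) for _ in range(3)]  # 3 layers; real nets stack many more
        biases = [np.zeros(8) for _ in range(3)]

        for w, b in zip(weights, biases):
            x = layer(x, w, b)  # the output of one layer feeds the next

        print(x)

    Partitioning, in these terms, is deciding which of those layer evaluations get hardcoded, which run in FPGA fabric, and which stay on the MCU.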

    So multiples HAVE to happen; voice is a basic building block to get you in the game.
    Johnson says you use what you have on the bench, and that is true so far...
    DSP people (Knowles) use DSP. If you find you need something and don't have it, you buy it...
    M&A will happen to bolt stuff on where needed.

    I'm glad QUIK has had experience with partitioning: what to hardcode, what to do in an FPGA, what to do in an MCU.

    That NNLE (neural network learning engine) was my mental model. QUIK will add another core to the Eos one day.

    Don't forget this phrase...

    And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home

    As we are working with a BIG cloud IoT guy....:)