Here is my Monday morning holiday reading.
Integrating sensor fusion into embedded designs
JUNE 09, 2014
As embedded wireless devices and mobile platform applications become more sophisticated, the management of sensor inputs has become of critical importance. Typical of the complexity of this task is the nature of the user interface on mobile phones, where capacitive touch 2D user interfaces are being superseded by a range of 3D sensor applications designed to allow the device to identify gestures and recognize what they mean.
Commentary: this is a KEY shift. At the same time that sensor fusion happens, the touch screen will get replaced with 3D sensors for gesture recognition; hence the jobs at QUIK on gesture, with 17 yrs minimum experience required there.
It is also becoming commonplace for advanced smartphones to collect location information from GPS signals and to determine device orientation and status from information gathered by integrated 3D MEMS position detectors.
Coming soon will be the ability to identify the location of mobile devices in buildings using a variety of wireless sensors. And with the current enthusiasm about the Internet of Things, consumer device makers are thinking about a whole range of wearable electronic devices and home network sensor apps that collect information about their environment and send it back to a smartphone for analysis and interpretation.
The challenge for developers of the embedded subsystems will be how to manage the massive amounts of incoming sensor information, interpret it with respect to context, orientation, and other factors, and make decisions based on that input. But where the designer of the average 2D touch-screen smartphone of a few years ago only had to worry about ten or so sensor inputs, the new application environments will require the ability to manage hundreds of such sensor data streams.
Commentary: wow, note this snip: "the new application environments will require the ability to manage hundreds of such sensor data streams."
He does not mean the smartphone will go to hundreds of sensors; rather, hundreds of different fusions from the sensors on a device.
In the view of RTI's Supreet Oberoi, author of "Sensor fusion brings situational awareness to health devices," if this fusion can be achieved and it is possible to consolidate and integrate this data in real time, "we have opportunities to develop new suites of smart applications that can change the way we manage our health, drive our cars, track inventory - the possibilities are endless."
But he cautions that it will require several new technologies to make this happen, including fusion techniques for acquiring and organizing information and algorithms for situational awareness that will "make the system as a whole and the device acquiring and using that data aware of the specific environment in which that data is to be used."
Fortunately, a lot of work has been going on to come up with the techniques you will need to explore this new application area. Included in this week’s Tech Focus newsletter are a number of recent design articles, technical journal articles, and conference reports on sensor fusion in smartphones, robotics, and wireless sensor collection. In addition, there are a number of other articles that I have found useful in providing context for this new trend.
This is an exciting area that greatly expands the opportunities and challenges available to designers of embedded systems, and I will be tracking its developments, looking for papers and conference presentations that provide new tools and techniques to speed up and simplify the process. I also look forward to your contributions to this topic, including comments here and as design articles and blogs you may want to contribute on the tools you have found helpful, new ways to use them, and what new techniques for sensor fusion you have found effective.
Personally, I look forward to the capabilities sensor fusion will add to mobile phones and consumer devices (such as MP3 players) that enrich my life, not to mention the medical devices (such as glucose testers) upon which my life as an insulin-dependent diabetic depends. And a device I can attach to my key ring so my lost keys are findable.
In previous blogs, I have complained that the only portable electronic device I can be reasonably sure of finding is my cell phone, because I can call it up from my house phone and listen for the ring to tell me where it is.
Forget that solution for my MP3 player and my glucose meter, because I can’t call them up. I often put my MP3 down and then can’t find it for as much as a week. So I have several MP3 players - and several glucose testers – stashed in strategic places around the house, so an alternative is available until I find the original.
And then there are the many TV remote controls I have lost and am still finding hidden under chair cushions and in various nooks and crannies in my home.
The optimist in me says that with the device location and monitoring capabilities that sensor fusion technologies will bring to ordinary things in my life, I will be able to stop buying duplicates of everything portable, wireless, and untethered.
This is a good read. Here is the snip I will put by itself.
But he cautions that it will require several new technologies to make this happen, including fusion techniques for acquiring and organizing information and algorithms for situational awareness that will "make the system as a whole and the device acquiring and using that data aware of the specific environment in which that data is to be used."
QUIK is allocating its finite $$ to hire veterans to do exactly what this author says needs to happen. Does QUIK have any little advantage over the others working on this?
Perhaps it's this one. Silicon Labs has been touted as a winner in the IoT, and they have an MCU they are aiming at that segment. Let's look at their blog on it.
How are the MCU folks marketing themselves for wearables?
Official Blog of Silicon Labs
Writing about energy efficient embedded systems and microcontroller design
Low-Power Embedded Design Tips for Wearables
siliconlabs / August 1, 2014
Wearable devices, from smart watches to portable health and fitness trackers, are changing many aspects of our daily lives. A successful wearable device must deliver the right combination of price, performance, functionality and battery life, as well as a unique look, feel and behavior to differentiate itself from its competitors.
To reduce the microcontroller’s impact on the wearable platform’s energy budget, it is important to minimize the frequency and duration of any task that requires it to awaken from a low-power sleep mode.
One of the primary ways to optimize a low-power embedded design is to find an MCU offering the lowest sleep mode that still provides adequate response to real-time events.
Most MCUs using the ARM Cortex®-M processing core support multiple sleep modes.
Powering some of today’s hottest wearable products such as the Misfit Shine and the Magellan Echo smart sports watch, our EFM32 Gecko microcontroller family uses standard 32-bit ARM Cortex®-M cores combined with an energy-optimized set of peripherals and clocking architecture.
The EFM32 architecture has been designed from the ground up specifically for energy-sensitive applications. The architecture features a range of power modes that enable developers to achieve the optimal energy efficiency required by wearables.
Sleep/Standby – (Known as EM1 mode for EFM32 MCUs) – Enables quick return to active mode (usually via interrupt) at the expense of slightly higher power consumption. In this mode, power consumption for EFM32 = 45 µA/MHz; typical equivalent 32-bit MCU = 200 µA/MHz.
Deep Sleep – (EM2 mode for EFM32) – Leaves the MCU’s critical elements active while disabling high-frequency system clocks and other non-essential loads. In this mode, power consumption for EFM32 is as low as 900 nA; typical equivalent 32-bit MCU = 10 µA to 50 µA.
Stop – (EM3 mode for EFM32) – A deeper version of Deep Sleep mode that enables further power savings while retaining limited autonomous peripheral activity and fast wakeup. In this mode, power consumption for EFM32 = 0.59 µA; typical equivalent 32-bit MCU = 10 µA to 30 µA.
Off – (EM4 or shutoff mode for EFM32) – This “near-death” state preserves the minimum complement of functionality needed to trigger wakeup from an external stimulus. The energy savings come at the cost of significantly longer wake-up time. In this mode, power consumption for EFM32 = 20 nA (420 nA with RTC running); typical equivalent 32-bit MCU = 1.5 µA.
Backup Battery Mode – A unique EFM32 feature that offers an attractive alternative to Shutoff Mode, preserving a few more critical functions and enabling much faster wake-up.
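To make the duty-cycling idea behind those modes concrete, here is a minimal sketch of the wake-briefly-then-sleep pattern, written against the generic CMSIS calls (__WFI() and the SLEEPDEEP bit) available on most Cortex-M parts. This is not the EFM32's own EMU API; the device header name and the handle_wakeup_event() stub are placeholders.

```c
#include "device.h"  /* placeholder for the vendor's CMSIS device header */

/* Plain sleep (EM1-like): CPU clock gated, fast wakeup on any enabled
 * interrupt. */
static void enter_sleep(void)
{
    SCB->SCR &= ~SCB_SCR_SLEEPDEEP_Msk;
    __WFI();  /* wait for interrupt */
}

/* Deep sleep (EM2/EM3-like): high-frequency clocks stopped; a
 * low-frequency timer or pin interrupt wakes the core. */
static void enter_deep_sleep(void)
{
    SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;
    __WFI();
}

/* Placeholder for the short burst of work done on each wakeup,
 * e.g. draining a sensor FIFO and updating application state. */
static void handle_wakeup_event(void) { }

int main(void)
{
    for (;;) {
        handle_wakeup_event();  /* brief active period */
        enter_deep_sleep();     /* spend most of the time here */
    }
}
```

The whole point of the mode list above is that the loop spends almost all of its life inside that last call; enter_sleep() is the fallback when a peripheral needs the high-frequency clocks kept running.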
For additional wearable design tips, read our whitepaper: “Winning Design Strategies for the Wearables Market”
To see some of these wearables in action, watch our video on smart wearables featuring Silicon Labs technology.
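Before moving on, a quick back-of-the-envelope on why those sleep currents matter. The sketch below plugs the EM2 figure quoted above into a duty-cycle average; the 1% duty cycle, 24 MHz clock, and CR2032 coin-cell capacity are my assumptions, not Silicon Labs numbers.

```c
#include <stdio.h>

/* Battery-life estimate from the currents quoted above.
 * Assumptions (not vendor figures): active 1% of the time at
 * 45 uA/MHz * 24 MHz, EM2 deep sleep the other 99% at 0.9 uA,
 * powered from a ~225 mAh CR2032 coin cell. */
int main(void)
{
    double active_ua = 45.0 * 24.0;                          /* ~1080 uA awake */
    double sleep_ua  = 0.9;                                  /* EM2 figure */
    double avg_ua    = 0.01 * active_ua + 0.99 * sleep_ua;   /* ~11.7 uA avg */
    double hours     = (225.0 * 1000.0) / avg_ua;            /* mAh -> uAh */
    printf("Average draw: %.1f uA, battery life: %.0f hours (~%.1f years)\n",
           avg_ua, hours, hours / (24.0 * 365.0));
    return 0;
}
```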
So it's very recent, and almost all the talk is about how to reduce the functioning of the device to some penumbral state, an induced coma of varying levels. It won't know where you are or what you are doing; it's unconscious in a deep coma. But it's in wearables now.
So there is a link to the things that QUIK speaks of, context and location, and to determine those the device has to be on, not in a coma?
So some devices that use the Si Labs wearable MCU will use sleight of hand and maybe leave one sensor on while the thing is asleep, not fusing any data from the other ten or so sensors; it won't know where you are, or what the context is.
QUIK will not focus on sleight of hand; they will enable a wearable that is NOT in a coma, one that is fusing the data from many sensors all the time, so it will know the context and where you are. QUIK can focus its algorithm talent on the ideal of always-on context and location while others focus $$ on sleight of hand, or just one sensor on.
This may be a crucial difference?
I will put this snip up also.
Sensor fusion is not limited to a 9-DoF solution. For example, if we include one additional sensing quantity, it becomes a 10-DoF (or 10-ASF) solution. A good example of this would be adding location sensing inside buildings to the 9-DoF solution. That can be done by adding barometric sensing for altitude. Having a barometer enables altitude detection between floors, since pressure changes with altitude at a rate of about 10 Pa/m (on average there is about 3.5 meters between floors). So, the 10-DoF solution includes a 3D-accelerometer, 3D-gyro, 3D-magnetometer and barometer.
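As a sanity check on the arithmetic in that snip, here is a minimal sketch that turns a barometric pressure change into a floor count using the ~10 Pa/m gradient and ~3.5 m floor spacing quoted above. The function name and the reference pressures are illustrative, not from any vendor library.

```c
#include <stdio.h>

/* Figures quoted in the snip above: pressure falls ~10 Pa per meter
 * of altitude, and floors are ~3.5 m apart (so ~35 Pa per floor). */
#define PA_PER_METER      10.0
#define METERS_PER_FLOOR  3.5

static int floors_climbed(double p_ref_pa, double p_now_pa)
{
    /* Going up, pressure drops, so reference minus current is positive. */
    double meters = (p_ref_pa - p_now_pa) / PA_PER_METER;
    double floors = meters / METERS_PER_FLOOR;
    return (int)(floors + (floors >= 0 ? 0.5 : -0.5));  /* round to nearest */
}

int main(void)
{
    /* Example: reference reading in the lobby, new reading upstairs. */
    double lobby_pa = 101325.0;        /* standard sea-level pressure */
    double now_pa   = 101325.0 - 70.0; /* 70 Pa lower => ~7 m => ~2 floors */
    printf("Floors climbed: %d\n", floors_climbed(lobby_pa, now_pa));
    return 0;
}
```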
Figure 1: 9-Axis Sensor Fusion System (Microsoft – Supporting Sensors in Windows 8)
Why stop there? Even more sensing quantities can be added, in which case the sensor fusion solution becomes an m-DoF solution, where ‘m’ stands for ‘multiple’ and can be greater than 10. Why not have your own private lab at your fingertips and check the level of your blood sugar or cholesterol when you need it? It is no longer unfeasible to see new smartphones, tablets, ultrabooks and PCs with universal sensor hubs that can accommodate many applications. Freescale has already demonstrated a 12-DoF solution that includes a 3D-accelerometer, 3D-gyro, 3D-magnetometer, a barometer, a thermometer, and an ambient light sensor. The m-DoF solutions will be the way of the future.
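For a feel of what an m-DoF solution looks like to software, here is a hypothetical sample record matching the 12-DoF sensor list just quoted (3 + 3 + 3 + 1 + 1 + 1 = 12 quantities). The field names and units are my assumptions, not Freescale's actual API.

```c
#include <stdint.h>

/* Hypothetical 12-DoF sample record: 3-axis accelerometer, 3-axis
 * gyro, 3-axis magnetometer, plus barometer, thermometer, and ambient
 * light sensor. Illustrative only, not a vendor data structure. */
typedef struct {
    uint32_t timestamp_us;   /* sample time, microseconds */
    float accel_g[3];        /* acceleration, g             (3 DoF) */
    float gyro_dps[3];       /* angular rate, degrees/sec   (3 DoF) */
    float mag_ut[3];         /* magnetic field, microtesla  (3 DoF) */
    float pressure_pa;       /* barometric pressure, Pa     (1 DoF) */
    float temp_c;            /* temperature, Celsius        (1 DoF) */
    float light_lux;         /* ambient light, lux          (1 DoF) */
} mdof_sample_t;
```

A sensor hub fusing this record at a fixed rate is exactly the kind of always-on workload the commentary above contrasts with the one-sensor-awake approach.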
So we can track along and see whether QUIK is ultimately working toward m-DoF?
Implications for a catalog item?
Yes, a lot of implications.
1. Expect a 10-axis catalog part for indoor location, as the TAM is so big.
2. Maybe a 10+2, so that the catalog + model has stuff to do; they can have the 10-axis indoor location and then add the custom stuff on top.
So the catalog items will be the BIG stuff that forms the pillars, such as context and 10-axis indoor location, which will help support the catalog + with things that go on top.