Select Projects

Amazon Echo, Fire TV & Interactive Projection (Lab126)

Several key projects I worked on at Amazon are still confidential. However, parts of these projects were integrated into two shipping products that are now public, as are some of the patents filed (see Patents here). The Echo is a household consumer product intended to be a "smart assistant" controlled exclusively by speech input. The cylindrical speaker performs local noise reduction, echo cancellation, beamforming, and speech recognition (from across a room) when it detects its "wake" word. Users can speak commands to look up facts, find and play music, or even purchase items in real time. Speech components of Echo were also integrated into a speech-enabled remote control for the Fire TV system, enabling users to search for and play TV content through spoken queries. Echo was a spin-out from a second, larger project I worked on that integrated speech, gesture recognition, and interactive projection for in-home consumer applications.
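
The Echo's actual audio pipeline is proprietary, but the beamforming step mentioned above can be illustrated generically: each microphone channel is delayed so that sound arriving from a chosen direction adds in phase. The sketch below is a minimal delay-and-sum example in Python/NumPy under my own simplifying assumptions (planar 2D mic positions, integer-sample delays, invented function names); it is not Echo code.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def delay_and_sum(frames, mic_positions, azimuth, fs):
        """Steer a microphone array toward `azimuth` (radians) by delaying
        each channel so sound from that direction adds in phase.

        frames        : (num_mics, num_samples) time-aligned audio, one row per mic
        mic_positions : (num_mics, 2) x/y position of each mic in meters
        fs            : sample rate in Hz
        """
        look = np.array([np.cos(azimuth), np.sin(azimuth)])
        path = mic_positions @ look                      # mic offset toward the source
        delays = (path - path.min()) / SPEED_OF_SOUND    # seconds, >= 0
        shifts = np.round(delays * fs).astype(int)       # integer-sample delays

        num_mics, n = frames.shape
        out = np.zeros(n)
        for m in range(num_mics):
            s = shifts[m]
            out[s:] += frames[m, :n - s]                 # delay channel m by s samples
        return out / num_mics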

OASIS - Smart Kitchen and Interactive Lego (Intel)

Object Aware Situated Interactive System (OASIS) combines real-time computer vision, 3D cameras, and micro-projection to identify and track physical objects and gestures, together with situated interactive displays (projected onto household surfaces), creating interactive “islands” for in-home applications. OASIS demonstrates novel design and user interface technologies and a flexible component plug-in infrastructure, showcasing advanced algorithms, parallel computing, and state-of-the-art depth cameras and micro-projectors. The system was demonstrated live in the Intel booth at CES 2011 and at LegoWorld in Copenhagen. The gesture recognition portion of this work (and a collaboration with SoftKinetic) led to Intel's RealSense product.
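
As a rough illustration of the depth-camera processing involved in tracking objects on a surface, the sketch below segments objects by comparing a live depth frame against a capture of the empty surface and labeling the resulting blobs. It is my own minimal Python/OpenCV example (thresholds, frame format, and function names are assumptions), not OASIS code.

    import numpy as np
    import cv2

    def segment_objects(depth_frame, surface_depth, min_height_mm=15, min_area_px=200):
        """Find objects sitting on a planar surface in a depth image.

        depth_frame   : (H, W) uint16 depth in millimeters from a 3D camera
        surface_depth : (H, W) uint16 depth of the empty surface (captured once)
        Returns a list of (x, y, w, h) bounding boxes for candidate objects.
        """
        # Anything sufficiently closer to the camera than the empty surface is
        # treated as an object resting on (or a hand above) the surface.
        height = surface_depth.astype(np.int32) - depth_frame.astype(np.int32)
        mask = (height > min_height_mm).astype(np.uint8) * 255

        # Clean up sensor noise, then label connected blobs.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        num, _, stats, _ = cv2.connectedComponentsWithStats(mask)

        boxes = []
        for i in range(1, num):  # label 0 is the background
            x, y, w, h, area = stats[i]
            if area >= min_area_px:
                boxes.append((x, y, w, h))
        return boxes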

Three systems were built using this platform. The first demonstrated a smart kitchen application that integrated object recognition, recipes, shopping lists, and media content. The second application - Interactive Lego - demonstrated real-time gesture recognition and 3D pose-independent object recognition, with associated computer graphics and animated behaviors applied to everyday physical toys. Lastly, we created an application in collaboration with Autodesk to demonstrate using physical objects and gestures to manipulate 3D models in real time. (See video page on my site here for videos of each of these projects.) All of these projects were built in collaboration with fabulous students from the University of Washington in less than three months.

OASIS - collection of projects (kitchen, Lego, Autodesk)  (link to YouTube list of videos)

Bringing Toys to Life video  (link to YouTube video)

Bonfire - Interactive Projection with Laptops (Intel)

Bonfire is a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each pico-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications.  (see video page on my site here)
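
To give a flavor of how a projector-camera system like this registers its input and output spaces, the sketch below maps a point detected in the camera image into projected display coordinates using a one-time homography calibration. It is my own minimal Python/OpenCV illustration, not Bonfire code; the calibration points and the 800x600 display size are made up.

    import numpy as np
    import cv2

    # Four corners of a projected calibration pattern, as seen by the camera
    # (pixels) and as drawn in projector/display coordinates. In practice these
    # values come from a one-time calibration step.
    camera_pts = np.float32([[102, 88], [538, 95], [530, 410], [110, 402]])
    display_pts = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

    H = cv2.getPerspectiveTransform(camera_pts, display_pts)

    def camera_to_display(point):
        """Map a fingertip/object location from camera pixels into the
        projected display's coordinate system via the calibration homography."""
        p = np.float32([[point]])              # shape (1, 1, 2) as OpenCV expects
        return cv2.perspectiveTransform(p, H)[0, 0]

    # Example: a fingertip detected at camera pixel (320, 240)
    print(camera_to_display((320, 240)))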

Bonfire video (link to YouTube video)

HeatWave - Thermal imaging applied to user interfaces (Intel)

HeatWave is a system that uses digital thermal imaging cameras to detect, track, and support user interaction on arbitrary surfaces. Thermal sensing has had limited examination in the HCI research community and is generally under-explored outside of law enforcement and energy auditing applications. We examined the role of thermal imaging as a new sensing solution for enhancing user surface interaction. In particular, we demonstrated how thermal imaging and existing computer vision techniques can make segmenting and detecting routine interaction techniques possible in real time and can complement or simplify algorithms for traditional RGB and depth cameras. Example interactions include (1) distinguishing surface touch or target selection from hovering above the surface, (2) shape-based gestures similar to ink strokes, (3) pressure-based gestures, and (4) multi-finger surface-based gestures. (see video page on my site here)
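
One core observation behind this kind of sensing is that an actual touch leaves a brief warm fingerprint on the surface after the hand moves away, while a hover does not. The sketch below is a minimal Python/OpenCV illustration of that idea; the temperature threshold, frame format, and function names are my own assumptions, not HeatWave's published algorithms.

    import numpy as np
    import cv2

    def find_touch_residue(thermal_frame, surface_baseline, delta_c=1.5, min_area_px=20):
        """Detect warm marks left on a surface by an actual touch.

        thermal_frame    : (H, W) float32 temperatures in degrees C, captured
                           after the hand has moved away
        surface_baseline : (H, W) float32 temperatures of the untouched surface
        A hover leaves no residue, so only true contacts produce blobs here.
        Returns the centroids of touch marks in image coordinates.
        """
        residue = thermal_frame - surface_baseline
        mask = (residue > delta_c).astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

        num, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
        return [tuple(centroids[i]) for i in range(1, num)
                if stats[i, cv2.CC_STAT_AREA] >= min_area_px]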

TLC - Technology for Long-Term Care (wearable RFID-based) (Intel)

This project used wearable RFID technology (a small RFID reader embedded in a watch-like bracelet) combined with RFID tags on objects in everyday use. Applying machine learning techniques, we can infer the activities a user is doing based on the sequence of objects that the user interacts with or touches. A field study of elders was conducted over several months (in collaboration with the University of Washington, the VA, and the Seattle Housing Authority), demonstrating that this could be a viable technology for eldercare in people's own homes. This work was subsequently integrated into a joint Intel-GE product for in-home monitoring of elders and a 10,000-user trial. There is also a product trial with Intel, GE, and the Mayo Clinic to test this product.
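
As a toy illustration of the inference step, the sketch below trains a bag-of-objects Naive Bayes classifier (scikit-learn) to label an episode from which tagged objects were touched. The object names, activity labels, and model choice are mine and only show the shape of the problem, not the project's actual classifiers.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Each training example is the sequence of RFID-tagged objects touched
    # during a labeled activity episode (names are invented for illustration).
    episodes = [
        "kettle mug teabag spoon",        # making tea
        "toothbrush toothpaste faucet",   # brushing teeth
        "kettle mug coffee_jar spoon",    # making coffee
        "pill_bottle water_glass",        # taking medication
    ]
    labels = ["make_tea", "brush_teeth", "make_coffee", "take_medication"]

    # Bag-of-objects features + Naive Bayes: which objects were touched is
    # often enough to separate everyday activities, even ignoring their order.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(episodes, labels)

    print(model.predict(["mug kettle spoon teabag"]))   # -> ['make_tea']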

UbiFit - Mobile application to encourage physical activity (Intel)

This project used a pager-sized wearable sensor system and/or cell phone to detect the physical movement patterns of users throughout the day. Using machine learning, we can determine the activities users are doing (e.g., walking, jogging, bike riding, sitting). We used this technology to build the first-ever cell phone interface to encourage physical activity, using a graphical interface in which a garden blooms as the user is physically active.
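
A minimal sketch of this style of activity inference is to compute simple statistics over fixed accelerometer windows and feed them to an off-the-shelf classifier. The Python/scikit-learn example below is my own illustration (the sampling rate, features, and placeholder random training data are assumptions), not the project's actual recognizer.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(accel_xyz, fs=32, window_s=4):
        """Split a 3-axis accelerometer trace into fixed windows and compute
        simple features (per-axis mean and std, plus mean magnitude).

        accel_xyz : (num_samples, 3) array of x/y/z acceleration
        Returns a (num_windows, 7) feature matrix.
        """
        win = fs * window_s
        num_windows = len(accel_xyz) // win
        feats = []
        for i in range(num_windows):
            w = accel_xyz[i * win:(i + 1) * win]
            mag = np.linalg.norm(w, axis=1)
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean()]]))
        return np.array(feats)

    # X_train / y_train would come from labeled recordings of walking, jogging,
    # cycling, and sitting; random placeholders stand in for them here.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 7))
    y_train = rng.choice(["walking", "jogging", "cycling", "sitting"], size=200)

    clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
    print(clf.predict(window_features(rng.normal(size=(256, 3)))))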

UbiGreen -  Mobile application for capturing commute info (Intel)

We broadened the above sensor platform work to detect commute methods and patterns for users in order to infer something about their "green" behaviors. Again using a graphical cell phone background-screen interface, users were encouraged toward greener behavior, earning leaves or blossoms on a tree and eventually apples.

Mobile Sensing Platform (MSP) - wearable multi-purpose sensor device (Intel)

This platform was developed at Intel Labs Seattle (with the University of Washington) and was subsequently made available to academics at no cost for research purposes. A pager-sized sensor device incorporated seven different sensors: a 3D accelerometer, microphone, barometer, humidity sensor, visible light sensor, infrared light sensor, and temperature sensor. To support experiments in location and inertial sensing, an optional daughter card provided 3D magnetometers, 3D gyros, a 3D compass, and USB host support. To allow large data sets to be collected, the MSP was equipped with a removable MiniSD card capable of storing 2 GB of data. For communication, the MSP included a Bluetooth radio capable of communicating in both the RFCOMM and PAN profiles, allowing it to connect to IP networks via Bluetooth access points and to pair with devices like smart phones and PDAs.
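
The MSP's firmware interface and record format are not documented here, but to illustrate how a host application might log its data stream, the Python example below reads line-oriented records over a Bluetooth RFCOMM serial link with pySerial. The device path, baud rate, and comma-separated field layout are hypothetical assumptions, not the MSP's real protocol.

    import serial  # pySerial

    # Assumes the MSP has been bound to an RFCOMM serial device (e.g. via
    # `rfcomm bind` on Linux) and streams one comma-separated record per line.
    PORT = "/dev/rfcomm0"
    FIELDS = ["timestamp_ms", "accel_x", "accel_y", "accel_z",
              "pressure_hpa", "humidity_pct", "light_lux", "temp_c"]

    def read_records(port=PORT, baud=115200):
        """Yield parsed sensor records from the (hypothetical) serial stream."""
        with serial.Serial(port, baud, timeout=1.0) as link:
            while True:
                line = link.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue
                values = line.split(",")
                if len(values) != len(FIELDS):
                    continue  # skip malformed or partial lines
                yield dict(zip(FIELDS, (float(v) for v in values)))

    # for rec in read_records():
    #     print(rec["timestamp_ms"], rec["accel_x"])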
