Research

My research explores how to democratize the creation of tangible, interactive devices by making them easier to prototype and build—often without requiring electronics, complex assembly, or machine learning calibration. By leveraging properties of air, sound, and material structure, I develop techniques that turn passive 3D-printed and laser-cut forms into interactive artifacts. This work is guided by the idea of Print-and-Play Fabrication: devices should be fabricated, not assembled. Below are the main projects I’ve worked on in this space.

This interest in accessible, fabrication-centered interaction design began during my PhD, where I developed new techniques for embedding interactivity directly into physical objects using off-the-shelf tools like 3D printers and laser cutters. Today, I continue this line of work as a postdoctoral researcher at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I focus on developing design and manufacturing systems that are inclusive of diverse users—particularly people with visual impairments. This includes building accessible CAD tools, designing intuitive interfaces, and creating end-to-end systems that integrate formal reasoning, optimization, and generative AI to support reliable, interpretable design exploration. Across both past and current work, my goal remains the same: to lower the barrier to designing and fabricating interactive, functional hardware.

Papers

The papers below present the key techniques and toolkits I have developed over the years. Although each project solves a different interaction challenge—touch sensing, pneumatic logic, shape-change, rapid circuit fabrication—they all share one goal: lowering the barrier for non-experts to create expressive, functional hardware. Together, they reflect an ongoing effort to make digital fabrication more immediate, expressive, and accessible.

AirLogic

AirLogic is a toolkit for designing interactive objects that don’t need any electronics. The system relies entirely on air pressure—routing air jets through 3D-printed structures to perform sensing, logic, and output. Each air jet represents a binary 1 (present) or 0 (absent), and we use that to mimic the core functions of a computer: input, processing, and output.
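To make the idea concrete, here is a minimal software sketch (not AirLogic's actual implementation, which is physical channel geometry): treating jet presence as a boolean lets the basic gates be modeled directly. The "exactly one" example below is a hypothetical composition, not a claim about any specific AirLogic design.

```python
# Illustrative sketch: AirLogic-style pneumatic logic modeled in software.
# An air jet is either present (True) or absent (False); in the real system
# each gate is a 3D-printed channel geometry, modeled here as a function.

def air_and(a: bool, b: bool) -> bool:
    """Output jet flows only when both input jets are present."""
    return a and b

def air_or(a: bool, b: bool) -> bool:
    """Output jet flows when either input jet is present."""
    return a or b

def air_not(a: bool) -> bool:
    """A control jet deflects the supply jet, inverting the signal."""
    return not a

# Hypothetical composition: release an output only when exactly one of
# two inputs is pressed (XOR built from the primitives above).
def exactly_one(p1: bool, p2: bool) -> bool:
    return air_and(air_or(p1, p2), air_not(air_and(p1, p2)))
```

Composing gates this way mirrors how the printed components snap together: each gate's output channel feeds the next gate's input.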

AirLogic stands out for its simplicity. It requires no specialized printers or post-processing and can be powered by something as simple as your lungs. We provide a collection of modular components (like touch sensors, switches, and actuators) that can be snapped together like Lego to prototype or finalize interactive designs.

By combining these elements, we created examples like a lung-powered flute, a game where players can “steal” or “split” rewards, and modular logic circuits embedded into models. AirLogic shows that you can go a long way by just printing the logic into the shape of the object.

AirTouch

AirTouch allows 3D-printed objects to detect where they’re being touched—without any electronics inside the object itself. By embedding internal air channels, attaching a single external pressure sensor, and applying a bit of machine learning, we can tell exactly where someone has pressed on the object’s surface.
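The intuition is that each touch location blocks a different channel, producing a distinct pressure signature at the shared sensor. As a hedged illustration, a nearest-centroid classifier over per-location feature vectors stands in below for the paper's actual ML pipeline; the features and labels are made up for the example.

```python
# Illustrative sketch of AirTouch-style touch localization.
# Each touch point yields a characteristic pressure signature; we learn
# one centroid per location and classify new readings by nearest centroid.
import math

def centroid(samples):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train(labeled_samples):
    """labeled_samples: {location_name: [feature_vector, ...]}"""
    return {loc: centroid(vecs) for loc, vecs in labeled_samples.items()}

def classify(model, features):
    """Return the location whose centroid is closest to the reading."""
    return min(model, key=lambda loc: math.dist(model[loc], features))

# Toy training data: hypothetical 2-D pressure features per touch point.
model = train({
    "tip":  [[1.0, 0.20], [1.1, 0.25]],
    "base": [[0.3, 0.90], [0.35, 0.95]],
})
```

Because the classifier depends only on the internal channel layout, a model trained once can transfer to other objects that share that layout—which is what lets a single model serve a whole family of shapes.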

The real magic here is that these objects don’t look interactive. Their outside shape remains intact, and they can even be printed in transparent materials. Better still, once a model is trained, it can be reused with other objects that share the same internal structure. You can use the same model to make a family of interactive creatures with completely different external shapes.

AirTouch enables up to 12 distinct touch points, limited mainly by printer resolution. The technique has been used in interactive charts, color selectors, and toys. It offers a simple way to make interactive objects that feel custom, but don’t require custom electronics.

Blowhole

Blowhole is a technique to add blowing-activated interactions to 3D-printed models. The idea is based on acoustic resonance—the same principle that makes bottles hum when you blow across their tops. We embed tuned cavities into objects so that blowing into specific holes produces unique, recognizable sounds.
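A common first-order model for this kind of cavity (an assumption for illustration, not necessarily the paper's exact acoustic model) is the Helmholtz resonator, whose pitch is f = (c / 2π) · √(A / (V · L)) for neck area A, cavity volume V, and neck length L:

```python
# Illustrative first-order model of a blowing-activated cavity's pitch.
import math

def helmholtz_freq(neck_area_m2, cavity_vol_m3, neck_len_m, c=343.0):
    """Helmholtz resonance frequency in Hz.
    c is the speed of sound in air (~343 m/s at 20 degrees C)."""
    return (c / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_vol_m3 * neck_len_m))

# Two cavities with the same neck but different volumes produce two
# distinguishable pitches (hypothetical dimensions):
f_small = helmholtz_freq(2e-5, 1e-6, 0.01)   # smaller cavity, higher pitch
f_large = helmholtz_freq(2e-5, 4e-6, 0.01)   # larger cavity, lower pitch
```

Tuning cavity volume per hole is what gives each blowhole its unique, recognizable sound.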

These sounds are picked up by a microphone and interpreted to trigger actions like text-to-speech, music playback, or educational quizzes. Blowhole tags require no post-processing after printing and can be powered entirely by human breath.

The system includes a Meshmixer plugin that adds the cavities directly to a model, and a sound recognition engine that classifies the pitches. It’s great for educational toys, tactile books, and other low-tech interactive experiences that are still fully digital.

ClipWidgets

ClipWidgets are modular, 3D-printed widgets that you can snap onto the edge of a smartphone case to add buttons, sliders, and dials. What makes them clever is that they don’t require any additional electronics—just the phone’s camera and flash.

Inside the case is a conical mirror that reflects the entire edge of the phone into the camera’s field of view. Each widget has printed color markers that rotate or slide as you interact with them. The camera tracks these movements in real time.
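Once the camera locates a widget's color marker in the reflected edge image, its motion maps to a control value. As a hypothetical sketch (the function, dial sweep, and coordinate convention are assumptions, not the system's API), a dial reading could be derived from the marker's angle around the dial center:

```python
# Illustrative sketch: map a tracked color marker's position around a
# dial's center to a normalized 0..1 control value. The sweep range of
# -135..+135 degrees is a hypothetical dial design.
import math

def dial_value(marker_xy, center_xy, min_deg=-135.0, max_deg=135.0):
    """Angle of the marker relative to the dial center, normalized to
    0..1 over the dial's sweep; values outside the sweep are clamped."""
    dx = marker_xy[0] - center_xy[0]
    dy = marker_xy[1] - center_xy[1]
    angle = math.degrees(math.atan2(dy, dx))
    t = (angle - min_deg) / (max_deg - min_deg)
    return max(0.0, min(1.0, t))
```

Sliders work analogously, with marker displacement along the widget's track instead of angle.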

With ClipWidgets, you can create your own game controller, DJ rig, or custom accessibility tools using only a hobbyist 3D printer and some markers. And because the widgets are modular, they can be reconfigured without needing to reprint the entire system.

Echotube

Echotube uses a simple rubber tube and an ultrasonic transducer to add press and touch sensing to everyday objects. It’s based on the principle of echolocation—measuring how sound bounces inside the tube to detect where the user is pressing.
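Pressing the tube creates a reflection, and the round-trip delay of the ultrasonic pulse reveals how far along the tube the press occurred. A minimal sketch of that time-of-flight conversion (using the speed of sound in air as an assumption; the effective in-tube speed would be calibrated in practice):

```python
# Illustrative time-of-flight conversion for Echotube-style sensing.

def press_position(echo_delay_s, speed_of_sound=343.0):
    """Distance (m) from the transducer to the pinch point.
    The pulse travels to the press and back, hence the divide by two."""
    return speed_of_sound * echo_delay_s / 2.0

# A 10 ms round trip puts the press roughly 1.7 m down the tube:
d = press_position(0.01)
```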

The system is low-cost, robust, and can even operate underwater, which makes it well-suited for harsh environments such as outdoor or submerged installations. We’ve used it for everything from interactive desks to bicycle counters.

Echotube’s strength is its modularity: one transducer can monitor several meters of tubing, and the tubes can be routed around or through objects to add interaction exactly where you need it.

MorpheesPlug

MorpheesPlug is a toolkit for making objects that can dynamically change their shape using pneumatic actuation. These include widgets that fold, spiral, bump, stretch, or compress—all powered by air.

We built a control module, a design tool, and a library of six standardized widgets that can be embedded into existing designs. This allows even novice designers to prototype dynamic, responsive interfaces—without needing deep knowledge of robotics or programming.
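To illustrate the kind of logic a pneumatic control module runs (a hypothetical sketch, not MorpheesPlug's actual firmware), a simple hysteretic bang-bang controller can hold one widget at a target pressure:

```python
# Illustrative bang-bang pressure control with a dead band, of the sort
# a pneumatic control module might run per widget. Values and valve
# names are hypothetical.

def valve_command(pressure_kpa, target_kpa, band_kpa=2.0):
    """Open the inflate valve below the dead band, the vent valve above
    it, and hold (both valves closed) inside the band to avoid chatter."""
    if pressure_kpa < target_kpa - band_kpa:
        return "inflate"
    if pressure_kpa > target_kpa + band_kpa:
        return "vent"
    return "hold"
```

Driving several such loops with different targets is enough to sequence fold, bump, or stretch motions across a set of widgets.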

Use cases include posture-correcting cushions, interactive umbrellas, and even physical bar charts that change shape to represent data. The system supports over 80% of the known shape-change patterns in the HCI literature.

Bitey

Bitey is a wearable input method that lets you perform actions by clicking your teeth. Using a bone-conduction microphone placed behind your ear and a bit of machine learning, Bitey can distinguish between different sets of teeth—front, back, left, right—and map them to different actions.
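As a hedged sketch of the detection front end (the framing, threshold, and features here are assumptions, not Bitey's published pipeline), tooth-click events can be located in the audio stream by short-time energy thresholding before a classifier assigns them to a tooth region:

```python
# Illustrative click detection: flag audio frames whose mean energy
# exceeds a threshold. A real pipeline would then classify each event
# (front/back/left/right teeth) from its spectral shape.

def detect_clicks(samples, frame=64, threshold=0.5):
    """Return start indices of high-energy frames in a sample list."""
    events = []
    for start in range(0, len(samples) - frame + 1, frame):
        window = samples[start:start + frame]
        energy = sum(x * x for x in window) / frame
        if energy > threshold:
            events.append(start)
    return events

# Toy signal: silence, one loud click, silence.
sig = [0.0] * 64 + [1.0] * 64 + [0.0] * 64
```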

This can be used to answer phone calls, control playback, or navigate interfaces, all hands-free. We found it to be accurate, discreet, and robust to ambient noise (and even eating!). While training is required per user, it’s a quick and effective alternative input method.

LaCir

LaCir introduces a novel material for laser cutting that combines structural integrity with electrical conductivity. It’s made of a conductive layer sandwiched between two plastic layers, allowing both the structure and the circuit of a device to be fabricated in a single step on a standard laser cutter.
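When traces are cut into the conductive layer, their geometry determines their resistance via the standard formula R = ρ·L / (w·t). A small sketch (the copper-like resistivity is an assumption; LaCir's actual layer material may differ):

```python
# Illustrative trace-resistance estimate for a laser-cut conductive layer.

def trace_resistance(length_m, width_m, thickness_m, resistivity=1.68e-8):
    """Resistance (ohms) of a rectangular trace: R = rho * L / (w * t).
    Default resistivity is copper's (ohm-m), used here as an assumption."""
    return resistivity * length_m / (width_m * thickness_m)

# A hypothetical 100 mm long, 2 mm wide, 0.05 mm thick trace:
r = trace_resistance(0.1, 0.002, 5e-5)
```

Narrower or longer traces raise resistance, which is the main constraint when routing power alongside structural cuts.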

Designers can cut physical shapes and electrical traces simultaneously, and even create functional joints that also conduct electricity. The system supports embedding common components like magnets and screws for additional capabilities.

LaCir dramatically simplifies the process of prototyping interactive devices by collapsing multiple fabrication stages into one—and removing the need for custom hardware or machine modifications.