I wore Google Glass for nine months. I sought to understand what my undergraduate research advisor, Thad Starner, and his colleagues experienced in the 1990s at the MIT Media Lab. They extolled the symbiotic relationship they developed with their head-worn display computers: how it made them more effective learners and even helped them build stronger interpersonal relationships. During those nine months, I lived with purpose-built artificial intelligence (AI) software on my head, and I got a sense of what those early cyborgs from the 1990s felt. I too felt connected to my digital footprint in an empowering, beneficial, and symbiotic way.
My passion for human-computer interaction (HCI) research stems from such experiences with wearable computers and augmented reality (AR). Now, as AI continues to grow in impact and scale, I want to bring AI closer to the human experience, where I believe it can have an unambiguously positive impact on our lives. I aim to create virtual characters and artificially intelligent agents that live with humans, learn how to work with humans, and enable us to achieve our highest ambitions. Through my research, I aim to take the world one step closer to blurring the lines between the physical and digital worlds.
SpaceX: An Introduction to HCI in Practice
My interest in HCI started during my internship at SpaceX, where I gravitated toward my team's UX research and wearables software work. I shadowed our team's UX researcher, Virgil, as we biked across our sprawling urban factory campus to interview the end users of our internal products, and I carefully observed his process and communication style as he presented mockups of new user interfaces.
I also helped him run time studies to better understand how workers interacted with our internal enterprise resource planning and manufacturing execution software. I noticed that technicians strained their necks looking up at a monitor and down at their printed circuit board (PCB) assemblies, and then had to pull out a keyboard to enter data; I knew there was a better way to perform this task with head-worn displays. From there, I became interested in how AR could improve the workflows of factory and warehouse workers.
Georgia Tech: HCI Research in AR and Wearables
At Georgia Tech, Professor Starner took an interest in my AR ideas, and we worked together for the next 18 months on research involving head-worn displays (HWDs). I worked closely with my two co-authors (one undergraduate and one Master's student) to design a scientific study, build software, and run a full 12-subject user study evaluating the effectiveness of HWDs and a wearable RFID verification system for warehouse order fulfillment.
We had four conditions; the most tiring required holding a paper pick list specifying which items to select from a rack. When participants wore Glass and our RFID bracelets, I could see the joy in their faces as they seamlessly interacted with the pick-list data presented in their field of view, with no extra gestures needed to verify their picks. We found that the HWD/RFID system significantly outperformed the other industry-standard methods we tested in speed, accuracy, and user comfort.
We published our work at the ACM International Symposium on Wearable Computers (ISWC 2018). Our first author presented it in Singapore, where we received the Best Paper award for conducting a carefully controlled study comparing new methods against current practice in a practical domain.
With Professor Starner, I also worked on an order fulfillment study for sparsely populated warehouses, using a library as our testbed. I developed a Python algorithm for computing optimal pick paths for workers. Drawing on concepts from my Game AI and Combinatorics courses, I created a data structure that discretizes the library space and represents it in an easy-to-modify JSON file.
The algorithm induced a subgraph on the 10 target books and applied a Traveling Salesperson Problem (TSP) solver to find an optimal visiting order among them. From there, we reconstructed the traversal path back into the full library space (with 3D bookshelves), short-cutting path segments wherever line of sight allowed; this reconstruction was the most involved part of the work. Our algorithm is open source on GitHub, and I also open-sourced a modular TSP solver, published as gt_tsp on PyPI and installable via pip. These pick paths became the base datasets for developing AR guidance software for Magic Leap, HoloLens, and Google Glass headsets.
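To make the reduce-then-reconstruct idea concrete, here is a minimal sketch assuming an unweighted graph over the discretized floor. This is not the published GitHub or gt_tsp code: the `pick_path` function, the networkx grid graph, and the brute-force TSP enumeration are illustrative assumptions, and the line-of-sight short-cutting step is omitted.

```python
import itertools
import networkx as nx

def pick_path(space, start, targets):
    """Illustrative pick-path pipeline: reduce, solve TSP, reconstruct."""
    nodes = [start] + list(targets)

    # Step 1: induce a complete subgraph on the target nodes by computing
    # pairwise shortest paths in the discretized floor graph.
    dist, seg = {}, {}
    for u, v in itertools.combinations(nodes, 2):
        p = nx.shortest_path(space, u, v)  # BFS path (unweighted edges)
        dist[u, v] = dist[v, u] = len(p) - 1
        seg[u, v] = p
        seg[v, u] = list(reversed(p))

    # Step 2: solve TSP on the induced subgraph. Exhaustive enumeration is
    # exact and workable for small pick lists; a Held-Karp or heuristic
    # solver would replace this loop for larger ones.
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(targets):
        order = (start,) + perm
        cost = sum(dist[a, b] for a, b in zip(order, order[1:]))
        if cost < best_cost:
            best, best_cost = order, cost

    # Step 3: reconstruct the tour in the full space by stitching together
    # the stored shortest-path segments. (Line-of-sight short-cutting of
    # segments, the involved part described above, is omitted here.)
    route = [start]
    for a, b in zip(best, best[1:]):
        route.extend(seg[a, b][1:])
    return route

# Hypothetical 6x6 grid standing in for the discretized library floor.
floor = nx.grid_2d_graph(6, 6)
print(pick_path(floor, start=(0, 0), targets=[(5, 5), (0, 4), (3, 2)]))
```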
My final major research project with Professor Starner involved a literature review, software development, and IRB approval for a dual-task user study on notification perception. We studied how aware users were of notifications presented on smartwatches versus HWDs while occupied by a cognitively demanding primary task: searching for a number in a grid. I piloted the study with two subjects before graduating.
Stanford: Expanded Understanding of AR
I have been working in Professor Sean Follmer's SHAPE Lab. His perspective on making computing more tangible through physical affordances provided by mechanical systems has significantly expanded my own definition of AR. My work with his postdoctoral researcher, Dan Drew, currently centers on a literature review of sound-source localization methods for multi-robot systems.
At Stanford, I have been an active participant in research-adjacent activities, including HCI seminars, invited talks, and a birds-of-a-feather group on user modeling in HCI led by a PhD student. I have also taken Follmer's course on creating Smart Products, where I gained hands-on experience with prototype-stage electrical engineering (EE) and low-resolution mechanical engineering (ME) design, skills I continue to develop deliberately.
A More Tangible Human Connection through Haptics & Wearables
I have been working on the founding team of a Stanford-affiliated early-stage consumer wearables startup, Tangible Teleportation Company. Our mission is to eliminate the physical–emotional gap between in-person experiences and video calls for distance-separated couples and families. We use immersive haptics in a neck-pillow form factor to communicate emotion and social touch over distance.
I have applied empathy-first design principles from the classroom to interviewing potential users, developing our Android and iOS mobile applications as Head of Software, and testing our business hypotheses. With my growing ME and EE knowledge, I have also contributed meaningfully to hardware design decisions. My background in HCI research has shaped our user-centered design, our appetite for novelty, and the scientific lens we apply to business decisions.
Ph.D. & Research Objectives
Pursuing my Master’s degree at Stanford has broadened my appreciation for computer science and deepened the fulfillment I find in HCI research. The moments of joy experienced by users of my projects make the effort worthwhile and sustain my motivation to continue this work.
I can see myself continuing to work with Professor Sean Follmer on making computing more physically engaging and personal. With Professor Allison Okamura and the CHARM Lab, I see an opportunity to research remote social interaction through haptics alongside PhD students such as Cara Nunez. As a graphic artist and photographer, I have also taken a strong interest in Pat Hanrahan’s work in computer graphics, as well as Doug James’s and Karen Liu’s work in animation and simulation.
I am excited by the possibility of collaborating with Stanford’s AI and NLP researchers. I believe my interests in HCI, AI, and computer graphics converge into a compelling research direction as I aim to take the world one step closer to blurring the boundaries between the physical, digital, and artificially intelligent worlds from a human-first design perspective.