Sequential Support: 3D Printing Dissolvable Support Material for Time-Dependent Mechanisms
In this paper, we propose a different perspective on the use of support material: rather than printing support structures for overhangs, our idea is to make use of its transient nature, i.e. the fact that it can be dissolved when placed in a solvent, such as water. This enables a range of new use cases, such as quickly dissolving and replacing parts of a prototype during design iteration, printing temporary assembly labels directly on the object that leave no marks when dissolved, and creating time-dependent mechanisms, such as fading in parts of an image in a shadow art piece or releasing relaxing scents from a 3D printed structure sequentially overnight. Since we use regular support material (PVA), our approach works on consumer 3D printers without any modifications.
To facilitate the design of objects that leverage dissolvable support, we built a custom 3D editor plugin that includes a simulation showing how support material dissolves over time. In our evaluation, our simulation predicted geometries that are statistically similar to the example shapes within 10% error across all samples.
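The idea of simulating dissolution over time can be illustrated with a toy surface-erosion model on a voxel grid: each time step removes every support voxel exposed to solvent on at least one face, so the material dissolves inward from exposed surfaces. This is a deliberately simplified sketch, not the plugin's actual simulation.

```python
import numpy as np

def dissolve(support, steps):
    """Toy surface-erosion model of dissolving support material.
    `support` is a boolean occupancy grid; each step removes every
    voxel that touches solvent (empty space) on at least one face."""
    grid = support.astype(bool).copy()
    for _ in range(steps):
        padded = np.pad(grid, 1, constant_values=False)  # solvent outside
        keep = np.ones_like(grid)
        for axis in range(grid.ndim):
            for shift in (0, 2):
                # Face-neighbor along this axis (shift 0 = -1, shift 2 = +1).
                idx = tuple(slice(shift, s + shift) if a == axis
                            else slice(1, s + 1)
                            for a, s in enumerate(grid.shape))
                keep &= padded[idx]  # survive only if this neighbor is solid
        grid &= keep
    return grid
```

Running the model on a 3x3 solid block, one step leaves only the center voxel; a second step dissolves it entirely.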
Martin Nisser, Junyi Zhu, Tianye Chen, Katarina Bulovic, Parinya Punpongsanon, and Stefanie Mueller.
In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '19). ACM.
CurveBoards: Integrating Breadboards into Physical Objects to Prototype Function
in the Context of Form
CurveBoards are breadboards integrated into physical objects. In contrast to traditional breadboards, CurveBoards better preserve the object’s look and feel while maintaining high circuit fluidity, which enables designers to exchange and reposition components during design iteration.
Since CurveBoards are fully functional, i.e., the screens are displaying content and the buttons take user input, designers can test interactive scenarios and log interaction data on the physical prototype while still being able to make changes to the component layout and circuit design as needed.
We present an interactive editor that enables users to convert 3D models into CurveBoards and discuss our fabrication technique for making CurveBoard prototypes. We also provide a technical evaluation of CurveBoard’s conductivity and durability and summarize informal user feedback.
Junyi Zhu, Lotta-Gili Blumberg, Yunyi Zhu, Martin Nisser, Ethan Levi Carlson, Xin Wen, Kevin Shum, Jessica Ayeley Quaye, and Stefanie Mueller.
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM.
Seismo: Blood Pressure Monitoring using Built-in Smartphone Accelerometer and Camera
MOBILE HEALTH / BLOOD PRESSURE / PTT
Seismo, a non-invasive blood pressure monitoring system utilizing the smartphone’s camera and accelerometer. Providing an easy way to measure blood pressure via a smartphone can deliver data that may help uncover trends with finer granularity than is currently possible using conventional blood pressure monitors.
Abstract: Although cost-effective at-home blood pressure monitors are available, a complementary mobile solution can ease the burden of measuring BP at critical points throughout the day. In this work, we developed and evaluated a smartphone-based BP monitoring application called Seismo. The technique relies on measuring the time between the opening of the aortic valve and the pulse later reaching a peripheral arterial site. It uses the smartphone's accelerometer to measure the vibration caused by the heart valve movements and the smartphone's camera to measure the pulse at the fingertip. The system was evaluated in a nine-participant longitudinal intervention study in which each participant completed four sessions of stationary biking at multiple intensities. The Pearson correlation coefficient of the blood pressure estimation across participants is between 0.20 and 0.77 (mean=0.55, std=0.19), with an RMSE between 3.3 and 9.2 mmHg (mean=5.2, std=2.0).
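The timing technique above (pulse transit time, PTT) can be sketched in a few lines: pair each valve-opening event from the accelerometer with the next pulse arrival seen by the camera, then map the delay to a BP estimate with a per-user calibration. The calibration coefficients below are hypothetical placeholders, not Seismo's model.

```python
import numpy as np

def pulse_transit_time(scg_peaks, ppg_peaks):
    """Pair each heart-valve opening (accelerometer/SCG peak, seconds)
    with the next fingertip pulse arrival (camera/PPG peak) and
    return the delays."""
    ptts = []
    for t_scg in scg_peaks:
        later = ppg_peaks[ppg_peaks > t_scg]
        if later.size:
            ptts.append(later[0] - t_scg)
    return np.array(ptts)

def estimate_bp(ptt_s, a=-150.0, b=160.0):
    """Hypothetical per-user linear calibration: BP falls as PTT rises."""
    return a * ptt_s + b
```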
Seismo: Blood Pressure Monitoring using Built-in Smartphone Accelerometer and Camera.
Conference on Human Factors in Computing Systems (CHI), 2018
BiliCam: Using Mobile Phones to Monitor Newborn Jaundice
SENOSIS HEALTH / MOBILE HEALTH / JAUNDICE MONITOR
BiliCam, a smartphone-based, non-invasive medical application that uses the on-device camera to monitor jaundice in newborns by taking pictures of the newborn's skin with a color calibration card. Initial assessments of BiliCam are very promising, showing results comparable to the gold standard, a blood test. The original studies were conducted using a single smartphone model, but the cameras of different smartphone models have unique responses to different wavelengths of light.
Therefore, my focus has been on expanding existing machine learning approaches for use on other phone models. I have also been working to improve the user interface and to develop processes that assess user habits, so we can build more intuitive interfaces. I am also actively working on a debugging mode for our application, so that detailed OpenCV and algorithm analysis data can be viewed during calibration to verify sample quality and train the machine learning algorithm more efficiently. It is this systematic and comprehensive engineering effort that is moving the company toward making BiliCam the first mobile health application to receive FDA approval. Working at Senosis combines the best of industry and academic research, and it has helped me recognize how critical industry knowledge can be for research, and vice versa.
The company was acquired by Google in June 2017.
Parking System for Capitol Hill Community
The project uses sensors to detect the number of cars going into and out of a garage. This allows for real-time updates of the number of spots available in a garage and assists Capitol Hill studies of shared parking.
Our project uses RFID tags as sensors to count the number of cars leaving and entering an enclosed parking garage. The tags are triggered when metal objects (cars) pass by. Using Sllurp to pull data from the reader, we track and record the number of cars entering and leaving a garage, along with the number of available parking spots.
The benefit of this project for District Shared Parking is that it allows the district to accurately track the number of cars in a garage throughout the day, enabling short-term studies of shared parking. With this data, users could see whether parking spots are available and how many; the district could also study which times of day have low or high garage availability. In turn, shared parking between parking garages becomes possible.
In the second phase of this project, the WISP camera takes photos of the parking area. Using machine learning, our program determines whether there is a car in each of the available parking positions. A photo is also uploaded to the website (http://18.104.22.168) and app, so users can view the parking area themselves and find exactly which area of a garage has open spaces. In both phases, the wireless, battery-free design is immensely advantageous for Capitol Hill parking.
The project has been filed for a patent under the guidance of Prof. Joshua Smith.
Our design places RFID tags under speed bumps. Two speed bumps are installed at the garage entrance, approximately 1.5 meters apart. All cars entering or leaving the garage pass over the bumps and therefore over the tags. Because the bumps also force cars to slow down, the tags have adequate time to send a signal to the antennae. Based on which set of RFID tags is triggered first, from bumper 1 or bumper 2, the trigger order tells us whether a car is entering or leaving the garage. The stand holding the antenna is placed a few meters from the bumper and is attached to the reader, which picks up any triggered signals from the tags.
The software side of the design uses Sllurp to pull data from the reader. As a car moves over the speed bumps and over the tags, various RFID tags are cut off; the objective is to read the data from these tags. From the start, the tags are read continuously. A useful tagLastSeen field in one of the libraries helped us write the code efficiently: once a tag is seen, a tag callback function fires. Stemming from that callback, the program also creates a log file that records all data over the program's entire run time. The architectural diagram is shown in the figure at the left of this section.
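The trigger-order logic and logging described above can be sketched as follows. This is a simplified model, not our actual Sllurp integration: the tag IDs are hypothetical, and `on_tag_seen` stands in for the real reader callback.

```python
import logging
from collections import deque

logging.basicConfig(filename="tag_events.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Hypothetical tag IDs for the two bumps; the real tag EPCs differ.
BUMP1_TAGS = {"tag-a1", "tag-a2"}   # outer bump (street side)
BUMP2_TAGS = {"tag-b1", "tag-b2"}   # inner bump (garage side)

class GarageCounter:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cars = 0
        self.recent = deque(maxlen=2)   # last two distinct bumps triggered

    def on_tag_seen(self, tag_id):
        """Stand-in for the reader's tag callback."""
        bump = 1 if tag_id in BUMP1_TAGS else 2 if tag_id in BUMP2_TAGS else None
        if bump is None or (self.recent and self.recent[-1] == bump):
            return  # unknown tag, or a repeated report from the same bump
        self.recent.append(bump)
        if list(self.recent) == [1, 2]:      # outer bump first: entering
            self.cars += 1
            self.recent.clear()
        elif list(self.recent) == [2, 1]:    # inner bump first: leaving
            self.cars -= 1
            self.recent.clear()
        logging.info("tag=%s cars=%d free=%d",
                     tag_id, self.cars, self.capacity - self.cars)
```

The bump-order comparison is what distinguishes an entry from an exit; the available-spot count is simply capacity minus the running car count.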
Phase 2 of the design strategy is the long-term study, in which we implemented the WISP. Also powered by RFID readers, and with the advantage of being wireless and battery-free, the WISP was a great choice for monitoring parking spaces. It is placed relatively high on a wall so that it gets a wide view of the parking spots in the garage. The optimal angle captures multiple parking spots, so that if one car parks, the other spaces remain in the WISP camera's line of sight.
We also want to show users an updated, recent picture of the garage that they can reference when parking. This picture would be similar to the photo on the right of the figure at the bottom of this section, but it is first run through MATLAB, where the histeq filter is applied. histeq performs histogram equalization, transforming the image's intensities so that the output histogram is approximately flat. It is a global transformation that produces an overall smoother, clearer image. The image is uploaded to our website/app so that users have a photo to reference in real time.
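For illustration, the same global histogram equalization that MATLAB's histeq performs by default can be written with NumPy alone (a sketch, not the MATLAB pipeline we ran):

```python
import numpy as np

def equalize_hist(gray):
    """Histogram equalization on an 8-bit grayscale image: remap
    intensities through the normalized cumulative histogram (CDF)
    so the output histogram is approximately uniform, boosting
    contrast in dim garage photos."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]
```

On a two-level test image the transform stretches the used intensities to the full 0-255 range, which is exactly the contrast boost we rely on for the WISP photos.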
The architectural diagram is shown in the figure at the right of this section. An example of an optimal angle, taken during a test visit to the Capitol Hill garage, is shown beside it. The picture on the right shows the quality of a WISP camera photo.
To make the system more intuitive, we wanted to build something that lets people easily search for parking spaces. The best form is an app, where drivers can quickly check for available spots. A website (http://22.214.171.124) was also built so that users can read details about the project and view the map in greater clarity before heading out on a drive.
HemaApp: Noninvasive Blood Screening of Hemoglobin Using Smartphone Cameras
UBICOMP / MOBILE HEALTH / NONINVASIVE BLOOD TEST
HemaApp, a smartphone application that noninvasively monitors blood hemoglobin concentration using the smartphone's camera and various lighting sources. Hemoglobin measurement is a standard clinical tool commonly used for screening anemia and assessing a patient's response to iron supplement treatments. Given a light source shining through a patient's finger, we perform a chromatic analysis, analyzing the color of their blood to estimate hemoglobin level. We evaluate HemaApp on 31 patients ranging from 6 to 77 years of age, yielding a 0.82 rank order correlation with the gold standard blood test. In screening for anemia, HemaApp achieves a sensitivity and precision of 85.7% and 76.5%. Both the regression and classification performance compare favorably with our control, an FDA-approved noninvasive hemoglobin measurement device. We also evaluate and discuss the effect of using different kinds of lighting sources.
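A minimal sketch of the signal-extraction side of such a chromatic analysis: average each color channel over every camera frame to get per-channel pulse waveforms, then form simple AC/DC ratio features. The ratio feature is a common building block for absorbance-based estimates, not HemaApp's exact feature set.

```python
import numpy as np

def channel_series(frames):
    """Average each color channel over every frame. With the light
    shining through the fingertip, these per-frame means form the
    pulsatile waveforms that the chromatic analysis works on."""
    return np.array([frame.reshape(-1, 3).mean(axis=0) for frame in frames])

def chromatic_features(series):
    """Hypothetical feature: ratio of pulsatile (AC) amplitude to
    baseline (DC) level per channel."""
    ac = series.max(axis=0) - series.min(axis=0)
    dc = series.mean(axis=0)
    return ac / dc
```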
I was hired after the initial publication to help the team improve the data quality of the current system and its deployment. My technical contribution involves low-level, direct access to components such as the LEDs and the camera on Android phones. By changing the camera's scanning pattern, I increased the scanning speed to a higher frame rate, and I achieved synchronous recording from both the front and rear cameras. I am also actively working on using other optical sensors on the phone (the proximity sensor and laser displacement sensor) to replace or augment the camera data; this is considered the 'next generation' of HemaApp and will lead to follow-up publications once finished.
Drone Hacking & Overcontrol
The aim of this project is to build a motion-detecting drone. To accomplish this goal, we divided the project into two parts: the first is hacking and controlling the drone, and the second is implementing computer vision with OpenCV.
At the beginning, we assumed the drone would use standard SPP or HID for control. After some research, we found that it actually uses Bluetooth 4.0 with the Bluetooth GATT service, and that all commands and communication between the drone and the phone are encoded on top of this service. We therefore started hacking the board via an Android phone running Android 4.4 or later, which offers a Bluetooth HCI snoop log mode (enabled under Developer options in Settings) that lets users monitor all traffic transmitted over Bluetooth. A sample of the command data we captured is shown above. Over 2000 commands were transmitted over Bluetooth within 5 seconds, so I used Wireshark to filter out the useless data, analyzed all the potential commands, and wrote our own app for testing. After a long period of hacking and testing, we isolated all the basic movement commands (takeoff/landing plus the four directions) and finally took control of the drone with our own app.
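Once the movement commands were isolated, our test app only had to replay the corresponding GATT writes. The sketch below shows how such command frames might be assembled; the command bytes and the framing (header plus XOR checksum) are hypothetical placeholders, not the values actually sniffed from the drone.

```python
# Hypothetical command bytes -- placeholders, not the codes captured
# in the HCI snoop log.
COMMANDS = {"takeoff": b"\x01", "land": b"\x02",
            "forward": b"\x03", "back": b"\x04",
            "left": b"\x05", "right": b"\x06"}

def build_frame(command):
    """Wrap a command byte in a (hypothetical) framed packet:
    a fixed header, the payload, and a simple XOR checksum byte."""
    payload = COMMANDS[command]
    checksum = bytes([payload[0] ^ 0xFF])
    return b"\xAA" + payload + checksum
```

On Android, each frame would then be written to the drone's control characteristic with BluetoothGatt.writeCharacteristic.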
For the object detection, the design uses OpenCV, which provides pre-trained cascade classifiers for detecting objects such as faces and eyes. This project uses C++ to detect object movement and map it to the commands previously found.
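The mapping from a detection to a drone command can be illustrated with the steering logic alone (sketched here in Python rather than the project's C++; the dead-zone threshold is an assumed parameter):

```python
def steer(face_bbox, frame_w, frame_h, dead_zone=0.15):
    """Map a detected face's horizontal position to a drone command:
    if the face center drifts past the dead zone, steer toward it,
    otherwise hold. (The bounding box itself comes from an OpenCV
    cascade classifier; vertical control would use y and h analogously.)"""
    x, y, w, h = face_bbox
    cx = (x + w / 2) / frame_w - 0.5    # -0.5 .. 0.5, 0 = centered
    if cx < -dead_zone:
        return "left"
    if cx > dead_zone:
        return "right"
    return "hover"
```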
Air Pollution Monitoring
Assisting with community air monitors for Imperial, CA. The teams in the SEAL Lab and Seto Lab deployed 20 monitors last year, and 20 new monitors will be designed this year. Data are streamed to UW servers over Wi-Fi by an Arduino board in each monitor and can be viewed through a self-designed mobile application.
I worked closely as a Research Assistant with both the UW SEAL Lab in the Electrical Engineering department and the Seto Lab in the Public Health department on this project. My part was to design the mobile application and website for data analysis and presentation, and to maintain the project's database on the UW server.
Pokémon Game via FPGA
To build an asynchronous serial network between two FPGA boards, the first step is clock control. We designed two modules to track incoming and outgoing data, then used shift registers to convert the data from parallel to serial and back. These modules were tested separately using iVerilog and GTKWave. Building on these submodules and the previous microprocessor, we developed the complete serial network system.
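The parallel/serial conversion those shift-register modules perform can be modeled behaviorally in a few lines of Python (the real modules are Verilog; the LSB-first bit order here is an assumption):

```python
def parallel_to_serial(byte, lsb_first=True):
    """Behavioral model of the transmit shift register: emit one bit
    per clock tick from an 8-bit parallel load."""
    bits = [(byte >> i) & 1 for i in range(8)]
    return bits if lsb_first else bits[::-1]

def serial_to_parallel(bits, lsb_first=True):
    """Receive side: shift the incoming bits back into a byte."""
    if not lsb_first:
        bits = bits[::-1]
    byte = 0
    for i, b in enumerate(bits):
        byte |= b << i
    return byte
```

A transmit/receive round trip must be the identity, which is exactly the property we checked when debugging against the other group's board (their shift registers used different defaults).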
After several days of debugging, communication within our own FPGA board was successful. When we then tried to communicate with another group's board, many bugs were found and resolved, for example different default and clock settings for the shift registers between the two groups. After successful communication, the two groups together designed the final project: Pokémon, a once-popular battle game. Players play using the keys on the board, and the game is displayed on the console and on a screen through a VGA port. As a player presses keys on the board, the Pokémon on screen responds accordingly.
As for the game design, the game has two players. Six Pokémon are initialized with set strength and intelligence values; each Pokémon's health, mana, and skill damage are derived from an equation based on its strength and intelligence. Each player chooses 3 Pokémon and carries 2 kinds of heals, called mango and tango: a mango restores 50 mana to the player's current Pokémon, and a tango restores 50 HP. The player selects among the commands attack/tango/mango by pressing the two keys on the board. Each attack skill has two associated values: the damage dealt to the enemy and its mana cost; each Pokémon has four different skills with different damage and cost. If a Pokémon's health drops to 0 after an attack, the next Pokémon takes its position; if the third Pokémon's health reaches 0, that player loses. A player may instead eat a mango or tango on their turn, restoring 50 mana/health to the current Pokémon, but a player who takes a heal cannot attack that turn, and each player has only two mangos and two tangos. The display shows the current health of both players, but the enemy's mana is hidden.
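The turn rules above can be sketched in Python; the stat equations and skill numbers here are hypothetical stand-ins for the ones implemented in Verilog:

```python
class Pokemon:
    def __init__(self, strength, intelligence):
        # Hypothetical stat equations; the real ones live in the Verilog.
        self.hp = 100 + 10 * strength
        self.mana = 50 + 10 * intelligence
        # Four skills: (damage, mana cost), scaling with the stats.
        self.skills = [(5 * strength + 3 * i, 10 + 5 * i) for i in range(4)]

class Player:
    def __init__(self, team):
        self.team = team          # 3 Pokémon
        self.mangos = 2           # each restores 50 mana
        self.tangos = 2           # each restores 50 hp

    @property
    def active(self):
        return self.team[0]

    def take_turn(self, command, enemy):
        """One turn: heal (forfeits the attack) or attack by skill index.
        Returns True if the enemy has no Pokémon left."""
        if command == "mango" and self.mangos:
            self.mangos -= 1
            self.active.mana += 50
        elif command == "tango" and self.tangos:
            self.tangos -= 1
            self.active.hp += 50
        elif isinstance(command, int):
            dmg, cost = self.active.skills[command]
            if self.active.mana >= cost:
                self.active.mana -= cost
                enemy.active.hp -= dmg
                if enemy.active.hp <= 0:
                    enemy.team.pop(0)       # next Pokémon steps in
        return len(enemy.team) == 0
```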
For the graphic display design, six different Pokémon and a selection menu are drawn on the screen. During the game, only the two on-stage Pokémon are shown, at opposite corners of the screen, each with its own health bar; the player's Pokémon has an additional bar showing the amount of mana left. The health and mana bars change as the game progresses. When a health bar empties, the on-screen Pokémon automatically switches to the next available one; if no Pokémon is left, the corresponding player loses the game and their Pokémon disappears. All the Pokémon were originally designed (the appearance, not the characters) and drawn pixel by pixel in Verilog.
Cross-compiled for both the Pyhtek and BeagleBone Black boards running a Linux-based system, the tank is controlled through Bluetooth by a self-written Android application and offers five different modes: Auto Pilot Mode (automatically detects the route through four ADC sensors attached at the front), Touch Control, Gravity Mode (via the phone's embedded gyroscope), Voice Control (Google Voice API), and Husky Mode (using the camera's embedded FaceDetectionListener to make the tank follow the controller like a pet).
The video above is a short demonstration of the Husky mode and Auto Pilot mode.