Projects


University of Michigan, Ann Arbor
In this project, we investigate the impact of added subclass information on the performance of traditional machine learning classifiers. I apply my expertise in interactive system building and crowdsourcing to develop an efficient labeling tool that gathers superclass/subclass information for the images in the dataset from the crowd, while evaluating crowd workers' performance and fine-tuning our system accordingly. Looking ahead, we envision delivering a web-based interactive image labeling system that features real-time multi-worker collaboration on the front end and on-the-fly machine learning classification and re-sampling on the back end.
In this project, we seek to develop a figure and diagram editing application for iPad that fulfills the needs of physically impaired users. Because our users have significantly reduced fine motor skills, all functionality, including object creation, navigation, and scaling, can be accessed with a single gesture: tapping. I lead the design and development of the application, and we aim to make it a member of the assistive technology family.
This project aims to extract physical driving conditions and parameters for 3D reconstruction from real, unconstrained vehicle crash videos on public video sharing websites, with a human in the loop. I developed a reconfigurable, web-based crash scene annotation UI that enables crowd workers to efficiently and effectively provide measurement values for objects in a crash scene. I also built a reusable annotation server backend that recruits crowd workers for real-time tasks, collects their responses, and visualizes the obtained data samples, as sketched below.
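The backend's internals are not detailed here, but one common crowdsourcing pattern a system like this could use is redundant measurement with robust aggregation: ask several workers for the same value and keep a summary statistic that resists careless answers. A minimal, hypothetical Python sketch (the function name and data layout are illustrative, not taken from the project):

```python
import statistics

# Hypothetical sketch: map each annotated object to the median of
# its redundant worker estimates, so one careless answer cannot
# skew the aggregated measurement.
def aggregate_measurements(responses: dict[str, list[float]]) -> dict[str, float]:
    """Return a robust per-object summary of crowd measurements."""
    return {obj: statistics.median(vals) for obj, vals in responses.items() if vals}

# Example: three workers estimate a sedan's length in meters.
print(aggregate_measurements({"sedan_length_m": [4.6, 4.8, 9.0]}))
# {'sedan_length_m': 4.8} -- the median resists the outlier 9.0
```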
In this project, we study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. We propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN), a novel DNN-based framework for HOI detection. At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes. We validate the effectiveness of our HO-RCNN using HICO-DET. Experiments demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches.
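As a rough illustration of the Interaction Pattern idea, here is a minimal sketch assuming the pattern is a two-channel binary mask: each channel marks one box, rasterized inside the tightest window enclosing the human-object pair and resampled to a fixed grid (exact details such as resolution follow the paper, not this sketch):

```python
import numpy as np

def interaction_pattern(human_box, object_box, size=64):
    """Rasterize a two-channel binary Interaction Pattern.

    Boxes are (x1, y1, x2, y2) in image coordinates. Channel 0
    marks the human box, channel 1 the object box, both drawn
    inside the tightest window enclosing the pair and mapped
    onto a fixed size x size grid.
    """
    # Tightest window enclosing both boxes.
    x1 = min(human_box[0], object_box[0])
    y1 = min(human_box[1], object_box[1])
    x2 = max(human_box[2], object_box[2])
    y2 = max(human_box[3], object_box[3])
    w, h = max(x2 - x1, 1), max(y2 - y1, 1)

    pattern = np.zeros((2, size, size), dtype=np.float32)
    for ch, (bx1, by1, bx2, by2) in enumerate([human_box, object_box]):
        # Map box corners into the fixed-size grid.
        gx1 = int(round((bx1 - x1) / w * size))
        gy1 = int(round((by1 - y1) / h * size))
        gx2 = int(round((bx2 - x1) / w * size))
        gy2 = int(round((by2 - y1) / h * size))
        pattern[ch, gy1:gy2, gx1:gx2] = 1.0
    return pattern
```

Because the pattern is resampled to a fixed resolution, a convolutional stream fed with it can reason about the relative placement of the two boxes independently of image scale.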
Publication:

Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, Jia Deng. Learning to Detect Human-Object Interactions. (arXiv.org)

University of Michigan - Shanghai Jiao Tong University Joint Institute
In this project, we created a web app and corresponding customer services that facilitate traveling in China. Tourists can use our web app to input their travel preferences and instantly get matched with the most suitable professional local guide. Evaluation systems are also in place to guarantee the quality of our customer service and travelers' experience.
Ever imagined playing a guitar with "laser" strings? You've just got yourself a chance! In this project, I replaced ordinary guitar strings with laser beams and used an electronic stereo system as the resonator. I designed and implemented the entire electronic control system so that it faithfully simulates an acoustic guitar.
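For a flavor of how such a control system can work, here is a hypothetical Python sketch of the sensing loop: one photodiode per laser "string," and interrupting a beam triggers the matching note (read_photodiode and play_note are placeholders for the actual hardware and audio interfaces, which are not described here):

```python
import time

# Hypothetical sketch of the control loop: blocking a beam
# ("plucking" a laser string) plays the corresponding note.

STRINGS = ["E2", "A2", "D3", "G3", "B3", "E4"]  # standard tuning

def read_photodiode(channel: int) -> bool:
    """Placeholder: True while the beam reaches the sensor."""
    raise NotImplementedError("wire up to the actual ADC/GPIO")

def play_note(note: str) -> None:
    """Placeholder: drive the stereo resonator with the note."""
    raise NotImplementedError("wire up to the audio output")

def control_loop() -> None:
    was_lit = [True] * len(STRINGS)
    while True:
        for ch, note in enumerate(STRINGS):
            lit = read_photodiode(ch)
            # Falling edge = the player just blocked this beam.
            if was_lit[ch] and not lit:
                play_note(note)
            was_lit[ch] = lit
        time.sleep(0.001)  # ~1 kHz polling keeps latency low
```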


