Welcome! My name is Qijia Shao, and I am a fifth-year Ph.D. student in the Computer Science Department at Columbia University. I am a member of the Mobile X Lab, advised by Professor Xia Zhou.
I received my master’s degree from Dartmouth College and my bachelor’s degree with highest honors from Yingcai Honors College at the University of Electronic Science and Technology of China (UESTC). I was an exchange student in my junior year in the EECS Department of National Chiao Tung University, and I spent my senior year as a research assistant at Missouri S&T, advised by Professor Y. Rosa Zheng.
I am passionate about solving exciting and impactful real-world challenges. My research focuses on turning everyday objects into sensors that capture physical and physiological signals around humans and robots.
I work with various signal modalities on both the software and hardware sides. I have designed and prototyped practical systems leveraging the latest technical advances (e.g., multimodal deep sensing, Mixed Reality/AR/VR, humanoid robots) for human motion teaching (soft flex/pressure sensors and camera @UbiComp’21), human activity/behavior monitoring (computational fabrics @UbiComp’19; EMG and impedance sensors @UbiComp’21), cross-medium localization (laser light @MobiSys’22), and interaction (conductive threads @CHI’20). Feel free to contact me if you are interested in similar topics!
- [09/2022] Finished my internship at Snap Research and submitted a paper.
- [06/2022] I graduated from Dartmouth with a master’s degree (surprisingly) and am moving to Columbia with Xia. I will miss Hanover!
- [05/2022] Started my research internship at Snap Research, working on reducing motion-to-photon latency to enable various cool interactive systems. Stay tuned!
- [03/2022] Our paper “Sunflower: Locating Underwater Robots From the Air” has been conditionally accepted to MobiSys 2022. It is the first system to wirelessly localize underwater robots from the air without additional infrastructure. Laser light is our secret for cross-medium sensing. Please check out our demo video here!
- [09/2021] We are presenting both ASLTeach and FaceSense at UbiComp 2021!
- [07/2021] Our COVID-motivated paper “FaceSense: Sensing Face Touch with an Ear-worn System” was accepted with minor revisions by IMWUT (UbiComp 2021). It is the result of a more-than-year-long collaboration across four universities. Cheers to the team for their hard work during the pandemic! Please check out our paper for more details.
- [06/2021] Started my research internship at Signify (Philips Lighting), focusing on deep learning and sensor data fusion!
- [11/2020] Gave a guest lecture on a next-generation mobile platform, computational fabrics, in CS 69/169 at Dartmouth.
- [10/2020] Our paper “Teaching American Sign Language in Mixed Reality” was accepted by IMWUT (UbiComp 2021). It was a great collaboration with researchers from the cognitive science and education departments at Dartmouth and sign language experts from Gallaudet University. This is our first work on teaching human motion at population scale without coaches. Check out the presentation for more details!