Peer-reviewed Conference Papers
[C.9] Toby Jia-Jun Li, Marissa Radensky, Justin Jia, Kirielle Singarajah, Tom M. Mitchell, and Brad A. Myers. PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST 2019). [Paper PDF][Video]
[C.8] Toby Jia-Jun Li, Igor Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Wenze Shi, Wanling Ding, Tom M. Mitchell, and Brad A. Myers. APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Programming by Demonstration Using Natural Language Instructions. Proceedings of the 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2018). [Paper PDF][Video]
[C.7] Toby Jia-Jun Li and Oriana Riva. KITE: Building conversational bots from mobile apps. Proceedings of the 16th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2018). [Paper PDF][ACM DL][Talk Video]
[C.6] Yuanchun Li, Fanglin Chen, Toby Jia-Jun Li, Yao Guo, Gang Huang, Matthew Fredrikson, Yuvraj Agarwal, and Jason I. Hong. PrivacyStreams: Enabling Transparency in Personal Data Processing for Mobile Apps. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (PACM IMWUT / UbiComp 2017). [Paper PDF][ACM DL][Website]
[C.5] Toby Jia-Jun Li, Yuanchun Li, Fanglin Chen, and Brad A. Myers. Programming IoT Devices by Demonstration Using Mobile Apps. Proceedings of the International Symposium on End User Development (IS-EUD 2017). Best Paper Award. [Paper PDF][SpringerLink]
[C.4] Toby Jia-Jun Li, Amos Azaria, and Brad A. Myers. SUGILITE: Creating Multimodal Smartphone Automation by Demonstration. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI 2017). Best Paper Honorable Mention Award. [Paper PDF][ACM DL][Video][GitHub][Google Play]
[C.3] Isaac Johnson, Yilun Lin, Toby Jia-Jun Li, Andrew Hall, Aaron Halfaker, Johannes Schöning, and Brent Hecht. Not at Home on the Range: Peer Production and the Urban/Rural Divide. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI 2016). [Paper PDF][ACM DL]
[C.2] Toby Jia-Jun Li, Shilad Sen, and Brent Hecht. Leveraging Advances in Natural Language Processing to Better Understand Tobler’s First Law of Geography. Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (SIGSPATIAL 2014). [Paper PDF][ACM DL]
[C.1] Shilad Sen, Toby Jia-Jun Li, WikiBrain Team, and Brent Hecht. WikiBrain: Democratizing Computation on Wikipedia. Proceedings of the 10th International Symposium on Open Collaboration (OpenSym / WikiSym 2014). [Paper PDF][ACM DL]
[B.2] Toby Jia-Jun Li, Igor Labutov, Brad A. Myers, Amos Azaria, Alexander I. Rudnicky, and Tom M. Mitchell. Teaching Agents When They Fail: End User Development in Goal-oriented Conversational Agents. Chapter of Studies in Conversational UX Design, Robert J. Moore, Margaret H. Szymanski, Raphael Arar, Guang-Jie Ren eds. Springer, 2018. [Springer]
[B.1] Brad A. Myers, Andrew Ko, Chris Scaffidi, Stephen Oney, YoungSeok Yoon, Kerry Chang, Mary Beth Kery, and Toby Jia-Jun Li. Making End User Development More Natural. Chapter of New Perspectives in End-User Development, Fabio Paternò and Volker Wulf, eds. Springer, 2017. [SpringerLink]
Posters and Workshop Papers
[W.6] Toby Jia-Jun Li, Marissa Radensky, Tom M. Mitchell, and Brad A. Myers. A Multi-Modal Approach to Concept Learning in Task Oriented Conversational Agents. Conversational Agents: Acting on the Wave of Research and Development – CHI 2019 Workshop. [Paper PDF]
[W.5] Marissa Radensky, Toby Jia-Jun Li, and Brad A. Myers. Poster: How End Users Express Conditionals in Programming by Demonstration for Mobile Apps. 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2018) Poster Track. [Paper PDF]
[W.4] Toby Jia-Jun Li, Igor Labutov, Xiaohan Nancy Li, Tom M. Mitchell, and Brad A. Myers. Supporting Co-adaptive Human-Agent Relationship through Programming by Demonstration using Existing GUIs. Rethinking Interaction CHI 2018 Workshop. [Paper PDF]
[W.3] Toby Jia-Jun Li. End User Mobile Task Automation using Multimodal Programming by Demonstration. IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2017) Graduate Consortium. [Paper PDF][IEEE Xplore]
[W.2] Toby Jia-Jun Li, Brad A. Myers, Amos Azaria, Igor Labutov, Alexander I. Rudnicky and Tom M. Mitchell. Designing a Conversational Interface for a Multimodal Smartphone Programming-by-Demonstration Agent. Conversational UX Design CHI 2017 Workshop. [Paper PDF]
[W.1] Toby Jia-Jun Li and Brad A. Myers. Smartphone Text Entry in Cross-Application Tasks. CHI 2016 Workshop on Inviscid Text Entry and Beyond. [Paper PDF]
Invited Talks and Presentations
[P.6] Toby Jia-Jun Li and Forough Arabshahi. Machine Learning from Human Instruction: Every Person a Programmer. Talk at J.P. Morgan. New York, NY. May 24, 2019.
[P.5] Brad A. Myers and Toby Jia-Jun Li. Teaching Intelligent Agents New Tricks: Natural Language Instructions plus Programming-by-Demonstration for Teaching Tasks. Human Computer Interaction Consortium (HCIC ‘18). Watsonville, CA. June 25, 2018.
[P.4] Toby Jia-Jun Li and Brad A. Myers. SUGILITE: Enabling InMind Agent to Learn New Tasks from User Demonstration. Talk at Oath (formerly Yahoo!). Sunnyvale, CA. May 30, 2018.
[P.3] Toby Jia-Jun Li, Josh Ford, Doug Downey, Brent Hecht, Vijay Murganoor, and Shilad Sen. Atlasify – The Geography of Everything. 3M Science and Engineering Symposium. St. Paul, MN. June 25, 2015.
[P.2] Toby Jia-Jun Li, Josh Ford, Doug Downey, Brent Hecht, Vijay Murganoor, and Shilad Sen. Atlasify – The Geography of Everything. The Social Media and Business Analytics Collaborative (SOBACO) Spring Research Symposium. Minneapolis, MN. May 14, 2015.
[P.1] Toby Jia-Jun Li and Brent Hecht. WikiBrain: Making Computer Programs Smarter with Knowledge from Wikipedia. The Social Media and Business Analytics Collaborative (SOBACO) Spring Research Symposium. Minneapolis, MN. May 6, 2014.
PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations
We present a new multimodal, domain-independent approach that combines natural language programming and programming-by-demonstration: users first describe tasks and their associated conditions naturally at a high level, then collaborate with the agent to recursively resolve any ambiguities or vagueness through conversations and demonstrations. Users can also define new procedures and concepts by demonstrating and referring to contents within the GUIs of existing mobile apps. We demonstrate this approach in PUMICE, an end-user programmable agent. Read our UIST 2019 paper on PUMICE
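The recursive resolution described above can be illustrated with a minimal sketch. Everything here is invented for illustration (the `known` dictionary, the example concepts, and the `definitions` dict that stands in for a clarifying conversation); the real PUMICE system resolves concepts interactively through dialogue and GUI demonstrations.

```python
# Hypothetical sketch of recursive concept resolution, loosely in the spirit
# of PUMICE. Names and structures here are illustrative, not from the system.

# Concepts the agent has already grounded in an app GUI (assumed example).
known = {"temperature": "weather_app.current_temp"}

def resolve(concept, definitions):
    """Ground a concept. An unknown concept triggers a clarification
    (simulated here by the definitions dict, standing in for asking the
    user), and that clarification may itself contain further unknown
    concepts, which are resolved recursively."""
    if concept in known:
        return known[concept]
    sub_concepts = definitions[concept]  # e.g., "How do I tell whether it is hot?"
    grounding = [resolve(s, definitions) for s in sub_concepts]
    known[concept] = grounding
    return grounding

# "Order iced coffee if it is hot" -> the user explains that "hot"
# depends on the (already grounded) concept "temperature".
resolve("hot", {"hot": ["temperature"]})
```

Each call either returns an existing grounding or recurses into the sub-concepts named by the clarification, so the agent only ever asks about concepts it cannot yet ground.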
Kite: Building Conversational Bots from Mobile Apps
Task-oriented chatbots allow users to carry out tasks (e.g., ordering a pizza) using natural language conversation. The widely-used slot-filling approach for building bots of this type requires significant hand-coding, which hinders scalability. Kite is a practical system for bootstrapping task-oriented bots. Kite’s key insight is that while bots encapsulate the logic of user tasks into conversational forms, existing apps encapsulate the logic of user tasks into graphical user interfaces. A developer demonstrates a task using a relevant app, and from the collected interaction traces Kite automatically derives a task model, a graph of actions and associated inputs representing possible task execution paths. A task model represents the logical backbone of a bot, on which Kite layers a question-answer interface generated using a hybrid rule-based and neural network approach. Using Kite, developers can automatically generate bot templates for many different tasks. Read our MobiSys 2018 paper on Kite
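The task model described above, a graph of actions and inputs encoding possible execution paths, can be sketched as a small data structure. This is a hypothetical illustration (the `Action` class, the pizza-ordering actions, and `execution_paths` are all invented here), not Kite's actual representation.

```python
# Illustrative sketch of a task model as a graph of actions, each with its
# associated inputs and possible follow-up actions. Not Kite's real format.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str                 # the task step, e.g., "select_size"
    inputs: list              # slots this action collects from the user
    next: list = field(default_factory=list)  # possible follow-up actions

# Hypothetical task model for a pizza-ordering bot.
pay = Action("confirm_payment", ["payment_method"])
toppings = Action("choose_toppings", ["toppings"], [pay])
size = Action("select_size", ["size"], [toppings, pay])  # toppings are optional

def execution_paths(action, prefix=()):
    """Enumerate every possible task execution path through the graph."""
    prefix = prefix + (action.name,)
    if not action.next:
        return [prefix]
    paths = []
    for nxt in action.next:
        paths.extend(execution_paths(nxt, prefix))
    return paths
```

Enumerating the paths from `size` yields both the path through `choose_toppings` and the shortcut straight to payment, which is the kind of branching a generated bot's question-answer interface would have to cover.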
SUGILITE – Programming by Demonstration for Mobile Intelligent Personal Assistants
SUGILITE is a new multi-modal, interactive programming by demonstration (PBD) system that enables end users to add new capabilities to an intelligent assistant by creating automation scripts for tasks in any existing third-party Android mobile app, using a combination of demonstrations and verbal instructions. SUGILITE leverages state-of-the-art machine learning and natural language processing techniques to comprehend the user's verbal instructions, which supply information missing from the demonstration, such as implicit conditions, user intents, and personal preferences. The user's demonstrations on the GUI are used for grounding the conversation and reinforcing the natural language understanding model. The follow-up system EPIDOSITE extends SUGILITE to support programming for smart home devices. Read our CHI ’17 Paper about SUGILITE / Watch a SUGILITE demo video / Check out our GitHub Repository / Try out SUGILITE at Google Play
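The combination described above, GUI demonstrations plus conditions supplied verbally, can be pictured with a toy script representation. All of the names below (the step dictionaries, the `run` function, the coffee example) are hypothetical and only illustrate the idea, not SUGILITE's actual script format.

```python
# Illustrative sketch: a demonstrated script is a sequence of GUI
# operations; a verbal instruction can attach a condition to a step.
# This is an invented representation, not SUGILITE's real one.
script = [
    {"op": "click", "target": "order_coffee_button"},
    {"op": "set_text", "target": "size_field", "value": "large"},
    {"op": "click", "target": "checkout_button",
     # supplied verbally, e.g. "only check out if it costs less than $5"
     "condition": lambda ctx: ctx.get("price", 0) < 5},
]

def run(script, ctx):
    """Replay the demonstrated steps, skipping any step whose
    verbally-supplied condition is false in the current context."""
    executed = []
    for step in script:
        cond = step.get("condition")
        if cond is None or cond(ctx):
            executed.append((step["op"], step["target"]))
    return executed
```

Replaying the same demonstration under different contexts (say, a price of $4 versus $6) then exercises or skips the conditional checkout step, which is the behavior the verbal instruction adds on top of the raw demonstration.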
Atlasify – Spatialization, Visualization, and Spatial Information Retrieval
Atlasify is a novel information retrieval and interactive visualization system supporting exploratory search. As the Lead Student Researcher and Head Developer, I designed and implemented the system, enabling it to dynamically compute semantic relatedness for any given keyword and render an interactive map instantly. Since its beta release in June 2015, Atlasify has acquired thousands of active users and been featured in Wired, Phys.org, and the ACM Newsletter. Building on Atlasify, we are now conducting a variety of user behavior studies and investigating the HCI aspects of spatialization and spatial information retrieval system design. Read more about Atlasify or Try out the beta version of Atlasify