Kite: Building Conversational Bots from Mobile Apps
Task-oriented chatbots allow users to carry out tasks (e.g., ordering a pizza) using natural language conversation. The widely-used slot-filling approach for building bots of this type requires significant hand-coding, which hinders scalability. Kite is a practical system for bootstrapping task-oriented bots. Kite’s key insight is that while bots encapsulate the logic of user tasks into conversational forms, existing apps encapsulate the logic of user tasks into graphical user interfaces. A developer demonstrates a task using a relevant app, and from the collected interaction traces Kite automatically derives a task model, a graph of actions and associated inputs representing possible task execution paths. A task model represents the logical backbone of a bot, on which Kite layers a question-answer interface generated using a hybrid rule-based and neural network approach. Using Kite, developers can automatically generate bot templates for many different tasks. Read our MobiSys 2018 paper on Kite
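For illustration only, the kind of task model described above — a graph of actions, each with associated input slots, whose paths correspond to possible task executions — might be sketched as follows. All names here are hypothetical and do not reflect Kite’s actual internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """An app action (e.g., a screen or operation) with its input slots."""
    name: str
    inputs: list = field(default_factory=list)  # slots to fill via dialogue

class TaskModel:
    """A graph of actions; each path from start to end is one way to do the task."""
    def __init__(self):
        self.actions = {}   # name -> Action
        self.edges = {}     # name -> list of successor action names

    def add_action(self, action):
        self.actions[action.name] = action
        self.edges.setdefault(action.name, [])

    def add_transition(self, src, dst):
        self.edges[src].append(dst)

    def paths(self, start, end, path=None):
        """Enumerate possible task execution paths (sequences of action names)."""
        path = (path or []) + [start]
        if start == end:
            return [path]
        return [p for nxt in self.edges[start] for p in self.paths(nxt, end, path)]

# A toy pizza-ordering task with two alternative execution paths.
model = TaskModel()
for a in [Action("open_menu"), Action("choose_size", ["size"]),
          Action("choose_toppings", ["toppings"]), Action("checkout", ["address"])]:
    model.add_action(a)
model.add_transition("open_menu", "choose_size")
model.add_transition("open_menu", "choose_toppings")
model.add_transition("choose_size", "choose_toppings")
model.add_transition("choose_toppings", "checkout")
print(model.paths("open_menu", "checkout"))
```

A bot built on such a model would walk one of these paths, asking a question for each unfilled input slot it encounters along the way.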
SUGILITE – Programming by Demonstration for Mobile Intelligent Personal Assistants
SUGILITE is a new multi-modal, interactive programming-by-demonstration (PBD) system that enables end users to add new capabilities to an intelligent assistant by creating automation scripts for tasks in any existing third-party Android mobile app, using a combination of demonstrations and verbal instructions. SUGILITE leverages state-of-the-art machine learning and natural language processing techniques to comprehend the user’s verbal instructions, which supply information missing from the demonstration, such as implicit conditions, user intents and personal preferences. The user’s demonstrations on the GUI, in turn, ground the conversation and reinforce the natural language understanding model. The system points the way toward allowing the general public to use their smartphones, IoT devices and intelligent assistants more effectively, increasing the adoption, efficiency and correctness of use of these technologies. The follow-up system EPIDOSITE extends SUGILITE to support programming for smart home devices. Read our CHI ’17 Paper about SUGILITE / Watch a SUGILITE demo video / Check out our GitHub Repository / Try out SUGILITE at Google Play
Atlasify – Spatialization, Visualization and Spatial Information Retrieval
Atlasify is a novel information retrieval / interactive visualization system supporting exploratory search. As the Lead Student Researcher and Head Developer, I re-implemented the system with Leaflet on the front end and WikiBrain on the back end, enabling the system to dynamically compute the semantic relatedness of any given keywords and render the interactive map instantly. Since its beta release in June 2015, Atlasify has attracted thousands of active users and been featured in Wired, Phys.org and the ACM Newsletter. Building on Atlasify, we are now conducting a variety of user behavior studies and investigating the HCI aspects of spatialization and spatial information retrieval system design. Read more about Atlasify or Try out the beta version of Atlasify
WikiBrain – A Java Library for Wikipedia-based Algorithms
WikiBrain is a Java library/framework written by Shilad Sen, Toby J. Li, Brent Hecht and a group of undergraduate students at Macalester College. It democratizes access to a range of Wikipedia-based algorithms and technologies, enabling anyone with basic Java programming skills to use state-of-the-art semantic relatedness algorithms, page view data analysis and spatial queries in a few lines of code, and to easily analyze terabytes of Wikipedia data. Since its debut at OpenSym/WikiSym ’14, WikiBrain has been used by many in both academia and industry. Learn more about WikiBrain / WikiBrain homepage
Peer-reviewed Conference Papers
[C.8] Toby Jia-Jun Li, Igor Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Wenze Shi, Wanling Ding, Tom M. Mitchell and Brad A. Myers. 2018. APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Programming by Demonstration Using Natural Language Instructions. Proceedings of the 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2018). [Paper PDF][Video]
[C.7] Toby Jia-Jun Li and Oriana Riva. 2018. KITE: Building conversational bots from mobile apps. Proceedings of the 16th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2018). [Paper PDF][ACM DL][Talk Video]
[C.6] Yuanchun Li, Fanglin Chen, Toby Jia-Jun Li, Yao Guo, Gang Huang, Matthew Fredrikson, Yuvraj Agarwal and Jason I. Hong. 2017. PrivacyStreams: Enabling Transparency in Personal Data Processing for Mobile Apps. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (PACM IMWUT / UbiComp 2017). [Paper PDF][ACM DL][Website]
[C.5] Toby Jia-Jun Li, Yuanchun Li, Fanglin Chen and Brad A. Myers. 2017. Programming IoT Devices by Demonstration Using Mobile Apps. Proceedings of the International Symposium on End User Development (IS-EUD 2017). Best Paper Award. [Paper PDF][SpringerLink]
[C.4] Toby Jia-Jun Li, Amos Azaria and Brad A. Myers. 2017. SUGILITE: Creating Multimodal Smartphone Automation by Demonstration. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI 2017). Best Paper Honorable Mention Award. [Paper PDF][ACM DL][Video][GitHub][Google Play]
[C.3] Isaac Johnson, Yilun Lin, Toby Jia-Jun Li, Andrew Hall, Aaron Halfaker, Johannes Schöning and Brent Hecht. 2016. Not at Home on the Range: Peer Production and the Urban/Rural Divide. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI 2016). [Paper PDF][ACM DL]
[C.2] Toby Jia-Jun Li, Shilad Sen and Brent Hecht. 2014. Leveraging Advances in Natural Language Processing to Better Understand Tobler’s First Law of Geography. Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (SIGSPATIAL 2014). [Paper PDF][ACM DL]
[C.1] Shilad Sen, Toby Jia-Jun Li, WikiBrain Team and Brent Hecht. 2014. WikiBrain: Democratizing Computation on Wikipedia. Proceedings of the 10th International Symposium on Open Collaboration (OpenSym / WikiSym 2014). [Paper PDF][ACM DL]
Book Chapters
[B.2] Toby Jia-Jun Li, Igor Labutov, Brad A. Myers, Amos Azaria, Alexander I. Rudnicky and Tom M. Mitchell. Teaching Agents When They Fail: End User Development in Goal-oriented Conversational Agents. Chapter of Studies in Conversational UX Design, Robert J. Moore, Margaret H. Szymanski, Raphael Arar and Guang-Jie Ren, eds. Springer, 2018. [Springer]
[B.1] Brad A. Myers, Andrew Ko, Chris Scaffidi, Stephen Oney, YoungSeok Yoon, Kerry Chang, Mary Beth Kery and Toby Jia-Jun Li. Making End User Development More Natural. Chapter of New Perspectives in End-User Development, Fabio Paternò and Volker Wulf, eds. Springer, 2017. [SpringerLink]
Posters and Workshop Papers
[W.5] Marissa Radensky, Toby Jia-Jun Li, and Brad A. Myers. 2018. Poster: How End Users Express Conditionals in Programming by Demonstration for Mobile Apps. 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2018). Lisbon, Portugal. October 2, 2018. [Paper PDF]
[W.4] Toby Jia-Jun Li, Igor Labutov, Xiaohan Nancy Li, Tom M. Mitchell and Brad A. Myers. 2018. Supporting Co-adaptive Human-Agent Relationship through Programming by Demonstration using Existing GUIs. Rethinking Interaction CHI 2018 Workshop. [Paper PDF]
[W.3] Toby Jia-Jun Li. 2017. End User Mobile Task Automation using Multimodal Programming by Demonstration. IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2017) Graduate Consortium. [Paper PDF][IEEE Xplore]
[W.2] Toby Jia-Jun Li, Brad A. Myers, Amos Azaria, Igor Labutov, Alexander I. Rudnicky and Tom M. Mitchell. 2017. Designing a Conversational Interface for a Multimodal Smartphone Programming-by-Demonstration Agent. Conversational UX Design CHI 2017 Workshop. [Paper PDF]
[W.1] Toby Jia-Jun Li and Brad A. Myers. 2016. Smartphone Text Entry in Cross-Application Tasks. CHI 2016 Workshop on Inviscid Text Entry and Beyond. [Paper PDF]
Invited Talks and Presentations
[P.5] Brad A. Myers and Toby Jia-Jun Li. 2018. Teaching Intelligent Agents New Tricks: Natural Language Instructions plus Programming-by-Demonstration for Teaching Tasks. Human Computer Interaction Consortium (HCIC ’18). Watsonville, CA. June 25, 2018.
[P.4] Toby Jia-Jun Li and Brad A. Myers. 2018. SUGILITE: Enabling InMind Agent to Learn New Tasks from User Demonstration. Talk at Oath (formerly Yahoo!). Sunnyvale, CA. May 30, 2018.
[P.3] Toby Jia-Jun Li, Josh Ford, Doug Downey, Brent Hecht, Vijay Murganoor and Shilad Sen. 2015. Atlasify – The Geography of Everything. 3M Science and Engineering Symposium. St. Paul, MN. June 25, 2015.
[P.2] Toby Jia-Jun Li, Josh Ford, Doug Downey, Brent Hecht, Vijay Murganoor and Shilad Sen. 2015. Atlasify – The Geography of Everything. The Social Media and Business Analytics Collaborative (SOBACO) Spring Research Symposium. Minneapolis, MN. May 14, 2015.
[P.1] Toby Jia-Jun Li and Brent Hecht. 2014. WikiBrain: Making Computer Programs Smarter with Knowledge from Wikipedia. The Social Media and Business Analytics Collaborative (SOBACO) Spring Research Symposium. Minneapolis, MN. May 6, 2014.