The following are open projects available to PhD students. If you'd like to know more, email me at harold@comp.nus.edu.sg or come by my office at COM2-03-03.

Joint Artificial-Intelligence and Human Decision-Making

How can we better integrate human and AI-based decision-making? Consider a large team of humans and intelligent machines collaborating on a joint decision-making task: what frameworks and methods are needed to coordinate their activities and to ensure the efficient distribution of accurate information? Can such systems scale to support nationwide activities, for example, as an alternative to standard political referendum procedures? This project tackles these issues to invent new forms of collaborative decision-making that supersede current practice. We will advance the state of the art in online machine learning and decision-theoretic methods, along with interaction patterns that support natural communication between humans and machines.
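To give a flavor of the online-learning machinery involved, here is a minimal sketch of one classical baseline for fusing recommendations from a mixed pool of human and machine "experts": weighted-majority voting with the multiplicative-weights (Hedge) update. The class, parameters, and numbers are purely illustrative, not the project's method; a real system would also need richer decision spaces, communication costs, and strategic behavior.

```python
import numpy as np

class HedgeAggregator:
    """Fuse binary votes from human and AI 'experts' via the classical
    multiplicative-weights (Hedge) update: experts who vote with the
    eventual outcome keep their influence, those who err are discounted."""

    def __init__(self, n_experts, eta=0.5):
        self.weights = np.ones(n_experts)  # start with uniform trust
        self.eta = eta                     # learning rate

    def decide(self, votes):
        votes = np.asarray(votes)          # one 0/1 recommendation per expert
        score = self.weights @ votes / self.weights.sum()
        return int(score >= 0.5)           # weighted-majority decision

    def update(self, votes, outcome):
        # Penalize each expert in proportion to its error on this round.
        losses = (np.asarray(votes) != outcome).astype(float)
        self.weights *= np.exp(-self.eta * losses)

# Example: three humans and two models vote on a yes/no decision.
agg = HedgeAggregator(n_experts=5)
decision = agg.decide([1, 1, 0, 1, 0])   # -> 1
agg.update([1, 1, 0, 1, 0], outcome=1)   # the two dissenters lose weight
```

The appeal of this family of algorithms is the regret guarantee: the aggregate performs nearly as well as the best single expert in hindsight, which is one starting point for thinking about how such systems might scale.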

Bridging Human Teaching and Robot Learning

In the near future, robots will pervade our social spaces: there will be robots in our workplaces, malls, and homes. Although many of us in Computer Science are comfortable programming robots, most laypeople are not similarly skilled. Can we enable people to teach robots much as we teach one another? Standard Learning by Demonstration (LbD) approaches use machine learning methods to learn from human demonstrations in the form of trajectories. We will break from this trend and consider rich training scenarios where (i) the human is not just a labeller/demonstrator and (ii) the machine is not just a passive observer. We will craft new machine learning methods that enable the robot (i) to process information beyond labels (e.g., reasons why actions are correct or wrong), and (ii) to interact with the human much as a student poses questions to a teacher.
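As a minimal, hypothetical sketch of this "student who asks questions" idea, here is plain behavioral cloning combined with uncertainty-based querying: the robot imitates the teacher's state-to-action mapping and actively asks about the states it is least sure of, instead of passively consuming trajectories. The toy policy and all parameters are stand-ins, not the methods we will develop.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def teacher_policy(state):
    # Hypothetical stand-in for a human teacher answering a query.
    return int(state.sum() < 0)   # a trivial two-action policy

states = rng.uniform(-1, 1, size=(200, 2))            # candidate world states
labeled = list(rng.choice(200, size=10, replace=False))
actions = {i: teacher_policy(states[i]) for i in labeled}
model = RandomForestClassifier(n_estimators=50, random_state=0)

for _ in range(10):
    # Clone the teacher from the demonstrations gathered so far.
    model.fit(states[labeled], [actions[i] for i in labeled])
    # Ask about the state where the learned policy is least confident.
    confidence = model.predict_proba(states).max(axis=1)
    unlabeled = [i for i in range(len(states)) if i not in labeled]
    query = min(unlabeled, key=lambda i: confidence[i])
    actions[query] = teacher_policy(states[query])    # "pose the question"
    labeled.append(query)
```

Note that the project goes further than this: the answers we want to exploit are not just action labels but the reasons behind them.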

Interpretable and Controllable Deep Generative Models

A key problem with today's deep networks is that they are not interpretable: analyzing the weights and activations gives little insight into what the model is doing. This significantly limits the applicability of deep networks: would you trust a medical diagnosis model that cannot tell you why it thinks you have cancer? In this project, we will advance current deep models by making them more amenable to human interpretation. For example, we will craft new variants of deep generative models that learn disentangled and interpretable feature spaces.
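One well-known starting point is the beta-VAE objective (Higgins et al., 2017): the standard variational autoencoder loss with the KL term up-weighted by a factor beta > 1, which pressures the latent dimensions toward independent, more interpretable factors. A minimal PyTorch sketch of that loss follows; it is a baseline illustration, not the new variants this project will design.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I)).
    x and x_recon lie in [0, 1] with shape (batch, dim); mu and logvar
    are the encoder's Gaussian parameters with shape (batch, latent_dim)."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

Setting beta = 1 recovers the ordinary VAE; larger values trade reconstruction fidelity for disentanglement, which is exactly the kind of interpretability/performance trade-off we aim to improve on.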

Collaborative AI with Cognitive Models

How we interact and work with one another can be greatly influenced by our internal states. Consider the last few times you collaborated with another person: how much of your behavior was influenced by whether you were focused or distracted, energized or tired, full or hungry? As we increasingly collaborate with artificially intelligent agents, a key question is whether AI-based decision-making and learning can be improved if machines understood their human collaborators better. This project involves proposing new human cognitive models and integrating them into our machine learning systems. For example, we will study how to probabilistically estimate latent (hidden) human properties such as trust, fatigue, prejudice, and cognitive overload. We will then "close the loop" by using these cognitive models to shape the agent's actions, in an effort to improve collaborative task outcomes.
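As a toy illustration of the estimation step, the sketch below runs a Bayesian (HMM-style) filter over a binary latent "trust" state, updated from whether the human accepts the agent's recommendations. Every probability here is a made-up placeholder; in the project, such models would be learned from data and would cover richer states than trust alone.

```python
import numpy as np

P_TRANS = np.array([[0.9, 0.1],    # P(trust_t | trust_{t-1}); rows = previous
                    [0.2, 0.8]])   # state: 0 = low trust, 1 = high trust
P_ACCEPT = np.array([0.3, 0.9])    # P(human accepts suggestion | trust state)

def filter_trust(belief, accepted):
    """One forward-algorithm step: predict the state, then weigh by evidence."""
    predicted = belief @ P_TRANS
    likelihood = P_ACCEPT if accepted else 1.0 - P_ACCEPT
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])                 # start uninformed
for accepted in [True, True, False, True]:    # observed interaction history
    belief = filter_trust(belief, accepted)
print(belief)   # posterior over {low trust, high trust}
```

Closing the loop then means conditioning the agent's action selection on this belief, e.g., explaining more or handing over control when estimated trust is low.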
