A Probabilistic Look at MDS

Headed to NYC in July! I'll be presenting my paper on a probabilistic variant of multi-dimensional scaling (MDS) at the 2016 International Joint Conference on Artificial Intelligence (IJCAI)! The acceptance rate was below 25%, so it's certainly satisfying that the paper was accepted.

You can read the pre-print here and slides here.

About the work: we take a fresh Bayesian view of MDS, an old dimensionality reduction method, and find connections to popular machine learning methods such as probabilistic matrix factorization and word embeddings. The probabilistic viewpoint allows us to connect distance/similarity matching to non-parametric learning methods such as sparse Gaussian processes (GPs), and we derive a novel method called the Variational Bayesian MDS Gaussian Process (VBMDS-GP) [yes, a mouthful!]. As concrete examples, we apply it to multi-sensor localization and, perhaps more interestingly, political unfolding.
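For readers unfamiliar with the baseline: given a matrix of pairwise distances, classical (metric) MDS double-centers the squared distances and takes the top eigenvectors of the resulting Gram matrix. Here is a minimal NumPy sketch of that deterministic method (not the paper's probabilistic model, just the classical method it reinterprets; the function name is mine):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points given a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigendecomposition (ascending order)
    idx = np.argsort(w)[::-1][:dim]       # top-dim eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# toy check: recover 2-d points (up to rotation/reflection) from distances
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, dim=2)
D_rec = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D_rec))  # True: pairwise distances are preserved
```

The embedding is only determined up to rotation and reflection, which is why the check compares distances rather than coordinates.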

Political Unfolding

In the unfolding task, we projected political candidates onto a 2-d plane using their associated Wikipedia articles and a ~15,000-respondent voter preference survey conducted in 2004 for other candidates. The projection is not perfect since we use very simple bag-of-words (BoW) features (I think Sanders is more liberal than the map implies), but it is nevertheless coherent. We see our favorite political candidate, Donald Drumpf, projected to the conservative section and President Obama projected near the Clintons.

The model can be extended in lots of different ways; I’m working on using more recent variational inference techniques, plus maybe some “deep” extensions.


Puzzle of the day from Workable

Workable published a short data-science (probabilistic) puzzle at http://buff.ly/1Rip3b0:

Suppose we have 4 coins, each with a different probability of throwing Heads. An unseen hand chooses a coin at random and flips it 50 times. This experiment is done several times with the resulting sequences as shown below (H = Heads, T = Tails). Write a program that will take as input the data that is collected from this experiment and estimate the probability of heads for each coin.

Well, I thought I would spend a few minutes on it, and those few minutes turned into more than 30. My solution is simple maximum likelihood estimation (MLE), i.e., minimizing the negative log-likelihood of the data given the model parameters:

-\log L(\theta) = -\sum_{k=1}^{N} \log \sum_{i=1}^{4} \mathrm{Bin}(d_k; 50, \theta_i)\, p(c_{i,k})

where d_k is the number of observed heads in the k-th sequence of 50 flips, p(c_{i,k}) is the probability that coin i was selected for sequence k (forming a categorical distribution over the 4 coins for each sequence), and \theta_i is the probability of heads for coin i.
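To make the objective concrete, here is a minimal Python sketch of the same MLE on synthetic data (my actual solution is in Julia). It simplifies the model by assuming a uniform prior p(c_{i,k}) = 1/4 over coins, and optimizes in logit space to keep each \theta_i in (0, 1); the function names and toy data are illustrative, not from my code:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln, logsumexp

def neg_log_likelihood(logits, heads, n=50):
    # negative log-likelihood of a 4-component binomial mixture;
    # a uniform prior p(c) = 1/4 over coins is assumed for simplicity
    theta = np.clip(expit(logits), 1e-12, 1 - 1e-12)  # keep theta in (0, 1)
    d = heads[:, None]                                # head counts, shape (N, 1)
    log_binom = (gammaln(n + 1) - gammaln(d + 1) - gammaln(n - d + 1)
                 + d * np.log(theta) + (n - d) * np.log(1.0 - theta))
    return -np.sum(logsumexp(log_binom + np.log(0.25), axis=1))

# toy data: head counts from sequences generated by hidden coins
rng = np.random.default_rng(0)
true_theta = np.array([0.3, 0.3, 0.8, 0.45])
coins = rng.integers(0, 4, size=200)
heads = rng.binomial(50, true_theta[coins])

# an asymmetric initialization breaks the symmetry between components;
# starting all thetas at the same value would leave them identical forever
res = minimize(neg_log_likelihood, x0=np.linspace(-1.0, 1.0, 4),
               args=(heads,), method="BFGS")
est = np.sort(expit(res.x))
print(np.round(est, 3))  # sorted estimates of the four heads-probabilities
```

With enough sequences the sorted estimates land near the true values, although components are only identifiable up to permutation.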

Since I’m all about trying Julia nowadays, that’s what I coded it up in (hosted on GitHub). The first-cut solution (coinsoln_old.jl) found the following MLE estimates (negLL: 278.2343):

Maximum Likelihood Estimates: [0.428, 0.307, 0.762, 0.817]

The first solution didn’t use any speed-up tricks or derivatives, so it should be easy to follow, but it is neither terribly accurate nor efficient. I then tried automatic differentiation, which required only minor code changes and sped up the computation significantly. This updated, faster solution (coinsoln.jl) found a different result using conjugate gradients (negLL: -7.31325):

Maximum Likelihood Estimates: [0.283, 0.283, 0.813, 0.458]

Oh, and if I made any stupid errors, please let me know.

Learning Assistance by Demonstration

It’s been a while since my last post. Excuse: thesis write-up. Update: Thesis submitted!

In other news, our recent work on Learning Assistance by Demonstration was accepted to this year’s IROS! It’ll be a fun and interesting conference in Tokyo, Japan! You can find a preprint here.

Abstract: Crafting a proper assistance policy is a difficult endeavour but essential for the development of robotic assistants. Indeed, assistance is a complex issue that depends not only on the task at hand, but also on the state of the user, the environment, and competing objectives. As a way forward, this paper proposes learning the task of assistance through observation; an approach we term Learning Assistance by Demonstration (LAD). Our methodology is a subclass of Learning-by-Demonstration (LbD), yet directly addresses difficult issues associated with proper assistance such as when and how to appropriately assist. To learn assistive policies, we develop a probabilistic model that explicitly captures these elements and provide efficient, online training methods. Experimental results on smart mobility assistance — using both simulation and a real-world smart wheelchair platform — demonstrate the effectiveness of our approach; the LAD model quickly learns when to assist (achieving an AUC score of 0.95 after only one demonstration) and improves with additional examples. Results show that this translates into better task performance; our LAD-enabled smart wheelchair improved participant driving performance (measured in lap seconds) by 20.6s (a speedup of 137%), after a single teacher demonstration.

Download Draft PDF

Reading Data on YARP with Python

YARP is another robot development platform, similar to ROS. I had to code up a simple data reader in Python (operating over YARP ports) and couldn’t find any good examples. After some experimenting, I found a solution that worked for me. The following is a simple code snippet for other YARP Python newbies:

import yarp

class ExampleReader:
    def __init__(self):
        #initialize the YARP network (required before using ports)
        yarp.Network.init()

        #create a new input port and open it with a name
        self.in_port = yarp.BufferedPortBottle()
        self.in_port.open("/example/data:i")

        #connect the remote output port to our input port
        yarp.Network.connect("/example/data:o", "/example/data:i")

    def getData(self):
        #in this example, I assume the data is a single integer
        #we use read() where the parameter determines if it is
        #blocking (True) or not
        btl = self.in_port.read(True)

        my_data = btl.get(0).asInt()

        #if you have doubles, you can use asDouble()
        #or strings can be obtained using asString()

        return my_data

ARTY IROS Workshop Paper

Just got news that our paper on the ARTY smart paediatric wheelchair was accepted to the IROS 2012 Workshop on Progress, Challenges and Future Perspectives in Navigation and Manipulation Assistance for Robotic Wheelchairs.

Abstract: Standard powered wheelchairs are still heavily dependent on the cognitive capabilities of users. Unfortunately, this excludes disabled users who lack the required problem-solving and spatial skills, particularly young children. For these children to be denied powered mobility is a crucial set-back; exploration is important for their cognitive, emotional and psychosocial development. In this paper, we present a safer paediatric wheelchair: the Assistive Robot Transport for Youngsters (ARTY). The fundamental goal of this research is to provide a key enabling technology to young children who would otherwise be unable to navigate independently in their environment. In addition to the technical details of our smart wheelchair, we present user trials with able-bodied individuals as well as one 5-year-old child with special needs. ARTY promises to provide young children with “early access” to the path towards mobility independence.

Download Preprint

More information about ARTY (with video).