Workable published a short data-science (probabilistic) puzzle at http://buff.ly/1Rip3b0:
Suppose we have 4 coins, each with a different probability of throwing Heads. An unseen hand chooses a coin at random and flips it 50 times. This experiment is done several times, with the resulting sequences as shown below (H = Heads, T = Tails). Write a program that will take as input the data that is collected from this experiment and estimate the probability of heads for each coin.
Well, I thought I would spend a few minutes on it, but a few minutes turned into more than 30. My solution is simply maximum likelihood estimation (MLE), i.e., minimizing the negative log likelihood of the data given the model parameters:
$$-\log \mathcal{L}(\pi, \theta) = -\sum_{i=1}^{N} \log \sum_{k=1}^{4} \pi_k \, \theta_k^{h_i} (1 - \theta_k)^{50 - h_i}$$
where $h_i$ is the number of observed heads in the $i$-th sequence of 50, $z_i$ is the coin selected ($z_i$ follows a categorical distribution with weights $\pi$ over the 4 coins for each sequence), and $\theta_k$ is the probability of heads for coin $k$.
Since I’m all about trying Julia nowadays, that’s what I coded it up in (hosted on GitHub). The first-cut solution (coinsoln_old.jl) found the following MLE estimates (negLL: 278.2343):
Maximum Likelihood Estimates: [0.428, 0.307, 0.762, 0.817]
The first solution didn’t use any speed-up tricks or derivatives, so it should be easy to follow, but it is neither terribly accurate nor efficient. I then tried out automatic differentiation, which required only minor code changes and sped up the computation significantly. This updated, faster solution (coinsoln.jl) found a different result using conjugate gradients (negLL: -7.31325):
Maximum Likelihood Estimates: [0.283, 0.283, 0.813, 0.458]
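If you’d like to replay the idea without the Julia code, here’s a minimal Python sketch of the same objective. It assumes the observations have already been read into a list of H/T strings called sequences (a made-up name for illustration) and reparameterises the mixture weights via a softmax and the coin biases via a sigmoid so the optimiser can work unconstrained:

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_lik(params, heads, n=50, k=4):
    # first k entries are mixture-weight logits, last k are coin-bias logits
    log_pi = params[:k] - logsumexp(params[:k])  # log-softmax
    theta = 1.0 / (1.0 + np.exp(-params[k:]))    # sigmoid
    # log p(h_i) = logsumexp_k [log pi_k + h_i log theta_k + (n - h_i) log(1 - theta_k)]
    return -sum(logsumexp(log_pi + h * np.log(theta) + (n - h) * np.log1p(-theta))
                for h in heads)

# 'sequences' is the hypothetical list of observed H/T strings
heads = np.array([s.count("H") for s in sequences])
res = minimize(neg_log_lik, np.zeros(8), args=(heads,), method="CG")
print(1.0 / (1.0 + np.exp(-res.x[4:])))  # estimated P(heads) for each coin

Note that the mixture is only identifiable up to a permutation of the coins, so the order of the four estimates is arbitrary.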
Oh, and if I made any stupid errors, please let me know.
If you’re ever getting “no such instruction” errors using Theano, e.g.,
no such instruction: `vmovd %rax, %xmm0'
try inserting the following lines into ~/.theanorc (create the file if it doesn't exist):
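[gcc]
# keeps gcc from emitting AVX instructions that an older assembler
# can't handle; -march=corei7 is the commonly suggested value, but
# adjust it to suit your CPU
cxxflags = -march=corei7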
More info on StackOverflow.
It’s been a while since my last post. Excuse: thesis write-up. Update: Thesis submitted!
In other news, our recent work on Learning Assistance by Demonstration was accepted at this year's IROS! It'll be a fun and interesting conference in Tokyo, Japan! You can find a preprint here.
Abstract: Crafting a proper assistance policy is a difficult endeavour but essential for the development of robotic assistants. Indeed, assistance is a complex issue that depends not only on the task at hand, but also on the state of the user, environment and competing objectives. As a way forward, this paper proposes learning the task of assistance through observation, an approach we term Learning Assistance by Demonstration (LAD). Our methodology is a subclass of Learning-by-Demonstration (LbD), yet directly addresses difficult issues associated with proper assistance such as when and how to appropriately assist. To learn assistive policies, we develop a probabilistic model that explicitly captures these elements and provide efficient, online training methods. Experimental results on smart mobility assistance — using both simulation and a real-world smart wheelchair platform — demonstrate the effectiveness of our approach; the LAD model quickly learns when to assist (achieving an AUC score of 0.95 after only one demonstration) and improves with additional examples. Results show that this translates into better task-performance; our LAD-enabled smart wheelchair improved participant driving performance (measured in lap seconds) by 20.6s (a speedup of 137%), after a single teacher demonstration.
Download Draft PDF
YARP is another robot development platform, similar to ROS. I had to code up a simple data reader in Python (operating over YARP ports) and couldn’t find any good examples. After some experimenting, I found a solution that worked for me. The following is a simple code snippet for other YARP Python newbies:
import yarp

# initialise the YARP network
yarp.Network.init()

# create a new input port and open it ('/reader' is a placeholder name)
in_port = yarp.BufferedPortBottle()
in_port.open("/reader")

# connect up the output port to our input port
yarp.Network.connect("/writer", "/reader")

# in this example, I assume the data is a single integer;
# we use read(), where the parameter determines whether it blocks (True) or not
btl = in_port.read(True)
my_data = btl.get(0).asInt()

# if you have doubles, you can use asDouble();
# strings can be obtained using asString()
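To try the snippet without writing a sender, the stock yarp write utility can stand in for the output port: running it with the port name used in the connect call above (e.g., yarp write /writer) and typing integers at its prompt should make them appear in my_data.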
Just got news that our paper on the ARTY smart paediatric wheelchair was accepted to the IROS 2012 Workshop on Progress, Challenges and Future Perspectives in Navigation and Manipulation Assistance for Robotic Wheelchairs.
Abstract: Standard powered wheelchairs are still heavily dependent on the cognitive capabilities of users. Unfortunately, this excludes disabled users who lack the required problem-solving and spatial skills, particularly young children. For these children to be denied powered mobility is a crucial setback; exploration is important for their cognitive, emotional and psychosocial development. In this paper, we present a safer paediatric wheelchair: the Assistive Robot Transport for Youngsters (ARTY). The fundamental goal of this research is to provide a key enabling technology to young children who would otherwise be unable to navigate independently in their environment. In addition to the technical details of our smart wheelchair, we present user-trials with able-bodied individuals as well as one 5-year-old child with special needs. ARTY promises to provide young children with “early access” to the path towards mobility independence.
More information about ARTY (with video).
Just submitted an IROS camera-ready copy of some recent work on online spatio-temporal learning:
In this work, we are primarily concerned with robotic systems that learn online and continuously from multi-variate data-streams. Our first contribution is a new recursive kernel, which we have integrated into a sparse Gaussian Process to yield the Spatio-Temporal Online Recursive Kernel Gaussian Process (STORK-GP). This algorithm iteratively learns from time-series, providing both predictions and uncertainty estimates. Experiments on benchmarks demonstrate that our method achieves high accuracies relative to state-of-the-art methods. Second, we contribute an online tactile classifier which uses an array of STORK-GP experts. In contrast to existing work, our classifier is capable of learning new objects as they are presented, improving itself over time. We show that our approach yields results comparable to highly-optimised offline classification methods. Moreover, we conducted experiments with human subjects in a similar online setting with true-label feedback and present the insights gained.
This work was nominated as a finalist for the 2012 CoTeSys Cognitive Robotics Best Paper Award.
Paper accepted at IJCNN 2012; I've never been to Brisbane!
Summary: In this work, we contribute the online echo state Gaussian process (OESGP), a novel online Bayesian method that is capable of iteratively learning complex temporal dynamics and producing predictive distributions (instead of point predictions). Our method can be seen as a combination of the echo state network with a sparse approximation of Gaussian processes (GPs). Extensive experiments on the one-step prediction task on well-known benchmark problems show that OESGP produced statistically superior results to current online ESNs and state-of-the-art regression methods. In addition, we characterise the benefits (and drawbacks) associated with the considered online methods, specifically with regard to the trade-off between computational cost and accuracy. For a high-dimensional action recognition task, we demonstrate that OESGP produces high accuracies comparable to a recently published graphical model, while being fast enough for real-time interactive scenarios.