## Wednesday, June 27, 2012

### Rank+IR

Menon et al. have a paper at ICML 2012 called "Predicting Accurate Probabilities with a Ranking Loss". The basic idea is to train a classifier with a ranking loss (e.g., AUC), then post-process the classifier's scores with isotonic regression to calibrate the classifier. In contrast with training a classifier using a proper scoring rule (e.g., logistic regression), this procedure explores the space of link functions non-parametrically, and the claim is that this leads to better results. Note that exploring the space of link functions non-parametrically is intuitively "safe" from a model complexity standpoint, because it is a one-dimensional procedure operating on the scores output by the underlying classifier.
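The recipe can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: I use plain logistic regression as the scorer where the paper would use a ranker trained on a pairwise/AUC-style loss, and synthetic data throughout.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5])
     + rng.normal(size=1000) > 0).astype(int)

# Step 1: fit a scorer. (Stand-in: the paper pairs the calibration
# step below with a model trained on a ranking loss instead.)
scorer = LogisticRegression().fit(X, y)
scores = scorer.decision_function(X)

# Step 2: isotonic regression learns a monotone map from scores to
# probabilities, i.e., it searches the space of link functions
# non-parametrically, one-dimensionally, on the scores alone.
calibrator = IsotonicRegression(out_of_bounds="clip").fit(scores, y)
probs = calibrator.predict(scores)
```

In practice the calibrator would be fit on held-out data rather than the training set, so the monotone map isn't tuned to overfit scores.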

It turns out we accidentally backed into this at eHarmony. When I joined, the production system delivered matches sequentially, so we started with a ranking loss. Later the production system switched to using a linear program to deliver matches, and the easiest thing to do was to add a calibration step at the end of the training pipeline: isotonic regression with linear interpolation. We wanted to switch to directly training for classification with a proper scoring rule, but by then we were subsampling the negatives, which meant we needed to keep calibrating the classifier, so the switch never happened. The whole time we suspected we were being "incoherent." Hey, it's better to be lucky than good. Now, if I find myself in a similar situation in the future, I'll be able to articulate a rationale for the approach.
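Why subsampling the negatives forces a calibration step: keeping each negative with probability $r$ shifts the class prior, so the model's raw probability estimate $p'$ relates to the true probability $p$ by a monotone distortion. A small sketch of the standard prior-correction identity (the rate and values below are illustrative, not eHarmony's):

```python
def shifted(p, r):
    """Probability observed after keeping each negative with prob. r:
    p' = p / (p + r * (1 - p))."""
    return p / (p + r * (1.0 - p))

def corrected(p_shifted, r):
    """Invert the shift: p = r * p' / (1 - (1 - r) * p')."""
    return r * p_shifted / (1.0 - (1.0 - r) * p_shifted)

p, r = 0.1, 0.25  # true probability, negative keep rate
assert abs(corrected(shifted(p, r), r) - p) < 1e-12
```

Because the distortion $p' \mapsto p$ is monotone, isotonic regression fit on unsampled data absorbs it automatically, along with whatever other monotone miscalibration the classifier has.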

The meta-lesson here is that if you are an applied machine learning practitioner and you see a paper with Charles Elkan's name on it, you should read it. I've yet to be disappointed.