Saturday, September 21, 2013

What Lies Between R and D

At Microsoft I'm part of an applied research team, technically part of MSR but forward deployed near a product team. Microsoft is experimenting with this kind of structure because, like many organizations, they would like to lower the impedance mismatch between research and production. After a year situated as such, I'm starting to appreciate the difficulty.

Consider the following scenario: a production team has a particular problem which has been vexing them lately, and they are flummoxed. They schedule a meeting with some authorities in the research department; there's some discussion back and forth, but then no follow-up. What happened? (There's also the converse scenario: a researcher develops or hears about a new technique that they feel is applicable to some product, so they schedule a meeting with a product group; there's some discussion back and forth, but then no follow-up. I won't be discussing that today.)

I think about why nothing resulted from such a meeting in terms of incentives and motivations. In other words, there is some reason why the researchers felt there were better uses for their time. This leads to the question of what the desires and goals are of someone who would devote their life to research (remember, philosophy means “love of wisdom”). Once they achieve a minimum level of financial support, intellectuals have other motivations that kick in. A big one is the desire for prestige or egoboo (the same force that drives blogging and open-source software). The popular-culture caricature of the academic as an anti-social misanthrope in the corner seems highly inaccurate: the successful researchers I've known are highly social and collaborative people who identify with a research community and seek the respect and attention of (and influence over) that community. Ultimately such prestige is redeemable for opportunities to join institutions (e.g., universities or industrial research departments). Hanging out with other smart people is another major motivation for intellectuals, as many of them recognize the nonlinear power of agglomeration effects: it is widely recognized that hanging out with smart people makes you smarter and gives you better ideas than you would have in isolation.

Cognizant of the preceding, I'm trying to understand how a researcher would go about allocating their own time. First let me say I'm not trying to be normative, or to give the impression that researchers are obsessively self-centered. To some extent everybody in a company is self-centered, and getting activity aligned with group goals is imho mostly a matter of incentives (including social norms). One thing that should be clear from the previous paragraph is that money will not be an effective incentive for most intellectuals, unless you are talking about so much money that they can essentially build their own research institute à la Stephen Wolfram. Just like in the MasterCard commercial, there are some things money can't buy, and it turns out intellectuals want those things. Those things are roughly: overcoming intellectual challenges, working with other smart people, and influencing entire research communities.

So back to that problem the product team brought to the researchers. Is it that the problem is not sufficiently challenging? From what I've seen that is not the issue: if there is a straightforward solution, the researchers will provide some pointers and consultation and everybody will be happy. More typically, the problem is too challenging: sometimes fundamentally, but more often due to idiosyncratic aspects.

Fundamentally challenging problems are like the problem Hal Daumé recently blogged about, and the best part of his blog post was the line “Ok I'll admit: I really don't know how to do this.” I think the response from researchers is often silence because it takes a very confident person to say something like that, especially when they are supposed to be the expert. For the researcher deciding how to allocate their time, fundamentally challenging problems are risky, because it is difficult to obtain prestige from a lack of progress. Therefore I think it is reasonable for researchers to devote only a portion of their problem portfolio to the fundamentally difficult.[1] (By the way, there is an art to knowing where the frontier is: that portion of the limitless unknown which is challenging but potentially within reach and therefore worthy of attention.) It is sometimes possible to make partial progress on fundamental problems via heuristic approaches (aka hacks), but it is difficult to get community recognition for this kind of activity.

In contrast to fundamental challenges, challenges due to idiosyncratic constraints are pervasive. After all, the product team is often somewhat familiar with the possibilities of the state of the art in a field, which is what motivated the meeting to begin with. However, there is some reason why the straightforward solution cannot be applied, e.g., too expensive, too strategically implausible, too complicated to implement reliably, too incompatible with legacy infrastructure, etc. Whether or not such problems get addressed has to do with whether the community will find the constraints interesting (or, for a really senior thought leader, whether or not the community can be convinced that the constraints are interesting). Interestingness is often a function of generality, and idiosyncratic problem aspects are inherently problem-specific. Possibly, after addressing many different idiosyncratic problem presentations, a researcher might be able to generalize across the experiences and abstract a new class of problems with a common solution; but it is a risky strategy to allocate time to idiosyncratic problems in the hope that such a generalization will emerge, because without one, obtaining community recognition will be difficult.

Sometimes problems present a multi-objective optimization scenario which goes beyond conceptual complexity into ambiguity; in other words, it's not clear what counts as better. Under those conditions the community can focus on an objective which is well-defined but irrelevant. At UAI this year Carlos Gomez-Uribe stated that more accurate prediction of the star rating of a Netflix movie has, as far as they can tell, no impact on the customer experience. He had to say something like this because for several years it was possible to get a best paper by doing better on the Netflix data set, and he'd like to see us focused on something else.

So what should an organization with a multi-billion dollar research department do to lower the impedance mismatch between research and production? I don't know! I think part of the answer is to change what is considered prestigious. I could almost see an institution taking the position of “no publications”: not because they are afraid of informing the competition, and not because they fail to see the value of collecting one's thoughts presentably and subjecting them to peer review, but rather because the external communities that manage publications allocate prestige and therefore effectively control the compensation of the research department. However, I don't think this is tenable. So instead one has to create and foster venues where the idiosyncratic is embraced, where partial solutions are recognized, and where mere accounts of practical challenges and experiences (i.e., confusion) are considered a contribution.

For me personally, it's clear I need to get out more. I like going to the big ML conferences like NIPS, ICML, and UAI, but I've never been to KDD. KDD papers like “Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained” suggest I'm missing out.

[1] You might be asking, “don't researchers devote themselves exclusively to the fundamentally difficult?” Video game designers will tell you people like problems that are hard but not too hard; but even this perspective assumes researchers have plenary discretion in problem selection. In practice there are career considerations. Additionally, researchers invest in and develop proficiency in a particular area over time, and there are switching costs. The result is that a large portion of activity is incremental progress. They're called Grand Challenge Problems for a reason!