August 3, 2011
Reading Patrice Debré's excellent biography of Louis Pasteur, one thing becomes clear very quickly: Pasteur was an autocrat in the lab. In fact, his associates seem to have wondered whether he could ever truly collaborate with anyone.
Work hours in Pasteur's lab were regulated; leisure time was viewed with suspicion. Experiments were undertaken only after solitary deliberations by the master himself (p. 160, my loose translation).
Which leads to the following question: what is the relationship between mentorship style and scientific productivity? My prior is that there isn’t any. Apprentices just sort themselves on the basis of their compatibility with their advisers’ temperaments, such that, in equilibrium, and controlling for unobserved heterogeneity, autocratic PIs, democratic PIs, and “hands off” PIs (to adopt a crude and not necessarily exhaustive taxonomy) achieve the same level of average productivity.
But while this is fine as a null hypothesis, there are of course complications. For example, do labor market processes really make it possible for trainees to match with mentors in this way? How do trainees balance scientific interests, status, and temperament in the sorting process? And even if sorting is efficient on all dimensions, are certain styles more conducive to the production of scientific breakthroughs (vs. the slow accretion of “normal” scientific results in the Kuhnian sense)? Finally, are there gender differences in the extent of mismatch, and does this have consequences for the underrepresentation of women in some scientific fields?
I think there is a research agenda here. Maybe this has been explored already, but I somehow doubt that such studies are serious in the sense of taking sorting processes seriously. Am I wrong? I certainly hope so.
August 1, 2011
By now, it is widely understood that Google’s PageRank algorithm builds on the insight that the eigenvector associated with the largest eigenvalue of the adjacency matrix representing the set of all web pages and the links between them yields a pretty good measure of a page’s influence. Probably less well known is that the use of eigenvector centrality or its variants to measure the influence of particular nodes had been familiar to network sociologists ever since Phil Bonacich’s pioneering work in the early 1970s.
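To make the connection concrete, here is a minimal sketch of the idea in NumPy: compute a PageRank-style score by power iteration on a toy adjacency matrix. This is an illustration of the eigenvector-centrality principle, not Google's production algorithm; the damping factor, convergence tolerance, and the tiny three-page graph are all assumptions chosen for the example.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
    """PageRank via power iteration on a dense adjacency matrix.

    adj[i, j] = 1 if page i links to page j.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1)
    # Transition probabilities: follow an outgoing link uniformly at
    # random; a dangling page (no out-links) jumps anywhere uniformly.
    M = np.where(out[:, None] > 0,
                 adj / np.maximum(out, 1)[:, None],
                 1.0 / n).T  # M[j, i] = prob. of moving from i to j
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = damping * (M @ rank) + (1 - damping) / n
        if np.abs(new - rank).sum() < tol:
            break
        rank = new
    return rank

# Toy web: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])
r = pagerank(A)  # page 2, linked by both 0 and 1, ranks highest
```

With the damping term, this is the dominant eigenvector of a perturbed transition matrix; setting `damping=1` on a strongly connected graph recovers plain eigenvector centrality in the Bonacich sense.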
From my admittedly cursory search of the citation record, it does not seem that Brin and Page credited this earlier literature in their early efforts to rank web pages’ influence. How could this be? I see three possibilities.
- They may have been genuinely ignorant of Bonacich’s contributions, and simply reinvented the wheel with a 25-year lag. As an economist, I am genuinely distraught by this possibility, since it implies that the social return to network sociology research was zero (or at least a very small fraction of what it could have been).
- The use of eigenvector centrality to measure influence in networks might have been well known to contemporaries of Bonacich in computer science, but it circulated within their field in the form of examples rather than a codified body of knowledge, maybe because it was not clear what kind of problems it could be applied to.
- Brin and Page might have been aware of the network sociology literature, but chose to credit computer scientists (including their advisers) for parochial reasons.
Which of these possibilities is the correct one? Writing this particular piece of scientific history could tell us much about how the market for scientific credit works — or sometimes fails to work.
NB1: of course, none of this is meant to belittle Page and Brin’s enormous contribution. First, they recognized the usefulness of couching the search challenge in network terms; this was a key recombinative step. Second, the complementary algorithmic innovations that made it feasible to compute the centrality scores for an extremely large (and constantly growing) network were essential to realize their vision.
NB2: Steven Levy’s rather excellent history of Google does not enable one to adjudicate between these interpretations.
NB3: The citation behaviors of physicists who study networks also exhibit pronounced parochialism. They generally ignore the contributions of Linton Freeman and other network sociologists. This tends to annoy my sociologist colleagues to no end, probably for good reasons…
January 14, 2009
Who might possibly come out ahead in a recession? In industries with network effects (2-sided platforms, exchanges, markets, video games, computing software + hardware, social networks, etc.), the incumbent is typically placed in an unassailable position: positive feedback means that the big first mover just gets bigger, and wins the market.

I’ve discussed in other work (“Economic and Technical Drivers of Technology Choice: Browsers” with Tim Bresnahan and “Competition between Exchanges: Lessons from the Battle of the Bund” with Estelle Cantillon) how there are narrow windows of opportunity and a critical set of conditions necessary for a second-mover to defeat a first-mover advantage under network effects. Specifically, being able to tap into new adopters, rather than trying to get customers of the incumbent to switch, and doing so when market demand is exploding, are key conditions.

Large, market-wide economic downturns are one of the great shake-ups that could generate just these conditions for an entrant. The economy contracts, firms drop out, customers drop out, but hopefully recovery follows. This means demand growth is now present, and a bunch of customers have been freed from switching costs that kept them with the incumbent. The entrant who can grab these customers returning to the economy might have a better shot at gaining market share in the presence of network effects than pre-recession. There is a glimmer of opportunity to be found in this recession. Of course, this is conditional on being able to hold out, doing something else during the recession, until the economy expands again.
August 11, 2008
As an economist, I find myself spending most of my time thinking about what kind of policies we can implement to foster innovation. A recent piece in The New Yorker (Jonah Lehrer, Annals of Science, “The Eureka Hunt,” July 28, 2008, p. 40), cuts down to a much finer level – the brain. What I find fascinating about this work is that much of what we think about the creative process – that it involves making connections between ideas that people hadn’t seen before; that radical innovation sometimes becomes harder as you become more of an expert in an area because you are familiar with a particular set of ideas – shows up in studies of the brain and appears to have a neural foundation.
As an economist, I always want to think of things in terms of the policies we’d want to pursue. To me this work suggests the advantage of being in environments where you are exposed to ideas that you wouldn’t typically encounter. That is a feature of highly interactive environments, whether it is Google or the Niels Bohr Institute, where ideas bounced around freely, leading to a tremendous amount of creative science.
For more on the underlying research see Mark Jung-Beeman, who has a very cool website, and John Kounios on the cognitive neuroscience of solving problems with insight; Jonathan Cohen; Earl Miller; Sohee Park on schizophrenia and creativity; Jonathan Schooler on problem solving more generally. Here is a link to a longer piece on the relationship between this work and work on creativity in other fields.
August 4, 2008
It’s election season, and climate change ought to be at the center of the debate. To a certain extent it is, but only in buffoonish, jingoistic ways. I am saying nothing original when I claim that only technological change offers a (possible) “solution” to the threat of global warming. But one should not stop the discussion at this very vague, programmatic level. Instead, I’d like to make the case for a combination of push and pull policies.
Many (especially on the conservative side of the aisle) are skeptical of the rationales used to argue in favor of a carbon tax or a “cap and trade” system. In a world where new coal-fired electricity plants open every week in China, the direct effects on temperature levels from a US-only policy shift could be minute when compared to the very real costs it will impose on the economy. These critics have a point, but they miss the effects on R&D incentives that a carbon tax would generate. That’s an example of a demand-pull policy: price carbon emissions at their true social cost, and entrepreneurs will rush in to devise a better carbon trap.
But that is probably not enough, for these entrepreneurs do not have a very strong scientific base to draw upon. Policy-makers should also attend to the supply side of climate change R&D, by creating and/or reforming institutions capable of quickly widening the world’s stock of climate-change-relevant ideas. In the United States, much investment is channeled through sclerotic institutions (such as the DoE and the national labs), and not enough funds are awarded according to meritocratic criteria and insulated from direct political control. To be sure, the sheer scale and capital-intensity of some energy-saving technologies will always justify a role for large institutional structures. But what is missing from the scientific landscape is something like a “National Institute of Climate Change Research,” operated along the lines of the NIH, with a unique focus on extramural investments.
How about it, Barack Obama?
July 29, 2008
I went online today to buy my best friend’s romance novel, Reckless, at Amazon.com. The first thing that came up upon going to Amazon was an advertisement for “Kindle: Amazon’s Revolutionary Wireless Reading Device.” E Ink’s first electronic reader customer was Sony, for the Sony Reader, released in September 2006. The Kindle wasn’t released until a year later (also using E Ink’s technology), but Kindle advertised access to 140,000 titles as opposed to Sony’s 10,000 titles (Sony’s bookstore now looks like it offers 50,000 titles, and in December 2008 Sony & Borders co-branded their eBook Store). While the numbers of Kindles and Sony Readers sold are not definitive, it would seem that more Kindles have probably been sold than Readers, despite the year lag.
It could be that Kindle is a better technology, although there are plenty of customer reviews on both sides, but repeatedly there are two forces that it doesn’t hurt to have on your side in order to defeat a first mover advantage: distribution and complementary assets. We’ve seen this with TiVo vs. the cable companies’ set-top box DVRs (I wrote a chapter about this), where a sufficient but worse product sold more units than a better first-mover due to incumbent position in the home and a superior distribution channel and complementary asset (cable access and the cable installation network). We’ve also seen this with the Browser Wars (I wrote a paper & a chapter about this), where Internet Explorer overcame Netscape’s lead in the browser market with the help of distribution through exploding PC sales & Windows as a complementary asset to control access to that channel of distribution.
So it’s not always sufficient to build the first or a better mousetrap (in itself, another story about complementary assets overcoming first mover advantages): figure out how to get distribution and complementary assets on your side.