Tuesday, December 10, 2013

Skewed Incentives In Academic Research

I think this will sound troublingly familiar to most Malaysian academics, and not just within the economics community (excerpt; emphasis added):

Our uneconomic methods of measuring economic research
Stan Liebowitz

Academic economists – especially in the US – are continuously evaluated, with salaries and promotions hanging on outcomes. This column argues that the methods – identified from a survey of economics department chairs – are likely to reduce the amount of research created, perpetuate inefficiently sized research teams, promote false authorship, and penalise honest researchers. They also provide departments with excessive leeway to engage in potentially capricious behaviour.

...Our methods for measuring and rewarding research – the key component for promotions and salaries – create inefficiencies and are inconsistent with what we teach our students about efficient production. Further, this inefficiency might be caused by economists’ own rent-seeking through the vehicle of departmental politics.

The manner in which we credit coauthorship and evaluate articles induces overly large research teams, encourages false authorship, enhances subjectivity, and penalises honest researchers.

It is easy to create a division-of-credit system that gives researchers the correct incentives to choose efficiently-sized authoring teams. Simply put, a rule where the coauthors’ credit shares sum to one (i.e. full proration of credit) provides the correct incentives for choosing team sizes, and any other division rule does not. Yet, as Figure 1 shows, full proration of credit is almost never used in economics departments, according to my recent survey (Liebowitz forthcoming) of department chairs. More than a third of departments (16 out of 45) give each coauthor full credit for the entire article. Only one department completely prorates credit.

Zero proration is a flagrant violation of economic logic. For two identical quality articles, one written by a single author and the other written by four authors, should the credit to each of the four coauthors really be the same as the reward to the sole author? Do we normally say that efficient production requires that inputs get paid their marginal revenue product multiplied by the number of coworkers?

If the four-authored paper is not written with each author providing one-fourth or less effort compared to each author working alone, then that size of team is inefficient. But if each coauthor is given full credit, they have an incentive to coauthor even when the number of papers written by the four-author team is much lower than the number of equal quality papers they could write working alone or with smaller teams.

Departments that fail to discount by the number of coauthors should be embarrassed to use a measurement process that incorporates a logical error that would not be allowed in a micro principles course.

In research universities in Malaysia, promotion to Associate Professor or Professor requires the publication of a certain number of journal articles (the number varies depending on university). Once promoted, you’re also required to maintain a certain rate of publication over a number of years.

There are a couple of problems with this system. The first is that the number of recognised journals is fairly limited, while the number of people seeking to publish has grown enormously – it’s a global market, and the rapid development of countries such as China has greatly expanded the pool of researchers. It’s therefore much harder to get published than ever before.

This partially feeds into the second problem identified in the article above – since almost all universities recognise partial authorship as full authorship, people are led to game the system.

If I have a couple of friends working on their own research, the three of us can agree to put each other’s names on all three papers, since each of us then gets credit for three papers rather than the one we would get publishing alone. There’s no change in research output, nor is there necessarily an increase in quality, since we don’t actually have to collaborate even if we say we did.
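The arithmetic of this name-swapping scheme can be sketched in a few lines (a hypothetical illustration of my own, not from the article – the function and numbers are assumptions, not anyone’s actual promotion formula):

```python
# Toy comparison of credit under "full credit to every coauthor"
# versus full proration (coauthors' shares sum to one).
# Scenario: three researchers each write one paper, but agree
# to list all three names on all three papers.

def credit_per_author(n_papers, n_authors, prorate):
    """Credit each listed author receives across n_papers joint papers."""
    share = 1.0 / n_authors if prorate else 1.0
    return n_papers * share

# Writing alone: one paper, one author, one unit of credit.
solo = credit_per_author(1, 1, prorate=True)

# Name-swapping, no proration: each "author" gets full credit for 3 papers.
full_credit = credit_per_author(3, 3, prorate=False)

# Name-swapping, full proration: 3 papers x 1/3 share = 1 unit of credit.
prorated = credit_per_author(3, 3, prorate=True)

print(solo, full_credit, prorated)  # 1.0 3.0 1.0
```

Under full proration the name-swapping arrangement yields exactly the same credit as writing alone, so the incentive to pad author lists disappears – which is precisely Liebowitz’s point.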

Another way to game the system is “pay to publish” – some recognised journals are little more than legalised extortion rackets, with minimal peer review or editorial oversight. You simply pay their “publication” fee, and you get published. That’s particularly attractive when you’re trying to fulfil an annual publication quota, since publication in the better-recognised journals can take years given how fierce the competition is.

As it stands, the publication quota system simply provides the wrong incentives.


  1. Dear Hisham,

    I cannot speak for other local universities, but as far as I know, the academicians at my institution of higher learning use authorship points for published papers with multiple authors. The corresponding and lead authors get extra, while the rest share what's left. Promotion exercises count authorship points, not just the number of papers.

    On the matter of predatory and bogus journals, the University sorts publications into cited and non-cited journals (CiJ), essentially weeding out non-indexed publications. In addition, journals with an impact factor (IF) are accorded more prominence.

    It is not a perfect system, complicated as it is, and there are a lot of issues even with the present arrangement. There are a lot of riders rather than writers. Proration itself is supposed to discourage that, but in practice, genuine collaborative papers work out better. There are many types of publications out there.

    The real problem is not the publication quota or the publish-or-perish atmosphere; what really matters here is scholastic performance and the contribution to a body of knowledge.

    1. @anon 1.51

      Google thinks you're a spammer :)

      You've got a better system than the university my wife works in. There's certainly a case for a fairer system, though I haven't actually seen one that works as well as it should.

      However, there is empirical evidence that the bottleneck at publication is real and having negative effects, above and beyond the quality of scholarship:


  2. http://www.telegraph.co.uk/science/10507434/Nobel-prize-winner-accuses-scientific-journals-of-tyranny.html