The Controversy Over Revealing Referee Information

In early December 2020 I posted a revision of the data appendix associated with my working paper, The remarkable growth in financial economics, 1974-2020, that included a table showing information about the 1,941 people who wrote 24,009 referee reports between 1974 and 2020. The information included the referee's name, the number of reports written, the average turnaround time, and the proportion of the referee's reports that were associated with acceptance decisions made by the editor.

My intent in producing this information was to show the vast number of people who have contributed to the success of the JFE over this long time span and to give these individuals credit by letting others know about their significant efforts. The JFE has published a rolling 12-month summary of the refereeing efforts of the editorial board and ad hoc referees since 1996. These web pages list the referee's name, institutional affiliation, the number of reports written, and the average turnaround time. The intent of these listings has always been to encourage referees to complete their refereeing duties promptly, and to allow others to see the work being done by so many of our professional colleagues. Indeed, referees are the most important factor of production in any journal's editorial office, and they are usually undercompensated and underappreciated for their efforts.

The new piece of information contained in my data appendix was the "acceptance rate." In the body of the paper I discuss at length the matching problem that every editor faces in selecting a suitable referee for each paper submitted for consideration (excerpt below):

As with many things in economics, the distribution of the refereeing workload is positively skewed. As shown in Fig. 9a, of the 1,941 people who have written referee reports for the JFE between 1994 and 2020, about 40% have prepared only one or two reports. On the other hand, seven percent of the referees have written more than 40 reports (the maximum is 215). The distribution of acceptance rates in Fig. 9b is even more unusual. Most referees (59%) have never accepted a paper, and a handful have relatively high acceptance rates. It is clear, though, that acceptance rates are higher for people who have written more reports, since the equal-weighted average acceptance rate is 6.7% while the average weighted by the number of reports is 11.1%. In fact, the acceptance rate distribution for the 349 referees who have written 20 or more reports looks fairly normal. This is consistent with a sorting process in which editors choose inexperienced, or at least infrequent, referees to review papers that they forecast are unlikely to become publishable in the JFE. Table 13A in the internet appendix lists all of the people who have served as referees from 1994-2020, along with the number of reports, acceptances, rejections, and average turnaround times.

Editors often select experienced referees for papers that the editor thinks have a higher likelihood of eventually becoming publishable. This sorting model makes sense in the context of the dynamic quid pro quo system that helps academic publishing work. Even when authors pay “large” submission fees, and referees receive “large” honoraria for their on-time work, the compensation for referees is far below their opportunity cost of time, especially for the most experienced referees, who are also among the most prolific authors. Nonetheless, experienced referees often devote a lot of time to reading others’ papers and writing reports on them. Since authors do not know the identity of the referee, only the editor can observe the valuable work contributed by the referee. The implicit compensation experienced referees receive is that they expect the editor to devote scarce high quality refereeing resources to their papers when they submit as authors.

Of course, not all experienced, prolific authors also serve as frequent referees. As mentioned in section 2.2, another way the JFE rewards referees is to list on the editor’s web page all of the people who have refereed papers in a recent 12-month period, along with the number of reports they have written and the average turnaround time. This provides quantifiable evidence of professional service to colleagues and Deans. In addition, for accepted papers, if the authors thank “an anonymous referee,” the editor asks the referee whether they are willing to reveal their identity in the published paper.

Another aspect of the sorting process in selecting referees is that it is expensive for the editor if the referee errs by being too generous in assessing a paper. This often results in asking a second person to review the paper, or it could result in publishing a paper that lowers the quality of the journal. Given this asymmetric loss function, it is normal for editors to learn about a referee's quality by asking them to review lower-quality papers. I remember that my first six or seven referee reports for the JFE in 1976 were all for papers that were easy rejection decisions. One day I commented to Mike Jensen that I would love to see a paper that might actually have a chance to be accepted. As a result, the next two papers I reviewed were Roll (1977) and Scholes and Williams (1977), which have 858 and 937 citations in the SSCI through 2019, so Mike had obviously decided that he could trust my judgment.

Another important source of information about referees is the knowledge of the members of the Editorial Board. Editors frequently ask members of the Board for recommendations of possible referees as a way to broaden the set of people who contribute to the Journal. Young scholars have incentives to produce high quality reports to establish a good reputation with the editor.

My intent in explaining the referee selection process was to make it clear that the choices editors make in selecting referees for a particular paper play a dominant role in determining the "acceptance rate." I should have been clearer in the paper that the "acceptance rate" reflects the decision made by the editor, not the recommendation made by the referee, and I will make that change when I revise the paper.
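To make the weighting point from the excerpt concrete, here is a minimal sketch in Python using invented referee counts (not the actual JFE data, which are in Table 13A). It shows how an equal-weighted average acceptance rate can sit well below the report-weighted average when infrequent referees are assigned papers that rarely become publishable:

```python
# Illustrative only: invented (reports, acceptances) pairs, not actual JFE data.
# A referee's "acceptance rate" is editor acceptances per report written.
referees = [
    (1, 0), (2, 0), (1, 0), (3, 0),  # infrequent referees, no acceptances
    (25, 3), (60, 7), (40, 5),       # experienced referees, some acceptances
]

# Equal-weighted: average each referee's own acceptance rate, one vote each.
equal_weighted = sum(acc / rep for rep, acc in referees) / len(referees)

# Report-weighted: pool all reports, so prolific referees count for more.
report_weighted = sum(acc for _, acc in referees) / sum(rep for rep, _ in referees)

print(f"equal-weighted:  {equal_weighted:.1%}")   # ~5.2%, pulled down by the 0% referees
print(f"report-weighted: {report_weighted:.1%}")  # ~11.4%, dominated by experienced referees
```

The gap between the two averages in this toy example mirrors the 6.7% versus 11.1% figures in the excerpt, and it is exactly the signature of the sorting process described there.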

Despite my intent, some members of our profession apparently misinterpreted the information on acceptance rates and then berated particular individuals by name on various social media outlets for having acceptance rates that are "too low." I do not use any form of social media, but my understanding is that these personal attacks occurred in forums where the attacker, but not the victim, could hide behind an anonymous identity. The behavior of those who misinterpreted the acceptance rate data in this way is unprofessional and inappropriate.

In retrospect, I wish I had never included the data on acceptance rates in the appendix to my paper. For that I apologize. The online version of the appendix no longer includes that information. As a testament to the unfortunate (in my opinion) role that social media now plays in our lives, the data appendix has been downloaded 7,700 times versus 1,868 downloads of the actual paper, according to the SSRN web site as of January 20, 2021. Now that the controversial information has been removed, those downloads have slowed.

To date, I have emailed 321 frequent referees for permission to report their summary refereeing information, in particular, acceptance rates. Thus far, I have received 226 responses (after two weeks), a 70% response rate. Of those responses, 217 (96%) agreed to have their information included in a new version of Table 13A. Adding 7 referees who are deceased, the group of 224 referees whose information could be reported represents 12% of the total group of referees, but accounts for 56% of the acceptances and 51% of the reports; the weighted average turnaround time for this group is 35 days.
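The coverage arithmetic above is mechanical; here is a minimal sketch in Python that reproduces the percentages from the counts just reported (the 1,941 total is from the data appendix; everything else is as stated above):

```python
# Survey coverage arithmetic from the counts reported above.
emailed, responses, agreed, deceased = 321, 226, 217, 7
total_referees = 1941

response_rate = responses / emailed   # 226/321 ~ 70%
agreement_rate = agreed / responses   # 217/226 ~ 96%
reportable = agreed + deceased        # 217 + 7 = 224 referees

print(f"response rate:         {response_rate:.0%}")
print(f"agreement rate:        {agreement_rate:.0%}")
print(f"share of referee pool: {reportable / total_referees:.0%}")  # 224/1,941 ~ 12%
```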

As a result of the social media ruckus related to this controversy, Elsevier, the American Finance Association, and the Society for Financial Studies have issued the following joint statement:

"The editors of the JF, RFS, and the incoming editor of the JFE hereby affirm our commitment to never publicly disclose data that could reveal individual referees’ confidential recommendations, or any other confidential information about the editorial process.

The editors of the JF, RFS, and the incoming editor of the JFE further believe that editors are bound by the AFA Code of Professional Conduct and Ethics, sections 6.(d)(5) and 6.(f), as well as the rules for human subjects research, to not publicly disclose such confidential information."

I believe this is a very reasonable statement, but I want to make it clear that my data appendix did not reveal confidential information from any referee's report on any submitted paper. All that was revealed was the proportion of acceptances (which reflects the editor's decision, not necessarily the referee's recommendation) and the average turnaround time for 1,941 different people related to 24,009 reports written over a 25-year period. Therefore, the idea that my mistake was an ethics violation seems inappropriate to me. Many of the people whom I have queried about their willingness to reveal their information share my view.

Below is a non-random sample of excerpts from the emails I received (all appropriately anonymized):

  • I have no objection to editors using their journals' records for bibliometric research as long as no one can tie any particular referees to any particular decisions.

  • I agree that a referee’s rejection rate is not injurious to careers (e.g., the information content is quite subtle, since it depends on what types of papers the referee is receiving, not just on how harsh the referee is).

  • I found the table interesting and appreciate the transparency.

  • I didn’t feel that it in any way breached my confidentiality as a reviewer, nor did I perceive it as something that required my prior consent to disclose.

  • One of the things I always thought was inhibiting people from being better referees . . . was the lack of information about other referees. The only other reports and recommendations most people (editors excluded) see are the ones on their own papers. As such, I found your tables illuminating and I think I can use them to become a better referee . . . Regardless of any short-term frictions, I think that in the long run you have done the profession a great favor by publishing this, and I want to thank you for it.

  • I am surprised about the reaction to the table. When someone mentioned it to me, I responded by saying it should all be public all the time at this level of granularity.

  • For what it’s worth, I asked two junior colleagues about the refereeing data and neither gave a care one way or the other.

  • I think it is useful for all of us to understand the reviewing process at journals, and I found your article, appendix, and tables all to be illuminating.  My apologies for having to deal with this pushback.  I have heard nothing negative from any of my colleagues/contacts/coauthors, junior or senior, about this issue. 

  • I actually really appreciated the transparency about number of reviews, rejection rates, and turnaround times.

  • I am in agreement that there is nothing inappropriate about what you documented. In fact, I think it's a good thing. I have never received assurance from any journal that they would not use my decision information in this way.

  • I am fine with using my name and other info like position . . . I don't feel "injured" and I am certainly not "angry." In truth, I think that we need a lot more transparency in the review process.

  • I believe it is of relevance to the profession and it does not in any way infringe on my privacy.

  • Nothing in Table 13A is controversial from my perspective. Yet I can see why some folks might feel disclosing the accept rate makes them uncomfortable if they are an outlier. There is clearly no bad intent, and indeed the written text about the Table should help very much.

  • The allegation is absurd! What times we live in!

  • I have no problem with your publishing aggregate statistics that contain my data, as long as I’m not linked to specific papers. In fact, I agree with you that the accept/reject statistics you originally published do not reveal which referees reviewed which papers. I applaud the transparency of JFE policies over the years. The publication of regular statistics about turnaround time encourages fast turnaround and responsible refereeing. And the JFE is unique in giving referees the choice of whether they want to reveal their identities on accepted papers (even though I have always declined such offers).

  • I think you have done a tremendous public service by providing much needed transparency on the refereeing process. For what it’s worth, when reviewers submit referee reports, while there is an expectation that the identity of the reviewer will not be disclosed to the authors, there was never an understanding that aggregated data would not be revealed. The JFE has long published data such as average turnaround times, etc. I am surprised that anyone would view this as having malicious intent and will encourage my colleagues to view this as a positive for the profession.

  • I also appreciate the openness of the information. All journals maintain and use this type of information in one form or another. Of course, some junior faculty have gone bananas over-analyzing the list and finding “tough” versus “lenient” referees, which is, as you point out, a rather silly undertaking. I recall being an AE for . . . and getting nothing but the promising but hard to referee articles, so I would have likely looked “lenient” then, but may have looked like the toughest . . . referee on earth early in my career.

  • I actually think this data should be publicly available on an ongoing basis; it is quite helpful for me to calibrate whether I'm too strict or lax in my refereeing. Transparency is good, and we should act by it.

  • My reaction was the same: you were publishing these stats regularly anyways. I do not know any of my colleagues who are seriously concerned with this. The public response is totally unexpected.

  • I personally have found the statistics you have provided on the JFE website over the years, including Table 13A, very valuable – it is a great service to the profession, thank you!

  • I cannot understand why the information you mention is confidential and personal. I am all for greater disclosure in our profession. We financial economists delight in pillorying companies and institutions for lack of transparency. What is sauce for the goose is sauce for the gander.

  • I believe your transparency is exemplary and commendable. I now have more respect for reviewers who have 0% acceptance rates, as they have shown more integrity and less bias than reviewers who may have chosen to accept papers by peers with whom they are familiar. Such a high-integrity-reviewership group is essential for maintaining the high standard of the JFE, and I believe in time more scholars will recognize this fact.

  • As you discuss, one has to be a bit careful on how to interpret reject rates (and in fact everything in Table 13A) given referees are chosen by the editor. But this should be completely obvious to all readers. After all, dealing with selection problems is our business.

In addition, I know that several members of the profession have expressed surprise that one very distinguished member of our profession (whom I cannot mention by name) has written a large number of reports for the JFE and that his turnaround time is very fast. Obviously, I was aware of this fact because I was the editor, but even people who served long terms as editors at other journals were shocked by this person's generosity in helping so many papers improve. I view the fact that many senior members of the profession were surprised as evidence that the information in Table 13A was valuable and worth making public.

Finally, I received three supportive letters from John Cochrane, Campbell Harvey, and René Stulz.

I have been informed that I should not produce any list of referees that includes their names. Accordingly, I have removed the relevant pages from the JFE editor's web site, I will not release this type of information in the future, and I will not produce a new version of Table 13A. Moreover, I have been told not to describe the payments that the JFE makes to referees as compensation for on-time reports, even though that information is unrelated to any particular referee.

I believe that this is a mistake and that the positive incentives the JFE provides to referees have been a huge benefit in eliciting high-quality, speedy reports for many decades. Hiding that information from the broader academic finance community is, in my opinion, a disservice.

G. William Schwert
Managing Editor through June 30, 2021

Revised: 1/25/2021


I have received very generous and constructive advice from Renee Adams, John Campbell, John Cochrane, Harry DeAngelo, Darrell Duffie, Eugene Fama, Kenneth French, Itay Goldstein, Campbell Harvey, Terrence Hendershott, David Hirshleifer, Jun-koo Kang, Ron Kaniel, Bryan Kelly, Michelle Lowry, Stefan Nagel, Lasse Pedersen, Raghu Rajan, Patricia Schwert, Robert Stambaugh, René Stulz, and Ivo Welch. Obviously, they bear no responsibility for the views expressed here.

 
