An interesting publishing model

There’s been some renewed discussion in the blogosphere and CS media lately about the broken model of CS publishing. In the latest issue of Communications of the ACM, for instance, Moshe Vardi’s editor’s letter discusses hypercriticality, or the tendency of some reviewers to be overly and needlessly negative, and how this is harmful to our field:

We typically publish in conferences where acceptance rates are 1/3, 1/4, or even lower. [Actually, the top conferences in my field have acceptance rates closer to 10%!—acd] Reviewers read papers with “reject” as the default mode. They pounce on every weakness, finding justification for a decision that, in some sense, has already been made…. If the proposal is not detailed enough, then the proposer “does not have a clear enough plan of research,” but if the proposal is rich in detail, then “it is clear that the proposer has already done the work for which funding is sought.”

What is to be done? Remember, we are the authors and we are the reviewers. It is not “them reviewers;” it is “us reviewers.”… This does not mean that we should not write critical reviews! But the reviews we write must be fair, weighing both strengths and weaknesses; they must be constructive, suggesting how the weaknesses can be addressed; and, above all, they must be respectful.

A mailing list I’m on pointed me towards this blog post, lamenting the state of systems-level HCI research (basically a good discussion of what type of work is “valued” by a subfield, and how this plays out in the review cycle—I can certainly relate!), and concluding with the following:

What is the answer? I believe we need a new conference that values HCI systems work. I also have come to agree with Jonathan Grudin that conference acceptance rates need to be much higher so that interesting, innovative work is not left out (e.g., I’d advocate 30-35%), while coupling this conference with a coordinated, prestigious journal that has a fast publication cycle (e.g., electronic publication less than 6 months from when the conference publication first appears). This would allow the best of both worlds: systems publications to be seen by the larger community, with the time (9-12 months) to do additional work and make the research more rigorous.

These are all great questions and valid points, but it’s easy to simply wring your hands and say “oh well, I need the publications, so I’ll just play by the rules” rather than try to change the system.

But one conference seems to have been paying attention to the discussion.

VLDB, a databases conference, is trying out a new reviewing model that attempts to combine the best features of journal pubs (multiple rounds of review and rebuttal) and conference pubs (timely publication and quick(er) turnaround time).

PVLDB uses a novel review process designed to promote timely submission, review, and revision of scholarly results. The process will be carried out over 12 submission deadlines during the year preceding the conference. The basic cycle will operate as follows:

· A Rolling Deadline occurs on the 1st of each month, 5:00 AM Pacific Time (Daylight Savings observed according to US calendar).

· Initial Reviews are intended to be done within one month, and they will include notice of acceptance, rejection, or revision requests.

· Revision Requests are to be specific, and moderate in scope. Authors will be given two months to produce a revised submission.

· Second Reviews are intended to be returned within one month, after which a final decision will be made. Second reviews are to directly address the authors’ handling of the requested revisions.

What’s more, they also address some of the points made in the first two articles I linked to, and common complaints about the review process (emphasis mine):

The revision process is intended to be a constructive partnership between reviewers and authors. To this end, reviewers bear a responsibility to request revisions only in constructive scenarios: when requests can be addressed by specific and modest efforts that can lead to acceptance within the revision timeframe. In turn, authors bear the responsibility of attempting to meet those requests within the stated timeframe, or of withdrawing the paper from submission. At the discretion of the Program Committee, mechanisms may be employed for reviewers and authors to engage in further dialog during the revision period.

This is a really fabulous idea. There are still issues—you trade off between submitting early in the review cycle (and getting more rounds of feedback) and having a long turnaround time to publication, and it’s unclear whether people will still wait until the “hard” deadline (March 1) to submit their work. But if it works, this could really revolutionize how we think about both conference and journal publishing.

(Oh, and it appears they have a semi-loose connection/agreement with a journal, encouraging submission of extended versions of the conference papers—it’s not clear whether these will be “fast-tracked” at all, though.)

I will be watching what happens, and I hope the VLDB organizers prepare some sort of summary of what worked well and what could be improved (and whether this actually works!) so that other conferences can adopt and adapt this model.