CIKM reviewing: too much and too little


I've just finished my CIKM 2010 reviewing assignments, and have come out of the process with a number of questions:

Is nine papers too many to review?

This year, I was assigned nine papers to review. Last year it was five. For SIGIR (if my own local cache of reviews is to be believed) it was only two this year (so few!) and eight the year before.

Nine papers is quite a heavy reviewing load. It used to take me the best part of a day to give a responsible review. Now, with experience, I seem to have got it down to four hours or so. Even so, a nine-paper stack takes a few days to get through. Another colleague, himself a responsible reviewer, opted out this year because of the load.

The advantage with having a large number to review, though, is that it gives you a sufficient sample to calibrate your own reviewing. This is particularly important for a conference with as low an acceptance rate as CIKM, whose rate has hovered around 16% in recent years. With nine papers, you have a ration of one accept and one maybe. With only, say, four papers, it is more difficult to know whether you are being overly indulgent or overly stingy.
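For concreteness, here is that calibration arithmetic as a throwaway sketch (my own numbers, simply assuming the roughly 16% rate applies uniformly to every reviewer's stack):

```python
acceptance_rate = 0.16   # CIKM's recent acceptance rate

for load in (2, 4, 9):
    expected_accepts = load * acceptance_rate
    print(f"{load} papers reviewed -> about {expected_accepts:.1f} expected accepts")

# 2 papers reviewed -> about 0.3 expected accepts
# 4 papers reviewed -> about 0.6 expected accepts
# 9 papers reviewed -> about 1.4 expected accepts
```

On those assumptions a four-paper stack should yield well under one accept, so a single generous or grumpy judgement swamps the signal; nine papers at least gives you that ration of one accept and one maybe.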

Is 16% of submissions too few to accept?

There was a very interesting article in the most recent issue of the Communications of the ACM, by Jilin Chen and Joseph A. Konstan, entitled "Conference Paper Selectivity and Impact". The authors surveyed the ACM Digital Library, comparing the acceptance rates and impact factors of ACM conferences and journals. They found that, on average, journal articles achieve a citation rate similar to that of papers from conferences with an acceptance rate of 35%. This is interesting in itself, as it implies (as people in CS always claim) that top conferences are more prestigious than journals (although part of the cause might be the ACM's glacial turnaround in getting journal submissions into print).

More interesting for our current discussion, though, was the finding (on slightly questionable methodology, it is true) that the most-cited 10% of submissions at conferences with acceptance rates below 15% receive fewer citations than the most-cited 10% of submissions at conferences with acceptance rates between 15% and 25%. I say that the methodology was slightly questionable because it compared all the papers from a conference with 10% acceptance with, say, the most highly cited half of papers from a conference with 20% acceptance, which elides random effects (more citable papers from the stricter conference may have been rejected by chance). Nevertheless, the finding gives quantitative form to the often-expressed rule of thumb that an acceptance rate over 20% means accepting bad papers, whereas one under 20% means rejecting good ones.
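To make the "random effects" point concrete, here is a minimal Monte Carlo sketch. It is not the CACM article's actual method; the quality and noise distributions, and the assumption that review scores and eventual citations are both noisy views of the same underlying merit, are mine. It draws two hypothetical conferences whose submissions have identically distributed quality, accepts 10% at one and 20% at the other on noisy review scores, and then compares the strict conference's entire programme with the most-cited half of the loose conference's:

```python
import random

random.seed(0)

N = 1000        # submissions per hypothetical conference
TRIALS = 500    # Monte Carlo repetitions
NOISE = 1.0     # reviewer and citation noise, relative to unit spread in "true merit"

def accepted_citations(rate):
    """Accept the top `rate` fraction by noisy review score; return their citations."""
    merit = [random.gauss(0, 1) for _ in range(N)]
    score = [m + random.gauss(0, NOISE) for m in merit]       # what reviewers see
    citations = [m + random.gauss(0, NOISE) for m in merit]   # what happens later
    ranked = sorted(range(N), key=lambda i: score[i], reverse=True)
    return [citations[i] for i in ranked[:int(rate * N)]]

loose_wins = 0
for _ in range(TRIALS):
    strict = accepted_citations(0.10)                                  # all papers, 10% rate
    loose = sorted(accepted_citations(0.20), reverse=True)[:N // 10]   # top-cited half, 20% rate
    if sum(loose) / len(loose) > sum(strict) / len(strict):
        loose_wins += 1

print(f"Looser conference's top slice looks better in {loose_wins}/{TRIALS} trials")
```

Under these assumptions the looser conference's slice wins in the large majority of trials, even though both submission pools are of identical underlying quality: the strict conference has rejected some of its most citable submissions by chance, while the loose conference's slice is picked after the citations are in. That is the artifact the comparison above risks measuring.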

Is it the case (as the authors speculate) that conferences with too-tight acceptance rates are squeezing out innovative papers in favour of formulaic ones?

Are big conferences eating small conferences?

The main reason, of course, why CIKM reviewers have such a heavy workload, and CIKM acceptance rates are so low, is that CIKM receives a huge and swelling number of submissions, as the following table shows:

Year | Location            | Submissions | Acceptance rate
2006 | Arlington, VA, US   | 537         | 15%
2007 | Lisbon, Portugal    | 512         | 17%
2008 | Napa Valley, CA, US | 772         | 17%
2009 | Hong Kong, China    | 847         | 15%
2010 | Toronto, Canada     | 945         | ??

(By the way, the above table also gives quantitative justification to my qualitative assessment that no-one in their right mind wants to go to a conference in Europe.)

Meanwhile, my anecdotal impression is that smaller conferences are struggling to attract submissions; certainly, this has been the case for a number of smaller conferences I have been involved with recently.

There are many reasons why big conferences could be eating small conferences. One of them might be a growing emphasis upon prestige of venue in assessing publication impact. Whatever the reason, it is hardly a desirable trend. The large conferences are sausage factories for academic publications -- CIKM churned through 55 presentations on the opening day last year, ignoring posters. There is also anecdotal (and, given the previous point, now quantitative) evidence that the large conferences suppress innovation in favour of formula. Plus the sheer scale of the larger conferences makes them questionable venues for actually meeting and getting to know other people in your area -- unless, that is, you are a senior researcher meeting up with your peers.

In contrast, smaller conferences provide more of the opportunities that conferences properly provide (leaving aside their dubious role as high-status publication venues). The lack of parallel tracks means that there is more unity to the proceedings, and more of an opportunity to hear a range of work. The smaller number of attendees means there is more of an opportunity to get to know other researchers. And the higher acceptance rate means there is more room for novel, off-beat, and experimental ideas.

Why do people publish incrementally at conferences?

One of my pet hates is researchers who publish their work incrementally at conferences: the basic idea as an innovation at one conference, then a series of tweaks to it at successive ones. Two or three of the CIKM submissions I reviewed this year were in this category: one group of authors gave additional experiments for a previous publication; another developed a framework for choosing the parameters for their earlier optimization method. (For the third, double-blind reviewing meant I couldn't be sure whether they were minimally extending their own work, or someone else's.) This is a real perversion of the purported point of conferences as venues for the fast publication of new ideas. Once you have your SIGIR publication for your research program, surely that is enough validation? Isn't the next step to develop it over the next year or so into a fully-fledged journal article, rather than dribbling out new ideas as they occur to you? Such incremental submissions should be stamped out with all vigour. These minor evolutions clutter up high-profile conferences like CIKM and SIGIR, squeezing out newer, fresher work.

Why am I repeatedly given my own submission to review?

This is the second CIKM at which I have been assigned my own submission to review. The first time appeared to be a chance occurrence, but the second began to make me suspicious. How many others does this happen to? Let's assume for simplicity that people review only in the research sub-field they submit to; that there are as many sub-fields at CIKM as there are sessions; and that submissions are evenly divided amongst sessions. There have been around 900 submissions to CIKM for each of the past two years, and 25 or so sessions per conference. This means 36 submissions per sub-field. If you review 8 submissions, then there is a (35/36) * (34/35) * ... * (28/29) = 7/9 chance that you will not randomly be given your own paper to review, or a 2/9 chance that you will. Therefore, the chance of it happening two years in a row is 4/81 (although of course there's a selection bias here: you only take notice of the two-year sequence that it does happen in). So it's not that improbable an occurrence. But given it must be happening to over 20% of the program committee each year, surely the allocation algorithm would have been fixed to make sure reviewers weren't assigned their own paper? Or am I the only one who lets the chairs know when it happens?
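For anyone who wants to check those back-of-the-envelope figures, here is the same calculation as a small sketch, under the same simplifying assumptions as the paragraph above (about 900 submissions, roughly 25 sub-fields, eight reviews per PC member, and assignments drawn uniformly at random within your sub-field):

```python
from fractions import Fraction

submissions = 900   # approximate CIKM submissions in each of the last two years
sessions = 25       # treat each session as its own reviewing sub-field
load = 8            # papers reviewed per PC member

pool = submissions // sessions   # 36 submissions in your sub-field, yours among them

# P(your own paper is not among the `load` papers drawn without replacement)
p_miss = Fraction(1)
for k in range(load):
    p_miss *= Fraction(pool - 1 - k, pool - k)

p_hit = 1 - p_miss
print(f"P(assigned your own paper in one year): {p_hit}")       # 2/9
print(f"P(it happening two years running):      {p_hit ** 2}")  # 4/81
```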

2 Responses to “CIKM reviewing: too much and too little”

  1. Why do people publish incrementally at conferences?

    Because that's how you publish a lot of papers.

    An interesting secondary question is whether your goal should really be to publish a lot of papers. As for myself, I derive no pleasure out of dividing up my work into small increments for the sake of having a long publication list. I also do not appreciate the kind of papers that are generated in this manner.

  2. Anonymous says:

    I am new to research and didn't know that research == publication!! I was shocked at the quality of some of the papers at these so-called big conferences. I started thinking about how many papers I could have written if I had wanted a non-significant publication at a conference. If people would only publish significant results, it would be a lot of help to the readers and everyone!
