Disclaimer - please read this before proceeding further. The text below represents EXCLUSIVELY the personal views of the author and nobody else. Any names mentioned here are for illustration only, and these people were not in any way involved in preparing this document (or even informed about its existence). The document is long and probably not very coherent. A reader who thinks that the author makes overly categorical statements can insert "I believe that" in front of every single sentence of this document (which seems to be expected for almost any statement in a siggraph submission to avoid hurting someone's feelings). Since the result would look idiotic, I will refrain from doing this. And, by the way, I do not recommend reading this at all if siggraph is too close to your heart and/or you are easily offended. If you feel like responding to this document in public, the most appropriate place would probably be a discussion forum dedicated to the current state of CG research.

Summary. After doing computer graphics research in academia for almost 8 years (here is more on my background), I decided to leave the field. One of the main reasons for this difficult decision is my deep disgust for the state of affairs within the computer graphics research community and my inability to fit well within the existing system. Over the years, this led to growing frustration, an inability to raise enough external money and, as a result, uncertain prospects for tenure (i.e., a permanent position) at a research university. However, to increase the chances of this document being treated as a bit more than the whining of a loser, I would like to emphasize that my decision is NOT due to denial of tenure (or any indication that this would happen). This document presents a particular point of view and is intended primarily as an answer to several people who asked about my reasons. Who knows, it might even be of some use to younger members of the community if they want to have better chances of being successful. On the other hand, I have no illusions that anything would really change as a result of it.

Reasons. The general intellectual health of any community which relies on a peer-review process depends largely on its members being responsible, tolerant of alternative points of view and generally having a friendly attitude towards the work of other people (if not towards the other people themselves). Unfortunately, all of these factors seem to have been diminishing within the rendering community during the last few years, and the dominance of the siggraph conference over the field makes any changes (other than for the worse) very unlikely. [Although this document applies to the rendering community most directly, simply because I worked mostly in this domain, the situation in other subareas seems to be not much better. For example, in my personal (but in this case limited) experience, fluid dynamics people are by far the worst in most respects relevant to this document.]

First, there seems to be an elevation of some research areas (and, to a degree, the researchers working within them) at the expense of others just because these areas (people) are considered "hot". Of course, it is natural for some areas to receive more attention than others, and for any individual researcher to believe that some areas (usually the ones in which this person is working) are of greater importance than the rest. [For one, I find most of the current "hot areas" comparatively UNinteresting.] Yet, most people would probably agree that it is absolutely wrong to allow these areas/people to dominate "just because". Yet, this is exactly what happens at present. Due to having publications over a rather wide set of topics, I have been reviewing and otherwise exposed to a wide variety of submissions. There is no doubt in my mind that papers from certain areas (and then from certain people) have a much easier time being accepted to siggraph and probably other conferences/journals, even though in many cases they contain mostly trivial (although well sold, of course) material. At the same time, papers of significantly higher overall quality, containing much more interesting ideas, are being summarily labeled "incremental" (and no, I am not talking about my own papers except one - see below). It is time for the community to understand that absolutely any paper in a somewhat mature field (computer graphics arguably entered this period at least ten years ago) can be labeled incremental - after all, it relies on previous work. Unfortunately, siggraph's dominant position implicitly suggests that everything published there is of higher quality than what is published in other venues. This is simply not true, and what siggraph really encourages is a pseudo-scientific style of presentation, extreme overselling of one's work (as long as it is not done in an obvious way) and even blatant misrepresentation of results.
The latter I found out the hard way for two recent (2003+) siggraph rendering papers where none of the (as usual, exorbitant) claims hold, and the results demonstrated were due to completely different issues having nothing to do with the proposed techniques. In this kind of environment, it is relatively clear that an easier way to get into siggraph is simply to stop being honest and camouflage the true benefits and limitations of one's work, as some people choose to do - most reviewers do not take the time to dig beyond the smooth claims anyway. I simply got tired of putting a lot of work into papers for the entertainment of five people, of whom statistically 1.5 do not know what they are talking about and another two do not bother to read the submission with any degree of care. [Many thanks to those 1.5 knowledgeable reviewers who actually took the time to carefully read my papers.] I do not see the point of continuing to work in a system where the practical usefulness or non-triviality of ideas is largely irrelevant. Unfortunately, siggraph seems to be spreading its metastases to smaller conferences (such as the rendering workshop) which used to be much friendlier and more enjoyable venues.

Finally, you may consider it vanity, but after a certain point one can only say "you do not like what I am doing - fine, but I happen to like it. So I will not bother you any more." I wish I could simply go to the computer graphics industry where (or so I hope) there is some reward for making things work and achieving useful results. Of course, nobody might need me there either (I never tried to apply). More immediately, unfortunately, I have to stay in the NY area (two-body problem). So I will try something completely different - after all, I do like learning/doing new interesting things, and the current environment in computer graphics research is just not suited for this any more...

A perfectly legitimate question to ask is "But wait, aren't you one of "them", i.e. the people who write reviews and decide the fate of submissions?" "Haven't you ever oversold your own stuff to get in?" Guilty as charged on the first count. (And the fact that I am becoming part of the system is one of the reasons I am leaving, although a relatively minor one - I am certainly not a saint to experience any prolonged struggle with my conscience over this.) When I started reviewing papers, I was very concerned about being too harsh on people and giving scores that were too low. Yet, to my great amazement, these scores usually ended up being some of the highest. Well, after reading enough reviews of my own work, I got a better idea of exactly what an ugly cut-throat system this is ("kill the neighbor, so that he can't kill you") and "adjusted" to the average level. I think I trashed (i.e. not just recommended rejection, but did it in the most unkind way possible) only one paper over the years, and that was a conscious decision trying to prevent the publication of something which was extremely oversold yet had no non-trivial substance if one looked closely. Guess what - the paper was accepted to siggraph the next year (I was not reviewing it then). Yet, I have never been on the siggraph committee, so at least for this conference any of my reviews were mostly irrelevant anyway - I think it is quite clear that, except in extreme cases, committee members can essentially make any decision they want, regardless of external reviews, and it is easy to dress it up as "objective". At smaller conferences (I have been on several committees) it is certainly possible to have the primary/secondary reviewer overruled by externals. At siggraph, the overwhelming push to kill papers easily prevails.
This seems to be even more true in recent years, when many siggraph committee members (at least in rendering) are still establishing their own careers and there is just too much at stake for them not to be aware (maybe on a subconscious level) of the effect of accepting/rejecting a particular paper on their own status. As for the second question (overselling my own stuff), I have always been very careful to expose all known big problems with any proposed technique and not to make any overblown statements. Most likely I did not fully succeed, since I did want to get stuff published, which is next to impossible if one discloses the whole truth.

Proof? I am sure almost everyone has their own ugly siggraph-related stories, and I have had my fair share as well. I will mention just two, since the rest involve papers on which I had co-authors and I do not want to speak for any other people. One case involves the rejection of a paper describing a tone mapping algorithm (it later appeared in the rendering workshop) which is almost identical to an algorithm described in a paper accepted to siggraph (in later literature, the RW paper is rarely quoted along with the siggraph one, demonstrating the "superconference" status). Why? Well, random is random, and one might be willing to live with this. A more recent case, which much more significantly affected my decision, moves beyond randomness right into the shady area of incompetence/agendas. It involves some BRDF work (still unpublished, and it will probably never be now). For one, the siggraph 2005 committee of supposed experts was essentially unable to recognize a non-gaussian/Phong shape of a specular highlight on a standard sphere-under-single-light picture. They then went on to basically tell me that they think the A&S BRDF model would probably work better than the proposed one (guess what "A" stands for in A&S?). As I later found out, they also disliked the tone of my rebuttal, which (who cares that this has nothing to do with the paper's quality) contributed to the rejection decision. [Lesson to younger people: treat the siggraph committee as infallible gods in your rebuttals - they certainly think of themselves as such.] I was quite upset and posted a (what I thought was quite moderate) note online about this. It appeared to attract much more attention than I ever expected, since it was simply stating something which most already knew - that the siggraph process is badly broken. But I guess saying what one thinks in the open is unusual enough in graphics. On June 9, 2005 I even asked to be put on the next siggraph committee, in part to see if things are really as bad as I thought they were.
According to a July 25, 2006 e-mail from Julie Dorsey, the papers chair (the first communication on the matter I received except for the acknowledgement of the request), she "considered my request very carefully", but I was ultimately not asked to serve on the committee. With all this, there was no chance the paper would ever be accepted to siggraph, but for fun, I resubmitted it for siggraph 2006 after including some extra images which address the previous year's complaint. Now nothing could be found to be technically wrong with the paper (after the requested improvements, it got an average rating of 3.0 compared with 3.3 on the original), so it was simply labeled "incremental" and rejected. Late addition: We just received results from a smaller conference on what was probably the last paper I was actively working on. Since there are other people involved, I will not give details, but this is one of the worst cases I have ever seen (and that is saying a lot), with reviewers simply not having read the paper and basing their review essentially on their perception that the particular AREA is not interesting to them. This only confirms the direction the field is going in and makes me quite happy I am getting out of it.

Suggestions? Many people I talked to, including some quite well known graphics researchers, agree (in private) with much of what is written above. Some of them, interestingly, praise siggraph in public, reasoning that nothing can be done, so one just has to play by the rules, and one of these rules is that you do not speak openly against the system. Yet, I do not think it is all that difficult to change the situation by looking at how other, more mature fields operate. Unfortunately, none of this will ever happen - there is too much investment in the system and too little desire to fundamentally change it, especially from the people who actually can make such changes. Some of them even honestly believe that siggraph is the best thing that ever happened to computer graphics. This is a cycle, of course - to be considered a top researcher, you need to publish at siggraph, but if you are able to successfully publish at siggraph, why would you want to change the system? Still, I will put a few things here in the remote hope that someone at least thinks about this.

First and foremost, siggraph should cease to be THE ONLY place people consider for presenting their best papers. It is very common for people to postpone publication of a completed work for a year or even more simply because the authors truly believe it to be "siggraphable". Obviously, the community does not benefit from such a delay, but the authors' decision is certainly understandable given siggraph's overblown status. Yet, in fields such as physics/chemistry, even the top conferences are generally considered to be mostly places for people to meet, to present recent, not yet fully worked out ideas (siggraph sketches are the closest to this) and for the industry to exhibit. It is not uncommon for scientists not to list any conference publications on their CVs. The main reason graphics people give for why journals are not appropriate is that they are "too slow". Yet, given the modern publishing industry, there is no technical reason why a journal cannot be as fast as (or faster than) the half-year conference time frame. In fact, many top hard science journals ARE faster (if you do not trust me, just pick up a recent issue of Phys. Rev. or PRL - they say when a paper was submitted). What is needed, of course, is the commitment of the community to take non-siggraph reviewing seriously (see below).

Of course, conferences have a much greater standing in computer science, and it would probably be too extreme to go directly to the "hard science" model. Yet, note that in any other CS field there are SEVERAL truly top conferences (not perceived as "second-best" as they are in graphics) which are considered truly equal to each other, with none having "superconference" status. This also at least partially takes care of the "hot area" issue, since each conference might have a different perspective on what is "hot". I know of a few people who consciously do not submit their best work to siggraph, but unfortunately, since there are so few of them (and, to a degree, since nobody knows that this is their own decision), they only hurt themselves at the moment. But in the long run, the right thing to do with siggraph is to keep the exhibit, courses and sketches but completely kill the papers program (or keep it just for presenting the work of a few overblown egos who need a reason to claim they are above everyone else).

Part of the problem is the overwhelming number of submissions to all top conferences/journals, which makes it hard to provide reasonable quality reviews. However, this problem can be at least partially reduced. First, people should exercise a moderate amount of self-control and try not to apply a Monte Carlo approach to submission ("let's just try - it might happen"). Again, this is very much understandable given siggraph's status and (if one removes the top and bottom 10%) the complete randomness of its decision process.

Still, everything mentioned above will never replace the basic ethics of a reviewer, who by agreeing to review should understand that (s)he takes on a serious task requiring a serious time commitment and will make an honest attempt to impartially evaluate the paper while keeping a generally friendly attitude towards the work of others. If you can't do it, don't take it on. I (and probably most people) would rather have a decision on my paper based on two thorough reviews instead of five which each took less than ten minutes. It would also be nice to have an "if in doubt, accept" attitude while reviewing rather than the current "find any reason to reject" one. Those who believe that an increased number of reviewers makes much difference should remember that MC noise goes as 1/sqrt(N), so for a x2 decrease in noise level one needs a x4 increase in the number of samples. In other words, one needs 8 reviewers (a number which would be considered impractical by most committees) to get half the noise level which can be obtained with just two. [And, of course, the recent siggraph push to go from 4 to 5 and then to 6 reviewers per paper is simply ridiculous when viewed from this position.] Hard science people realized this, and my experience is that in most cases only two (or at most three in difficult cases) reviews are solicited. All that a large number of reviews per paper does is decrease the average quality of a review and make it harder to find qualified reviewers, since people have to do more of them. Plus, in the end, any normal committee member will probably non-uniformly weigh reviews of different quality and from different people anyway (and they rightfully should). So, why waste people's time? For a truly noise-free evaluation of a paper, it has to be presented to the whole community, but this means it has to be published. Again, hard science people recognized this and created paperless publishing systems such as arXiv, which essentially became the main way of information dissemination.
But this is a different culture, of course - people tend to run all their work by the whole community before any formal publication, which allows authors to immediately gauge the interest in their work, get comments (and based on them improve the quality of the eventual submission), etc. Of course, this requires a generally friendly attitude, which is non-existent within graphics. The system is formally open to computer graphics papers as well, but with everyone so engaged in trashing other people's work and trying to publish their own, it is no surprise that nobody uses it - there were a whopping 11 posts in 2005.
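The 1/sqrt(N) argument above is easy to check with a toy simulation (a sketch of mine, under the obviously simplistic assumption that reviewer scores behave like independent noisy samples around a paper's "true" score):

```python
import math
import random

def mean_score_noise(n_reviewers, true_score=3.0, sigma=1.0,
                     trials=20000, seed=1):
    """Std. dev. of the average of n_reviewers independent noisy scores.

    Each score is modeled as the paper's true score plus Gaussian
    reviewer noise - an illustrative assumption, nothing more.
    """
    rng = random.Random(seed)
    means = [
        sum(rng.gauss(true_score, sigma) for _ in range(n_reviewers))
        / n_reviewers
        for _ in range(trials)
    ]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

# The noise of the committee's average score falls as 1/sqrt(N):
# going from 2 to 8 reviewers (4x the reviewing work) only halves it.
print(mean_score_noise(2) / mean_score_noise(8))  # close to 2.0
```

Under this model, the noise ratio between 2 and 8 reviewers comes out near sqrt(8/2) = 2, i.e. quadrupling the reviewing load buys only a factor-of-two reduction in randomness - which is the whole point.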

Another way of reducing the reviewing load, and through this increasing its quality, is to use an editor's power to reject a paper without review if it is clearly below standard. This, of course, puts an extra burden on the editors to be very fair and very knowledgeable, but after all, they are supposed to be top experts! This could easily eliminate, say, 10-20% of siggraph submissions (and at least as many for journals) which have zero chance anyway, and might reduce the MC-type submissions a bit. I am not sure if editors/committees currently even have such powers. It is, of course, harder to implement for conferences, which have a strict deadline, and it is, at least initially, guaranteed to create a stream of complaints, but it will hopefully lead to better self-control on the part of researchers. Hard sciences implement this in some top journals (such as PRL) and seek to expand this practice - see http://prl.aps.org/edannounce/PRLv95i7.html

Finally, while I keep referring to hard science PROCEDURES as something to follow, I would like to emphasize that computer graphics is simply not a hard science, and current attempts to present it as such are misguided at best. One can pretend as long as one wants to be doing something scientific, but in the end the success or failure of any specific technique is, quite literally, in the eye of the beholder. Most real scientists would laugh at the claim that rendering papers are "scientific" - I remember one (and the only one I ever saw) case when a rendering paper (one uniformly considered among the most solid in the field) was presented to a mixed audience. After the talk, the physicists in the audience easily picked it apart, pointing out lots of very fundamental problems in the assumptions and the derivation while finding many inconsistencies in the final images. In the end, the rather frustrated presenter had to admit that (despite all the heavy math in the paper) the only real justification remaining was that the images looked good to him and the rendering community. Therefore, the following reasons should, by themselves, never be sufficient for rejection (I have had all of these for my papers, sometimes opposing ones for the same submission): 1. "results do not look good to me" - if you don't like the appearance of the results, it does not mean (except in extreme cases of clear artifacts and such) that everyone else dislikes them. It works the other way around too, of course. Extensive user studies are great, but they are almost never practical, so rejecting because of a lack of them is simply inappropriate in most cases. 2. "there is no scientific justification for this technique". If one exists, great, but in general, there is no need to have one. Similarly, I consider "it's a hack" to be a compliment (if it works).
In rendering, for example, it is relatively easy to crunch the math if you have the right technical skills, but coming up with a useful shortcut usually requires a non-trivial idea. Almost all techniques which are actually used in practice are hacks to one degree or another.

Comments? In the unlikely case you have read this document to this point, you might have your own view on the health of computer graphics research. Well, it does not make much difference to the author which way the community goes now, but in the even more unlikely case you want to share your own thoughts, I am willing to post them here, anonymously if requested (there is, as I mentioned, a penalty for speaking openly). The only requirement is that you should have spent enough time within the community to know what you are talking about. I honestly do not think, however, that there is enough genuine interest out there to continue this discussion. Late addition: After this page was up for only a few days, I received some comments from a senior researcher which included a suggestion of having a discussion forum on these and related issues. Well, it now exists and should allow anonymous posts. It also does not require you to create any accounts to participate - just click "post a new message" and make your point of view known. To some degree, its existence is a test on its own - let's see if anyone in the community even cares about all this. You are welcome to put a direct link to the forum on your own web page and otherwise spread news about it - I do not pretend to be known enough for this page to ever be visited by any significant fraction of the community.

Acknowledgements. I would like to thank several people. I actively worked with a few of them over the years, while some I have never even met (and a few might not even know about my existence). I owe many of them a lot for their support, and all of them just for doing the excellent work which made graphics research interesting to me. This "A-list" (presented in no particular order) is probably incomplete and I might add people as I remember them. Peter Shirley, Pat Hanrahan, Jos Stam, Holly Rushmeier, Greg Turk, Steve Marschner, Jerry Tessendorf, Nelson Max, Hugues Hoppe, Hong Qin, Klaus Mueller, Arie Kaufman, Hanspeter Pfister, Greg Ward, George Drettakis, Simon Premoze, Jim Arvo, Adam Finkelstein, Daniel Cohen-Or, Eric Lafortune, Philipp Slusallek, Alexander Keller, David Gu, David Forsyth, Alex Efros, Elaine Cohen, Jessica Hodgins, Przemyslaw Prusinkiewicz, Michael M. Stark, Jack Tumblin, Amitabh Varshney

I wish the best of luck to everyone who has the strength to stay within the system and continue doing interesting work. I can only say "So long and thanks for all the fish..." (no hints intended, just like the book)