SEAD White Papers review process
Draft in progress.
By October 15:
We identify a panel of reviewers. Assuming 75 papers, a panel of 5 people would review 15 papers each. (Better: 7 or 8 people reviewing about 10 papers each.) Note that these loads assume one review per paper; if each paper gets multiple reviews, loads scale accordingly.
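As a quick sanity check on these numbers, the workload arithmetic can be sketched as follows (the 75-paper figure comes from this draft; the function name and the combinations printed are illustrative):

```python
# Sketch of reviewer workload under different panel sizes and
# reviews-per-paper counts. 75 papers is the draft's assumption.
PAPERS = 75

def load_per_reviewer(reviewers, reviews_per_paper=1):
    """Total reviews divided across the panel, rounded up."""
    total = PAPERS * reviews_per_paper
    return -(-total // reviewers)  # ceiling division

for panel in (5, 7, 8):
    for k in (1, 2, 3):
        print(f"{panel} reviewers, {k} reviews/paper -> "
              f"{load_per_reviewer(panel, k)} papers each")
```

With a single review per paper this reproduces the draft's 15-per-reviewer (panel of 5) and roughly 10-per-reviewer (panel of 7 or 8) figures; with 2 or 3 reviews per paper, loads double or triple.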
By November 1:
Reviewers identify special areas of interest/expertise.
By November 15:
By November 21:
Amy and Rodrigo catalog all the papers.
By November 30:
Roger, Stro, and Lurl somehow assign the papers to appropriate categories. (How?)
By December 7 (perhaps as part of the agenda for a SEAD meeting in Austin):
Roger, Stro, and Lurl match papers to reviewers, ensuring each paper will get multiple reviews. (2? 3? 4?) The distribution looks something like this:
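One simple way to do the matching step above is a round-robin assignment, which guarantees every paper gets the chosen number of distinct reviewers while keeping reviewer loads within one paper of each other. This is only a sketch of the mechanics (the paper and reviewer names are placeholders; it ignores areas of expertise and conflicts of interest, which the panel would still need to handle by hand):

```python
import itertools

def assign_papers(papers, reviewers, reviews_per_paper=2):
    """Round-robin: give each paper `reviews_per_paper` distinct
    reviewers while keeping every reviewer's load within one
    paper of the others."""
    # Consecutive picks from the cycle are distinct as long as
    # reviews_per_paper does not exceed the panel size.
    assert reviews_per_paper <= len(reviewers)
    cycle = itertools.cycle(reviewers)
    assignments = {r: [] for r in reviewers}
    for paper in papers:
        for _ in range(reviews_per_paper):
            assignments[next(cycle)].append(paper)
    return assignments
```

For example, 75 papers, 8 reviewers, and 3 reviews per paper yields 225 reviews total, so each reviewer carries 28 or 29 papers.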
We need to develop and communicate review criteria, emphasizing that we will select only certain papers for inclusion in the resulting SEAD report. With luck, we will simply be able to ask reviewers for a thumbs-up or thumbs-down, rather than developing some sort of scoring system.
By January 31:
Reviewers return written reviews with their recommendations (thumbs-up/down, or scores if we end up adopting a scoring system).
By February 8:
Amy (?) records the inclusion feedback in the table, so the spreadsheet now looks something like this:
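The recording step above amounts to tallying the thumbs-up votes per paper and flagging each paper for inclusion once it clears a threshold. A minimal sketch, assuming a simple majority-style rule (the `min_yes` threshold and the row format are hypothetical; the actual rule is still to be decided):

```python
from collections import defaultdict

def include(decision_rows, min_yes=2):
    """Hypothetical tally: include a paper when at least `min_yes`
    reviewers gave it a thumbs-up. Each row is (paper, verdict)
    where verdict is "up" or "down"."""
    yes = defaultdict(int)
    papers = set()
    for paper, verdict in decision_rows:
        papers.add(paper)
        if verdict == "up":
            yes[paper] += 1
    return {p: yes[p] >= min_yes for p in papers}
```

Papers whose reviews fall near the threshold are exactly the "questionable" cases left for Roger, Stro, and Lurl to discuss in the next step.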
By February 15:
Roger, Stro, and Lurl discuss any papers with questionable reviews. Roger sends feedback to authors whose papers will not be included in the final SEAD document, encouraging them to publish elsewhere.
By February 22:
Amy (?) compiles the Suggested Actions sections of all the accepted papers.
By March 15:
Roger, Stro, and Lurl discuss implications of the compiled Suggested Actions and determine how best to put forward the results.