Thursday, November 10, 2011

musings on peer review

I've been reading up on a new field of study recently and as a result have been thinking about information provenance, reputation and collaboration.

Once upon a time, getting up to speed on a new topic in computing science required finding a good library and wading through a stack of conference proceedings and journals. Now it requires an online search and, critically, the ability to evaluate the quality of the massive quantity of material that comes back.

Formal peer review of publications is gone, unable to scale to online use. Meanwhile online systems for review, comment and reputation management are still in their infancy. The best we have right now is an ad-hoc stew of brands, social graphs and distributed conversations.

Those with enough time to invest can build a mental model of a new field that, after the initial investment in learning the landscape, allows them to maintain an ongoing overview of developments with relatively little effort. Those whose work only tangentially involves a deeply technical topic don't have this luxury. They typically perform searches not to learn about the new field in general, but to get specific solutions to a problem outside their core field of expertise, after which they move on. Such users vastly outnumber the domain experts for niche topics like transactions.

What implications does this new model of communication and information dissemination have for the behaviour of professed experts in technical fields? Clearly our obligation to ensure that the information we provide is accurate remains unchanged, and is in any case in our own best interest. Should we consider there to be an implied professional obligation to publish information only in a context that allows feedback to be attached directly to it, e.g. a blog with comments enabled? Or even one that allows collaborative editing, e.g. a wiki? How about taking time out to correct misinformation where we find it - is that now part of our social contract?

Question for the audience: Where do you get your information on transactions, and how do you assess its quality?

1 comment:

dhartford said...

To paraphrase the question: "How do you learn a domain (transactions) that is not your core field of expertise, while still being able to measure the quality of what you find?" For me, it's by real-world (critical) example. People who demonstrate how they solved a problem using the domain (transactions) with code, and who are critical about their own attempts and failures (so that when you run into the same issues you can go back and think 'oh yeah, that's why they didn't do it that way'), are usually the highest-quality source of information for getting up to speed on a subject in short order.

Unfortunately, I can see that for a domain expert, coming up with problems people can relate to and showing how the domain solves those (many, many nuanced) problems can be daunting without others in the domain collecting and sharing the examples (and then, as mentioned, peer-reviewing the examples against the problems/domain).

my two coppers,
-D