The Leiden Manifesto for Research Metrics proposes ten principles to guide research evaluation.
Here’s the .pdf (which may download automatically).
Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.
Here’s one guideline:
5. Allow those evaluated to verify data and analysis. To ensure data quality, all researchers included in bibliometric studies should be able to check that their outputs have been correctly identified. Everyone directing and managing evaluation processes should assure data accuracy, through self-verification or third-party audit. Universities could implement this in their research information systems and it should be a guiding principle in the selection of providers of these systems. Accurate, high-quality data take time and money to collate and process. Budget for it.
Yes, agreement on the fact base under discussion would seem an obvious requirement, especially in cases where the researchers’ findings are controversial and/or may affect the careers or portfolios of interested parties.
Otherwise, findings can simply be misrepresented or inappropriate inferences drawn. To say nothing of academic sneers, or of stirring up science writers to wave their pom-poms for an opposing view.
Hat tip: Pos-Darwinista