Currently, there are three ways of determining the efficacy of scientific consumables: 1) trust the datasheet and/or references provided by the supplier, 2) trust the anecdotal experience of friends and colleagues, or 3) trust no one and perform the experiments required to validate the product in your experimental set-up yourself. In a perfect world, the first option would work the first time and the references would be comprehensive and up to date. However, more often than not, the information available from the supplier is only an estimate of the concentration or protocol to be used in a specific model. In addition, the references listed are highly unlikely to report any failings of a product. In reality, the second option is usually more reliable: colleagues can share their first-hand experience of what worked and, more importantly, what didn’t. However, the best option, even if much more time-consuming, and the one you will most likely end up pursuing anyway when the first two fall through, is option three. Only this option can fully validate the product in your system under your experimental conditions.

So, if almost everyone is performing these validation experiments, why are they not available for the scientific community to read? They are a vital part of producing the data for publishable papers, so why are they not part of the final report? I would argue that one way we could implement a system for reporting the quality and performance of scientific consumables is to add a mandatory sub-report to every journal submission. This report should take the form of a database entry and exist as part of the online supplementary material, so no great amount of extra work is required of the authors and there is no print space limitation. It should outline the products and suppliers used, the tissue or model system they were used in, any alterations to the recommended protocols, and so on. With enough compliance from journals and researchers, this whole system could grow into an online database with unique identifiers or external hyperlinks for individual commercial products that link back to a repository of shared information, much like the authorship database ORCID (http://orcid.org). Authors could then simply update this database with their experimental experiences under defined headings and add the product links to their materials and methods sections. These unique identifiers could then also be added to the suppliers’ webpages and datasheets, as well as to the literature.
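To make the idea concrete, here is a minimal sketch of what one such database entry might contain, written in Python. No such schema currently exists; every field name, identifier format, and product detail below is a hypothetical illustration of the kind of structured record authors could submit:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ConsumableValidationEntry:
    """One shared record for a commercial product; all field names are illustrative."""
    product_id: str                 # unique product identifier, analogous to an ORCID iD
    product_name: str
    supplier: str
    catalogue_number: str
    application: str                # e.g. "immunohistochemistry" or "western blot"
    tissue_or_model: str            # tissue, cell line, or model system tested
    protocol_deviations: str        # alterations to the supplier's recommended protocol
    outcome: str                    # short summary of what worked and what didn't
    citation: Optional[str] = None  # DOI of the paper whose methods section links here

# A hypothetical entry as it might be submitted with a paper's supplementary material
entry = ConsumableValidationEntry(
    product_id="CONS-0000-0001",    # invented identifier format for illustration only
    product_name="Anti-GFAP antibody",
    supplier="ExampleBio",          # fictional supplier
    catalogue_number="EB-1234",
    application="immunohistochemistry",
    tissue_or_model="mouse brain, paraffin sections",
    protocol_deviations="primary antibody diluted 1:500 instead of recommended 1:200",
    outcome="worked; weaker staining at the recommended dilution",
)
print(asdict(entry))  # serialisable form, ready for a database insert
```

Defined headings like these would keep entries comparable across labs while still leaving room for the free-text detail that makes a colleague's anecdote so useful.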

Ultimately, this more transparent approach to method optimization may also have the welcome side effect of improving the reproducibility of published data.
