Alexis St-Gelais, M. Sc., chimiste
We have just discussed the various types of scientific literature and their peer-review process. All in all, peer review is a huge step toward ensuring that quality results are published and discussed within and outside the scientific community. Alas, as with any activity involving human beings, the process has its flaws. As such, the mere fact that a manuscript has been published in a journal claiming to be peer-reviewed (or looking like it) is not sufficient reason to entirely drop one's guard. Here are a number of elements one should know about before venturing into the world of scientific reading.
Money Attracts Predators
There are two business models for peer-reviewed journals, which are, for the most part, managed by companies seeking profit. In the classical model, submission and review of manuscripts are free, but readers must pay quite a bit of coin to access the published papers. In recent years, a big shift toward open access has emerged. Under this philosophy, a paper can be freely accessed by anyone, provided that the authors of the study paid a fixed amount (from a few hundred to thousands of dollars) to cover the publishing fees. Funding agencies in Europe have notably begun including amounts specifically intended to cover open access fees, and request that this publishing model be adopted as a condition of their financial support.
But this shift had an unwanted side effect.
Most scientists, for better or worse, are heavily judged on the number (and, sometimes, but not always, the quality) of publications they release. In the academic world, this is considered a good measure of their research efficiency. This creates an obvious incentive to publish or perish, a well-known saying in any scientific field. Under the open access model, scientists desperately wanting their work published are thus ready to pay for it. And this is how predatory journals were born.
These journals are simulacra of serious publications, operating under seemingly trustworthy names. They offer relatively modest fees to cover publishing costs – and for good reason: most of the time, the peer-review process is either absent (and grossly simulated) or overly lenient. In reality, any piece of paper, no matter how flawed, will be published, and the journal will grab the coin. With such shortcuts, they can also offer attractively fast publication paces, and they will snap up anything that decent journals deemed unworthy of publication.
This is why a list of predatory publishers and a list of predatory standalone journals are regularly updated. The sad thing is that valid conclusions can end up in predatory journals (a given predatory publisher can run several journals), and this casts a troubling shadow over otherwise valid results. Not everything in those journals should be trashed altogether, but a high level of critical thinking is advised when reading their articles. A full list of the criteria used to determine whether a journal is predatory was established by Jeffrey Beall, who gave his name to the aforementioned lists – Beall's lists.
For the sake of clarity, let us say that the whole concept of predatory publishing is based not on science, but on opinions. Some people challenge this view, and while several journals are without doubt filled with nonsense, some predation criteria are debatable and unduly crush the efforts of emerging publishers that simply have not yet reached maturity. It can also be argued that predatory publishing is a symptom of an underlying problem – here is an example.
Reviewers are Humans
Reviewers have their own affairs to take care of. They might not have time to properly examine what they receive, and may let mistakes slip through. Fortunately, using several reviewers increases the odds that at least one will pay enough attention.
Reviewers might also have preconceived ideas about their topic, and reject new yet valid conclusions because they do not appreciate the way these conflict with their own views. Once again, several reviewers might hold different views and present opposing recommendations to the Editor. The general trend, however, is to trust the one who rejects rather than those who accept willingly.
Finally, reviewers do not know everything. A statistician reviewing a paper covering, say, a set of chemical syntheses, their biological testing, and the interpretation of the results through statistical methods will be able to accurately judge the worth of the last part, but hardly of the rest of the manuscript. The use of several reviewers with complementary areas of expertise can minimize this problem.
The multiplicity of reviewers is thus, in my opinion, a useful factor in the quality of peer review, but there are limits to how many can be assigned to a single task. It can also be challenging to find enough reviewers to properly run a journal, given that the global volume of incoming manuscripts is steadily growing – once again due to the publish or perish pressure.
Still, I believe the peer-review process is useful. I must say that this opinion is not unanimously shared. For example, one can read the rather gloomy view of Richard Smith, former Editor of the British Medical Journal, on the matter.
Conflicts of Interest
Scientists are part of society. They are not otherworldly beings, and curiosity is not always their only motive to move forward. Conflicts of interest may arise – for example, one of the coauthors of our study on Schinus molle is also an operator of the distillation facility. Consequently, he has a financial interest in seeing the product we studied on the market, and this was stated in our article. This should not prevent scientists from making valid contributions to the general knowledge, but such cases should be clearly stated at the end of the article. So when looking at an article, skip to the disclosure statement at the bottom – and keep what is said there in mind when reading the conclusions.
Failure to declare a conflict of interest is highly questionable behavior. Such would be the case, for example, if an important essential oil distributor participated in a study on an oil he sells, promoted the findings of the paper, but said not a word about this situation within the manuscript. Sadly, judging whether there is a conflict or not is largely subjective.
I can hardly recall how many times I have rolled my eyes reading yet another article reporting phthalates (here is one I picked up at random*) as important constituents of plant extracts or essential oils. When becoming familiar with a given field, one starts to get a feel for what makes sense and what does not. Phthalates are not produced by plants, and are extremely common in plastics, laboratory solvents, and even protective gloves. The odds are really, really high that the presence of a phthalate in a matrix arises from contamination, and even if it is detected or isolated, it should not be considered natural. So, whenever I see a phthalate in a discussion or a table of results, I start questioning the whole article. Spotting such unusual conclusions should encourage the reader to use critical thinking. The same would be true if a single paper reported something completely different from a fair number of other reports all heading the same way, but failed to explain the discrepancy or at least back it up with several valid replications.
There are likely other parameters that should be taken into account when reading scientific literature, and that I do not have clearly in mind as of now. What should be remembered from all this is that caution is always advised when dealing with scientific reports (or any writing derived thereof). Training, sufficient time, and solid theoretical foundations are required to become proficient at this skill. In case of doubt, the best weapon is always to talk with other knowledgeable people, and to keep an open but critical mind.
*It is also, by the way, published in a predatory journal, and its content should not have been published at all given the highly tentative identifications provided in the table. It simply looks like a copy/paste from an MS database search result list.