By Ron Sherman, MD, MSc, DTM&H
The Primary Issues with Systematic Reviews
My contribution to this column is very much overdue. Among other things, I have spent much of the past 12 months preparing to write my first "systematic review." The experience has been both illuminating and frustrating, and I am now left with both respect for the art and grief over its gross inadequacies.
Mind you, I have consulted many systematic reviews for guidance, and wholeheartedly support the concept of unbiased, comprehensive reviews. My issues are not so much with the art form itself, nor even with those who practice the art of conducting these rigorous reviews. Rather, my major frustrations with systematic reviews are these:
1) Many reviewers are not themselves experts in the field they are reviewing, although they may be expert reviewers. We all know that "book knowledge" does not equate to practical knowledge, and this is made even worse when the books contain ambiguous or conflicting information (as is often the case with clinical studies). We look to the systematic review for more than just a blind statistical analysis; we want to find answers – or at least the wisdom to help us with our pressing clinical questions – from a thoughtful interpretation of the limited existing data.
Maybe I am expecting too much from the reviewers; maybe they should only be expected to present their findings and not provide an interpretation. But I expect a good interpretation ("discussion section") even from the authors of basic science research, and therefore I expect the same from the authors of systematic reviews and meta-analyses. I believe the best people to provide this discussion are researchers who can interpret their findings in the context of what they know about the specific field. We already know that "more research is needed"; what we really want to know is "what should I do now?"
2) References retrieved and reviewed are only as good as the terms used to search, and the index terms associated with the relevant articles in the literature. Therefore, important articles may be missed in the search. Perhaps the reviewer has limited the review to randomized clinical trials (RCTs), leaving out non-randomized clinical trials or controlled studies that were not prospectively conducted. Clearly, the RCT is to be held in highest regard; but when such studies are few in number and we must rely on meta-analyses to make clinical decisions, this is precisely the time that we should also consider the results of less pristine controlled clinical studies. Most systematic reviews are carefully crafted so that they do not miss any relevant literature, but one should always ask: what relevant clinical studies have been omitted, and do they support or contradict the reviewer's conclusions?
3) Systematic reviews limit themselves to clinical studies, often ignoring laboratory data. Admittedly, the effects of a treatment or intervention in animals or in tissue culture do not necessarily equate to the effects in humans. Yet, sometimes the laboratory can be a better testing ground, especially when the hypotheses being tested involve significant risks or noxious interventions. Reviewing laboratory data helps me to appreciate potential adverse events, especially when the number of humans studied is too small to detect rare events. When a mechanism of action has been worked out, and when that action fits well with my understanding of the world, then it is easier for me to accept the efficacy findings of a new treatment even when human trials are few or imperfect (at least until more clinical data becomes available). In short, I like to consider the laboratory effects of an intervention in making my clinical decisions, especially when clinical trials are few or lacking in certainty.
4) Too many readers (if not also some authors) erroneously believe that the systematic review eliminates all bias and is the final word on the topic. I have heard people argue that maggot therapy is not effective because a 2002 Cochrane review noted that there was insufficient data to show that maggot therapy was effective! Ten years have passed since then, and many clinical studies of maggot therapy have been published in the interim. The review itself has since been revised (2010), with a very different conclusion about maggot therapy for diabetic foot ulcers. Still, the original review continues to be cited in the literature and by the wound care community at large because it was a Cochrane systematic review. After all, "need we read more?"
I believe we do need to read more. We need to read more into every study, including systematic reviews and meta-analyses, and we need to keep reading the subsequent literature. We need to treat systematic reviews as we would any other piece of published research: a valuable contribution, if it holds up to careful critique, but not a flawless work nor the final word on the topic.
Improving the Standards of Systematic Reviews and Meta-analyses in Wound Care
We should expect more from our reviewers than "more research is needed on this topic," and we should expect more from ourselves as therapists. After critically reading the systematic reviews and all of the other available literature, we must be prepared, willing, and able to make up our own minds, independently, because ultimately – after weighing benefits against risks, weighing one treatment against another, weighing one study against another – it is our job to decide which is the best course of action for each and every patient, in his or her situation, at that present moment.
About The Author
Ron Sherman, MD, MSc, DTM&H has led a long career at the forefront of biotherapy, pioneering the development of medicinal maggots for over 25 years. He is now retired from his faculty position at the University of California, but continues to volunteer as Director and Board Chair of the BTER Foundation, and as Laboratory Director of Monarch Labs.
The views and opinions expressed in this blog are solely those of the author, and do not represent the views of WoundSource, Kestrel Health Information, Inc., its affiliates, or subsidiary companies.