I find that there is always an implicit, and often explicit, assumption about the linearity of the process of doing a systematic review. Put simply, we do the scope, we then do the search, the appraisal, the data extraction, and then we try to make sense of it all. When writing up the review, we use neat flow diagrams that say we found 1395 articles in the original search, that there were 1148 after de-duping, 386 after first appraisal and so on… it’s all so very neat and so very linear. The reality, of course, is that often after the search and appraisal have been done, there is a drip, drip of articles that suddenly appear from here and there (from the references of found articles, random Google searches and the like) that did not come up in the original search; the implication, when this almost inevitably happens, is that the search, and by extension the searcher, was in some way a ‘failure’. (I could use a baseball analogy here about the hiding-to-nothing that pitchers are on, but I shan’t… it would probably just confuse.)
But of course the search is not a failure. Searching is an iterative process: an initial review of MEDLINE, EMBASE etc. will never pick up everything (we all know about balancing sensitivity, precision etc.). I had never heard the term before (being a recent convert to the Info world), but ‘reference harvesting’ (or snowballing, if you want to be risqué) is a phrase that “refers to locating relevant references from the bibliographies of existing scientific articles in a continuous process of evaluation, inclusion and synthesis.” Note the ‘continuous process’; the searcher does not just do the search and then walk away.
What we, or at least I, need to do, is:
1. Educate those non-searchers around us that the searching process is iterative and ongoing, and that not every relevant document will be found in the initial search (because it is not indexed in the databases searched, because it has not been indexed properly, or because a peculiarly written title/abstract means that free-text terms did not pick it up, etc.)
2. Create and use search assessment forms or checklists (or whatever you use to track your search), with explicit reference to reference harvesting, so that the iterative process can be tracked.
3. Use keywords etc. in your bibliographic software (Ref Man, EndNote…) to record, for each article, where it was found (e.g. MEDLINE, Cochrane, hand searching, Google search…), and of course ensure that each article found and appraised has been suitably recorded.
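The bookkeeping in steps 2 and 3 can be sketched in a few lines of code. This is purely illustrative: the field names and the `add_article` helper are my own invention, not the API of EndNote, Reference Manager, or any real tool; the point is simply that each unique article keeps a list of every source that surfaced it, including the late arrivals from reference harvesting.

```python
def add_article(library, title, source):
    """Record an article and the search source that found it.

    If the title is already present (a duplicate turned up by a later
    iteration of the search), just note the extra source rather than
    creating a second record. A lower-cased title is used here as a
    crude de-duplication key.
    """
    key = title.strip().lower()
    entry = library.setdefault(key, {"title": title, "sources": []})
    if source not in entry["sources"]:
        entry["sources"].append(source)
    return entry

library = {}
add_article(library, "Example trial of X", "MEDLINE")
add_article(library, "Example trial of X", "Google search")  # duplicate: only the new source is added
add_article(library, "Guideline on Y", "Hand searching")     # found later, via reference harvesting
```

After these calls the library holds two unique records, each carrying its full provenance, so a flow diagram (or an honest account of the iterative search) can be reconstructed at write-up time.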
I think these steps are particularly important when hunting for guidelines. Never a particularly systematic process…