Just a quick warning that there is currently a problem with the new OvidSP platform when you try to export/download results using the .txt format. Unfortunately it doesn’t tell you there is a problem – everything seems just dandy in fact – but when you look closely you might be missing a handful of records. Quite worrying if you’re doing a systematic review and dealing with many hundreds of records: you might miss the key study and you’d never know it…
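If you’re worried your own export has silently dropped records, a quick count is easy to script. A minimal sketch in Python – assuming each record in the .txt export starts with a numbered header like “1. ”, which may not match your export layout, so check a real file first:

```python
# Sketch: sanity-check a .txt export against the hit count shown on screen.
# Assumption (adjust for your export format): each record begins with a line
# starting "<number>. ", as in a typical citation-style export.
import re

def count_records(text, marker=r"^\d+\.\s"):
    """Count record headers matching `marker` (a regex) in an export's text."""
    return len(re.findall(marker, text, flags=re.MULTILINE))

export_text = "1. First record...\n\n2. Second record...\n\n3. Third record...\n"
expected = 5  # the hit count the search interface reported on screen
found = count_records(export_text)
if found != expected:
    print(f"Warning: expected {expected} records but export contains {found}")
```

Crude, but it would have flagged the missing records straight away instead of leaving you to spot them by eye.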
When I noticed this was happening I asked Ovid what was going on. They confirmed there was a known problem. However, they did not apologise or acknowledge in any way that it might be an inconvenience. Thanks guys, good customer service. They just said my name would be added to the list of customers who have reported the issue and who will be told when they come up with a solution. Nice.
PubMed has a new clinical queries page, apparently. To be honest I used the old one so infrequently that it could have changed six months ago and I never would have noticed. I tend to use PubMed for quick and not so dirty searches of the literature but if I’m doing a “real search” I use Ovid because of its slight value-add functionality plus it has our company’s access to Embase. Perhaps because of this access to Ovid I’ve never really paid all that much attention to the development of PubMed which, particularly over the last six months or so, seems to be whizzing ahead.
Anyway, the clinical queries page is quite fun. You put in your term and get results for “clinical study categories”, “systematic reviews” (not really so much “systematic reviews”, more “aggregate research” or “tertiary research” or similar), and “medical genetics”; you can then click “see all” for, well, seeing all, and there are drop-down menus for whether you want therapy or etiology etc., and broad or narrow filters. It’s easiest just to play with it. I like the fact that at the bottom of each list, if you click the word “filter”, it will show you the actual search string being used to filter your results (for therapy or etiology, broad or narrow etc.) and the sensitivity/specificity scores of said filters (precision would be helpful too), along with the reference to the original paper on which the filters were based. All nice and transparent, and helpful if you wish to translate e.g. the prognosis filter for use in another database.
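That transparency also makes the filters easy to reuse outside the web page. A minimal sketch, ANDing a filter string onto your own topic and building an NCBI E-utilities search URL; the filter text below is the commonly cited narrow therapy filter, but copy the exact string from the Clinical Queries page itself rather than trusting mine:

```python
# Sketch: reuse a Clinical Queries-style filter by ANDing it onto a topic
# and building a PubMed esearch URL via NCBI's E-utilities.
# The filter string is illustrative - take the authoritative version from
# the "filter" link on the Clinical Queries page.
from urllib.parse import urlencode

THERAPY_NARROW = ('(randomized controlled trial[Publication Type] OR '
                  '(randomized[Title/Abstract] AND controlled[Title/Abstract] '
                  'AND trial[Title/Abstract]))')

def clinical_query_url(topic, filter_string=THERAPY_NARROW):
    """Build an esearch URL that ANDs a topic with a study-design filter."""
    term = f"({topic}) AND {filter_string}"
    return ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
            + urlencode({"db": "pubmed", "term": term}))

print(clinical_query_url("asthma"))
```

The same trick works for translating a filter into another database’s syntax: it’s just a string you bolt onto your subject search.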
Of course, though, many of us want to add our own favourite filters. I always had a rough idea you could do this but had never bothered to really look into it; it turns out it’s an absolute breeze. Fortunately I don’t have to describe how to add your own filters as Laika has already done such a good job of it (with screenshots and everything), and you can now add up to 15 of your own favourite search strings. Not sure what to add? Your friends at CRD/InterTASC will help you out. Once you get started you’ll be having such a whale of a time that you’ll be looking for excuses to do quick and, as mentioned before, not so very dirty searches in PubMed for all and sundry – dragging people in from the corridor, that sort of stuff. I’m sure you’ve been doing this for years, but it’s all new to me.
I’m really beginning to like PubMed. A big thank you to the US taxpayer.
This is a cool little toy. If you’ve ever wondered about the relative sizes of a grain of rice and a skin cell, or a lysosome and a carbon atom, play with the slider underneath the graphic. It’s from the University of Utah.
Our other Alan wrote a piece a couple of posts ago on, amongst other things, this article in the Nursing Times. Basically it’s a press release for NICE, written by an implementation advisor for said large, powerful organisation. It describes how NHS Evidence will be so very useful for nurses, and I suppose it could be argued that it’s useful because the odd nurse (not literally, you understand) may flick through the magazine, scan the article, and go and try out a bit of evidence hunting themselves. Surely a happy outcome.
But I fear that our odd nurse will only try out NHS Evidence once, maybe even twice, but probably not a third time. Why? Well, according to an issue of the Eyes on Evidence newsletter (more PR from NICE) the top five most frequently used search terms were 1) asthma; 2) prostate AND cancer; 3) evidence; 4) flu OR influenza; 5) breastfeeding, so we get an idea of the level of sophistication behind most searches. Let’s try the top one, asthma: 5026 hits, including 516 guidelines and 1627 drug information pieces. You get the general idea. The first ‘guideline’ is “Guidelines for the prevention, identification and management of occupational asthma: evidence review and recommendations”, a pretty hardcore 88-page PDF of an evidence review, complete with evidence tables, from the British Occupational Health Research Foundation. Not at all helpful, I would’ve thought, to our odd yet eager nurse. If s/he wanted a good review of what to do with someone with asthma, surely they’d just go to an evidence synthesis product such as Clinical Evidence, CKS, Dynamed or the Map of Medicine.
The problem is that NHS Evidence’s obsession with Google means that their search engine suffers exactly the same problem as Google (too many hits) but does not have the same saving grace (that the one you really want is at the top). Plus, and this is still really the crux of my problem, I still don’t have a clue who NHS Evidence is meant to be for – neither, I think, do the people behind NHS Evidence. “All things to all people” often ends up as nothing to no-one.
I haven’t really got into NHS Evidence yet, so maybe I’m missing something, but it all seems something of a nightmare.
I want to look for guidelines, as you do, if you’re a medical informationy whatnot thingy. Let’s say I was looking for lower back pain and related conditions. The search “Low back or lower back or spinal pain or back ache or backache or Spondylolisthesis or scoliosis or Sciatica or “Spinal stenosis” or Lumbago” gave 1320 ‘guidelines’ (i.e. using the Guidelines filter) in NHS Evidence, 208 of which are apparently from NICE. This is silly. There’s no way that there are 1320 guidelines out there, or 208 NICE guidelines. In fact, when I looked through a few of the results, many of the hits were duplicates or ‘empty’ references. Do I really want to spend my time going through 1320 hits for a handful of useful guidelines? No, is the answer to that.
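Weeding those duplicates by hand is exactly the sort of chore a few lines of script can take over. A rough sketch – the records here are entirely made up, and a real NHS Evidence export would need parsing first – deduplicating hits by normalised title:

```python
# Sketch: weed duplicate guideline hits from a results list by normalised
# title. Illustrative only - the record dicts below are invented.
import re

def normalise(title):
    """Lower-case a title and collapse punctuation/whitespace for rough matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe(records):
    """Keep the first record for each normalised title, drop the rest."""
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec["title"])
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

hits = [
    {"title": "Low Back Pain: early management"},
    {"title": "Low back pain - early management."},   # duplicate in disguise
    {"title": "Sciatica referral pathway"},
]
print(len(dedupe(hits)))  # → 2
```

It won’t catch everything (retitled reissues, say), but it shows how little effort basic duplicate suppression takes – which makes its absence from the search results all the more irritating.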
Now then, if I search with the same terms in the good old-fashioned Guidelines Finder (now a ‘specialist collection’), I get 47 hits. Forty-seven useful and relevant (for the most part) hits, something I can quickly browse through and extract the few guidelines I actually want. Perfect. Does the job.
Therefore Guidelines Finder, at least for this common situation, is better than NHS Evidence. Much better. But Guidelines Finder might be under threat from the monster that is NHS Evidence. On the front page of Guidelines Finder they write: “This collection is now NHS Evidence – national library of guidelines, and you will continue to be able to access all the content and features. The existing url for the collection will remain for now, but is likely to change later in the year as the specialist collections become fully integrated into the NHS Evidence portal.” “Fully integrated” – sounds scary.
Now I know that NHS Evidence and the specialist collections are fundamentally different technologies, and both may have their uses, but in the age of information overload the collections rule – don’t you think? A request to NICE – please keep the specialist collections.
I’d never heard of Hanlon’s razor before. Apparently it is an adage that reads “Never attribute to malice that which can be adequately explained by stupidity”. I rather like that; I’ll have to remember it. It is a nice way to look at the world, as we can all be stupid sometimes. Take for example the new company Clinical Reader. The company has an online product that is basically an RSS reader, except that they decide which RSS feeds to track, not you. I came across it a few days ago as some clinicians were discussing what a good service it was on some mailing list. Well, I thought, that’s clinicians for you. The product didn’t seem bad enough to comment on, but likewise certainly didn’t seem interesting enough to comment on either – at least not from my point of view. So I thought to myself “well, our handsome and fragrant readership will probably come across it soon enough, and they can make their own perfectly balanced minds up about it…”, and left it there.
But today I see another twist in the Clinical Reader story, and in what happens when you get involved in social networking tools without really knowing what you’re doing. The thing was that Nicole Dettmar (evidently a fan of The Prisoner) had pointed out in her blog that Clinical Reader were implying that they had been awarded ‘five stars’ by institutions such as the British Library, the NLM, Imperial College, The Lancet etc. She pointed out to them on Twitter that the NLM does not endorse anything, and that they ought to do something about it, and promptly received a reply threatening legal action (I love the use of ‘kindly’):
I mean to say, what a stupid (or malicious) thing to do. Of course everyone picked up on it and they received a barrage of tweets and blog commentary. As of writing they have backed down, which they should, because they are plainly in the wrong, saying “We are keen to engage the twitter community the tweet made by a junior member of the team was poor judgment”. However, the “five stars according to…” graphic remains throughout their site once you get past the first page.
Malicious or stupid? You decide. Either way it doesn’t reflect well on Clinical Reader.