The announcement of the latest Journal Impact Factors has prompted the expected heated discussion about their value, importance and relevance to authors, funders and the rest of academia. Yet despite all the challenges and reservations raised in that discussion, authors still cite the Impact Factor as one of the top elements they take into account when deciding which journal to submit their work to. This got me thinking about the criticism publishers face for referring to Impact Factors when discussing and comparing the performance of their journals.
I recently had to tackle this head on when elements of a financial prospectus, created to inform the financial community of the risks and opportunities associated with our business, were used to make wild claims about our company’s use of Impact Factors. Taken out of context and in isolation, these elements were used to conclude that we were paying lip service to DORA, that we were exploiting Impact Factors to market our journals, and that our only motivation for seeking higher Impact Factors for our OA journals was to drive higher APCs. Nothing could have been further from the truth. My full explanation, which can be read here (Times Higher Education has kindly removed its paywall so the piece can be read in full), sets out how we have significantly changed our business practices in this area.
It also got me thinking that, in all the noise about what is wrong with the Impact Factor, we don’t seem to have progressed very far in deciding what should replace it for each purpose it currently serves. That matters because there is a strong case that the original purpose for which Eugene Garfield created the Impact Factor – comparing the significance of journals within their peer group and tracking their evolution over time – remains valid.
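For readers less familiar with the calculation itself, the standard two-year Impact Factor for a journal in year Y is

\[
\mathrm{JIF}_Y = \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

By construction it is a journal-level average, which is precisely why stretching it to judge individual articles or researchers is such a poor fit.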
The problem, though, is that a journal’s Impact Factor is often applied far beyond its original purpose and used as a proxy for the significance of a single piece of work published in a journal and/or to judge a researcher who has published in that journal. Publishers do not make these judgements – that is not our job – but unfortunately parts of the research community do. For example, academic appointment committees and grant award committees have done this for years and often continue to do so. This in turn affects the behaviour of researchers when submitting their draft articles: in our author survey last year (completed by over 70,000 authors from all disciplines and regions), a journal’s Impact Factor was rated as one of the top four criteria when choosing where to submit a draft article, alongside a journal’s reputation, relevance and quality of peer review.
But with authors still wanting guidance on which journal to submit to, funders and institutions needing to evaluate researchers, and purchasers of journal and book content needing to assess ‘value for money’, we are a long way from Randy Schekman’s utopian world in which no metric is attached to scientific work. This point was very clearly made in John Tregoning’s recent article in Nature, where he asked “How will you judge me if not by impact factor? Stop saying that publication metrics don’t matter, and tell early-career researchers what does” and explained the challenge researchers like him face when their heart says one thing and their head says Impact Factor.
Article-level metrics are the most obvious starting point: article-level usage data, article-level social media mentions and, of course, article citations should be used and built upon. Then there are researcher-level metrics that can be created by aggregating these article-level metrics and combining them with author contribution statements, as sketched below. I would like to see these created to open, transparent standards so that they are easy for all to understand, replicate and apply.
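As a purely illustrative sketch – the field names, the contribution-weighting scheme and the example figures below are my own assumptions, not a description of any existing system – a researcher-level metric built to an open standard might aggregate article-level counts weighted by each author’s stated contribution:

```python
from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    """Article-level signals: citations, usage and social media mentions."""
    citations: int
    downloads: int
    altmetric_mentions: int
    contribution_weight: float  # hypothetical: author's credited share, 0.0-1.0

def researcher_profile(articles: list[ArticleMetrics]) -> dict[str, float]:
    """Aggregate article-level metrics into one researcher-level summary,
    weighting each article by the author's contribution statement."""
    return {
        "articles": len(articles),
        "weighted_citations": sum(a.citations * a.contribution_weight for a in articles),
        "weighted_downloads": sum(a.downloads * a.contribution_weight for a in articles),
        "weighted_mentions": sum(a.altmetric_mentions * a.contribution_weight for a in articles),
    }

# Example: a sole-authored article plus an equally shared four-author article
print(researcher_profile([
    ArticleMetrics(citations=12, downloads=3400, altmetric_mentions=8, contribution_weight=1.0),
    ArticleMetrics(citations=40, downloads=9000, altmetric_mentions=25, contribution_weight=0.25),
]))
```

The value of an open, transparent standard is exactly that anyone could rerun this kind of calculation from published data and arrive at the same answer.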
I am sure we can do this together, but I doubt publishers can do it by themselves. At Springer Nature we are increasing our use of other journal-level and article-level metrics – including article usage and altmetrics, views and downloads, article and book chapter citations and references, and author contribution statements – and making them all available to authors, funders and institutions. But for this to gain real acceptance and traction within the research community, I expect that more input from the ultimate customers (i.e. authors, funders and institutions) and validation by an external body will be required.
I am very proud of what we do at Springer Nature and how we do it. The company remains committed to providing a wider and richer set of metrics across our subscription and OA models, and across our journals and books, so that our authors can measure the ‘impact’ of their research, researchers, as content consumers, can evaluate the standing of our journals, articles and books, and librarians can make informed purchasing decisions. For over 10 years, Nature editorials have expressed concerns about the overuse of Impact Factors and set out the case for a greater variety of metrics better suited to different purposes. The question I pose to others is this: how can we all work together to create open metrics that better meet the needs of academic appointment committees and grant award committees? Only once we have agreed on these, and they are available, can use of the Impact Factor return to its original and still valid purpose.