The Impact Factor (IF) or Journal Impact Factor (JIF) is a bibliometric, or scientometric, indicator that has been widely used as a proxy for the quality of a journal.
The JIF was introduced in the early 1960s by Eugene Garfield, the founder of the Institute for Scientific Information (ISI).
Despite the importance placed on the JIF by publishers, editors, and research contributors, it has always been a controversial metric.
In an interview, Eugene Garfield said that the JIF was created only to evaluate a journal and not the authors’ contributions.
This statement calls for a re-examination of the JIF as an assessment metric.
This article explores the JIF, its significance and reliability, and why it is considered a subject of concern for scientific progress.
What is a journal impact factor?
A journal impact factor is a metric used to gauge the relative importance of a journal within its field, based on the average number of citations its articles receive in a given year. It is calculated by dividing the number of citations the journal receives in the current year by the number of citable items it published in the previous two years.
One well-known source for a journal's impact factor is the Journal Citation Reports (JCR), published annually by Clarivate (formerly Thomson Reuters).
For the year 2021, the JIF is computed as follows:
A = number of citations received in 2021 by articles published in 2019 and 2020
B = number of citable items published in 2019 and 2020
C = impact factor for 2021 = A/B
The value of C is published in 2022, once all citations for 2021 have been indexed.
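As a quick illustration, here is a minimal sketch of that calculation in Python. The citation and item counts are hypothetical, invented purely for this example:

```python
# Hypothetical 2021 JIF calculation for an imaginary journal.
# The counts below are invented for illustration only.
citations_2021 = 1500   # A: citations received in 2021 by 2019-2020 articles
citable_items = 500     # B: citable items published in 2019 and 2020

impact_factor_2021 = citations_2021 / citable_items  # C = A/B
print(f"2021 JIF: {impact_factor_2021}")  # -> 2021 JIF: 3.0
```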
The role of the JIF in the scholarly community
The JIF was initially developed to help libraries decide which journals to index and purchase for their collections. In practice, though, it helped librarians identify journals by how widely they were read and cited, not by their quality.
In some academic disciplines, authors are expected to submit their papers to journals with a high JIF, especially if they are on the tenure track. Institutions often treat the JIF of an academic's publications as a major factor in decisions about hiring, promotion, funding, and incentive plans. Publishers, in turn, use a high JIF to promote their journals and attract qualified authors.
Questions about the reliability of the JIF
The extensive importance given to the JIF as a measure of a journal's reputation has led to misuse of the metric, potentially harming how individual researchers are evaluated. Because a single number carries so much weight for visibility and prestige, there have been instances of journals gaming it, for example by under-reporting the number of citable items or relying on self-citation to inflate the impact factor.
Scholars have begun to speak out against using the JIF as a reliable measure of research articles or an author's credibility, for several reasons.
- The JIF does not reflect the quality or actual citation counts of individual articles in a journal. It is an average, so a small number of highly cited articles can produce a high JIF even when most articles are rarely cited (see the sketch after this list).
- The JIF varies widely across disciplines and cannot be used to compare journals in different fields. For example, biochemistry and molecular biology articles are likely to receive about five times as many citations as pharmacy articles.
- There are few checks on the data journals supply, so manipulated figures can inflate the JIF.
- Review articles attract more citations, and hence more visibility, than original research. Some editors therefore publish more review articles than new research in order to achieve a high JIF.
- The criteria for deciding which items count as citable in the denominator of the JIF calculation are unclear.
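To illustrate the first point, here is a minimal sketch with invented citation counts showing how a single outlier article can dominate the average that the JIF reports:

```python
# Hypothetical citation counts for the 10 articles of an imaginary
# journal over the JIF window (invented numbers, for illustration only).
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 184]

mean = sum(citations) / len(citations)           # 20.0 -- the JIF-style average
median = sorted(citations)[len(citations) // 2]  # 2    -- a "typical" article

print(f"Mean (what the JIF reflects): {mean}")
print(f"Median (a typical article):   {median}")
```

A JIF-style average of 20 here says little about a typical article in this journal, which receives only a couple of citations.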
Due to the JIF’s questionable reliability, scientists, scholars, and academicians have suggested changes to the current system to reduce the dominance of the journal impact factor. Here are some recommended solutions:
- Facilitate new, improved metrics beyond the JIF, such as the Journal Usage Factor (JUF) and the Y-factor. PLOS Medicine supports the use of these metrics as alternative assessment indicators.
- Set up new assessment methods and metrics that measure the quality of individual research contributions instead of relying on the JIF. The San Francisco Declaration on Research Assessment (DORA) works toward this objective.
- Encourage funding institutions to ask applicants to submit their most relevant articles, rather than favoring work published in high-JIF journals.
- Promote the use of more detailed indicators, such as editorial board diversity, transparency of manuscript acceptance, data citation metrics, and independent analysis of research articles, practices that form the foundation of open science.
- Create indicators or metrics that are transparent, reasonable, fair, inclusive, and reproducible with a clearly stated set of limitations and uncertainties.
- Establish a not-for-profit governing organization that controls the use of metrics and indicators and spreads awareness about the best practices and standards in open publishing.
- Encourage stakeholders in the publishing industry to be a part of initiatives like DORA, the UK Forum for Responsible Research Metrics, and COPE to keep track of the developments in these areas.
Responsible use of new metrics for evaluating articles and journals is the need of the hour to overcome the ambiguity around the JIF. All members of the scientific community share responsibility for the appropriate use of indicators and must apply them with caution, as there is no one-size-fits-all approach to evaluating research across disciplines.
Connect with us to learn more about Kriyadocs.