Determining Quality in LIDT Scholarship: Using Multiple Metrics to Determine Rigor, Impact, and Prestige

This chapter discusses the importance of evaluating quality in Learning and Instructional Design Technology (LIDT) scholarship. The authors propose a framework that recognizes multiple forms of scholarship, including empirical research, applied and practical research, situated work, and theoretical work. They argue that LIDT is a complex field with diverse methods and indicators of quality, and that there is no single right way to do scholarship. The chapter also discusses the importance of evaluating journal quality and proposes three components of quality: rigor, impact, and prestige.  The authors provide guidance and frameworks for interpreting and evaluating scholarship, and suggest indicators of quality that can be used to make judgments about what to read and where to publish.


What is LIDT Scholarship?

The role of scholarship in any discipline is to create and share new knowledge about the discipline to improve theory, learning, and practice. Thus, Learning and Instructional Design Technology (LIDT) scholarship should create new theoretical, empirical, or practical knowledge for improving learning and performance, via the design, management, implementation, and evaluation of learning and performance. Under this definition, scholarship might be interpreted in a variety of ways, such as:

  • empirical research, focusing on psychology or human-computer interaction;
  • applied and practical research, to find out what and how a new type of technology works for individual learners as well as in particular contexts and settings;
  • situated work that helps move educational institutions toward a better future; or
  • theoretical work that helps society effectively grapple with ethical and strategic dilemmas that arise when technology and learning meet in unexpected ways.

Boyer (1990) proposed a framework that sought to more inclusively define scholarship as a combination of several distinct professional efforts, including (1) discovery (e.g., original or basic research), (2) integration (e.g., synthetic work), (3) application (e.g., work in schools or institutions), and (4) teaching and learning (e.g., pedagogical improvement). We share this view, and advocate for a broad view of scholarship in our field that recognizes many important aspects of knowledge creation beyond basic empirical research.

In addition, our field has historically evolved as a meta-discipline encompassing the histories of many different movements (West, 2018), such as instructional design, instructional science, teacher education, human-computer interaction, and media studies. There is also not a single scientific paradigm to which all researchers in the field adhere. Rather, our field exhibits a high degree of pluralism and is multiparadigmatic (Kimmons & Johnstun, 2019), embracing foundations and methods from fields as diverse as psychology, software engineering, human performance, and the humanities. This means that even among researchers in this discipline, assumptions, methods, and indicators of quality may vary depending upon goals and paradigmatic expectations (e.g., randomized controlled trials vs. design-based research vs. case studies vs. phenomenological inquiry vs. community-based participatory research).

So, what does all this mean? In our view, our field is uniquely complex, and there is not one right way to do scholarship. Thus, a more inclusive understanding of scholarship (e.g., Boyer, 1990) combined with the multiparadigmatic nature of educational technology means that there are many correct ways of doing good scholarly work in our field. Scholarship may look like a statistical analysis of groups, qualitative observation and interpretation of a phenomenon, action research on how to teach a course better (completed in partnership with a teacher), evaluation or design-based research completed while designing new learning activities or tools, rich descriptions of design case studies, or even philosophical development of new theories and frameworks. All of these forms of scholarship are important to the progress of our field and should be valued by journal reviewers, individual scholars, and administrators at universities.

However, even though all types of scholarship should be valued, that does not mean all scholarship is of equal quality or that one should spend equal time reading each of the hundreds of journals that could be considered part of our discipline. As such, we need guidelines to understand what quality scholarship in our field looks like given its diverse nature.

When I Learned the Importance of Evaluating Journal Quality

When I (Richard) was a student, I spent a year in one of my internships collecting and analyzing data evaluating the implementation of a new technology. I recognized the findings from this project could be interesting to others and should be published. However, I was overwhelmed by all of the different journals in our field, and did not know how to pick a good one. I searched on the internet and found a journal with a name that seemed to match the topic of the project. Excited, I submitted my article!

I heard back . . . nothing.

After many months, I emailed the editor of the journal and asked if they had received my article, and whether he had heard back from the reviewers yet. He answered, “Yes, we already published that article and it’s online!”

I was shocked, and despondent, because I knew what this meant. This journal was so desperate for content that they would publish something without solid peer review, revisions, or any quality control. I worried that this meant my scholarship would be wasted. I was right. The journal went out of business a few years later, and almost nobody has read or used the information from that article ever since.

Meanwhile, I wrote a second article about the project, using a different part of the dataset, and submitted it to a leading journal in our field. That paper is now one of the most cited papers I have ever written.

I learned the important lesson that doing high quality scholarship matters, but so does picking the right venue to share that scholarship. Ever since, I have made it a goal to help myself, and others, understand how to evaluate the right fit of a journal for the scholarship we do. This led me to edit a special series in Educational Technology evaluating different journals in our field (West, 2011; West, 2016), and later to conduct studies looking at patterns across research in our field.

In the following sections, we provide some guidance and frameworks for interpreting and evaluating scholarship and suggest indicators of quality that we can use to make judgments about what to read and where to publish.

Defining Quality in Scholarship

Judging quality in scholarship is difficult because what we consider to be quality is complex and context-dependent. For these reasons, scholars in our field have proposed that there is not a single statistic that we can use. Instead, a better way of discerning quality is to break it down into three components: rigor, impact, and prestige (Rich & West, 2018). In this section, we will define these components, identify indicators for each, and provide interpretations to assist LIDT scholars in understanding the overall quality of scholarly work.


Rigor

Because scholarship can be published via a variety of venues (e.g., book, journal, blog, magazine, conference proceeding, web-based seminar, podcast, audiobook, video-based exhibit/demonstration), the procedural rigor of a venue is often used as a proxy for discerning quality. More rigorous outlets tend to rely on specific criteria for publication and some form of “critical” peer review (e.g., double-blind), ostensibly to apply high standards of judgment to discern what will and will not be accepted to publish (Rich & West, 2018; West & Rich, 2012). Additional criteria of rigor may include how selective the journal is, reviews by scientific or institutional review boards, conflict of interest declarations, and how the publication of the research is funded. In addition, the quality and reputation of the expert making the rigor judgment can be helpful. For example, having a well-known editor review the work, completing a thesis under the guidance of a leading scholar, or having an exhibit accepted by a skilled curator could be examples of rigor.

Primary/Critical Criteria for Establishing Rigor

There are many factors that may be considered when establishing the rigor of a publication venue, but not all are equal. For example, it is important that a venue require declarations of conflicts of interest and that it not extort payment from scholars in return for positive reviews of their work. However, while these criteria are important, they are not sufficient or equivalent to other criteria, such as the overall acceptance rate of the journal. In this section, we identify various criteria for establishing rigor and also discuss their relative importance to one another.

Acceptance Rates

Particularly in rivalrous publishing situations (such as print journals), competitiveness of the publishing selection process is commonly used as an indicator of rigor with lower acceptance rates being seen as more rigorous. Though there is no standard interpretation of acceptance rates across the field, journals and conferences with a historical acceptance rate below 20% are commonly viewed as exhibiting higher rigor, and those with an acceptance rate higher than 35% are commonly viewed as exhibiting lower rigor (cf., Figure 1). Interpretation of acceptance rates, however, should always be contextual. Many online journals, for instance, might have a higher acceptance rate than print-only journals simply because they have the capacity to print more articles, whereas a print-only journal might have acceptance rates below 10% simply because it can only accept a handful of articles each year.

Finally, other types of scholarship outside of journal articles may have different standards. For example, books may have higher acceptance rates even though they can be quite rigorous, because authors self-select for certain books to submit based on the perception of fit for their work. In fact, many online journals in recent years (such as those managed by the Public Library of Science) have moved to models of publishing wherein they will accept all articles that meet their scientific rigor requirements, thereby removing the relevance of acceptance rates altogether as an indicator of rigor.

Figure 1

Rigor Standards According to Journal Acceptance Rates

Secondary Considerations for Establishing Rigor


Review Methods

The most common criterion used to determine rigor is the method of review, wherein double-blind or juried review processes are seen as indicators of high quality (e.g., a research journal), followed by single-blind review (e.g., a trade magazine), editorial review (e.g., a chapter in an edited book), and finally self-publishing (e.g., a white paper or blog post; see Figure 2). The principle behind this hierarchy is that having topical experts who are external to the project critically evaluate the work, without knowing who the authors are, is essential for showing that the work is objectively rigorous. In addition, there is wide variety in how journals calculate their acceptance rates, and journals that openly display their process for determining acceptance rates (e.g., the rate of desk rejections, criteria for rejecting or accepting articles) demonstrate greater rigor through that transparency. Finally, some journals rely on frequent special issues to generate more content, and these special issues may have inflated acceptance rates above what is typical for the journal. When possible, it is important to consider these factors and the true acceptance rate for the issue you might be publishing in.

Figure 2

Rigor of Review Methods


Conflicts of Interest

Since educational technology has very pronounced market forces influencing the creation and adoption of educational interventions (such as software), scholars are often not immune to the effects of these forces in their own work. This means that if a scholar has a financial interest in the success of an educational technology startup or is hired by a company to evaluate its products, they should disclose this conflict of interest. For this reason, many venues require authors to explicitly declare any conflicts of interest. Declaring one’s conflicts of interest, when they exist, is a necessary prerequisite for considering the scholarship rigorous because it makes transparent potential biases that might influence the accuracy or objectivity of the work. One common potential conflict of interest arises in special issues where authors review each other’s work within the same issue. While this may help them develop their ideas and create better scholarship, it typically does not result in a truly rigorous screening process.

Venue Funding Models

Historically, publishing venues have been funded by consumers of scholarship, such as journal readers or conference attendees, but in recent years, some venues have moved to author-funded models wherein authors must pay the publisher to print their work. Such approaches are generally seen to have low rigor (commonly called “pay-to-publish” or predatory models) because the publisher is incentivized for simply publishing high volumes of work and not for ensuring that the work is of high quality (i.e., that anyone will actually want to read it).

One caveat to this is that some open-access venues have moved to models of requiring authors to pay to make their articles open to the public. In terms of rigor, this sort of model is viewed differently than a pay-to-publish model and may still be rigorous provided that the venue (a) relies upon other rigor standards and (b) rejects work that does not meet these standards. For many rigorous venues, this open access fee is optional, thereby reducing the incentive for the publisher to produce poor quality work (see Table 3).


Table 3

Rigor of Venue Funding Models

Model  Rigor
Readers or attendees pay to access the scholarly work   High
Authors elect to pay to make their work (that has already been accepted for publication) open access   High
Authors pay to have their work reviewed or published   Low


Example Cases of Rigor

Taken together, these considerations can be used to build a case for the rigor of a variety of scholarly works. Some examples may be found in Table 4:


Table 4

Rigor of Some Example Publication Cases

Case   Rigor
A research article is published in a journal that undergoes double-blind peer review and has an acceptance rate of 15%.   High
A chapter is accepted for publication in a book after undergoing double-blind peer review.   Moderate
An open-access journal article is published after it undergoes double-blind peer review and authors pay a processing fee.   Moderate
An opinion piece is published in an online magazine with an acceptance rate of 30% after editorial review.   Moderate-Low
A presentation is accepted to a conference after undergoing single-blind review, and the conference has an acceptance rate of 70%.   Low
An organization hires a researcher to write a white paper about their product or service.   Low
An author self-publishes a research article on their personal or institutional website.   Low
An author pays a fee to have their work published in a journal or magazine.   Low

Impact

Impact is the influential contribution that a scholarly work makes to the progress of a discipline (Rich & West, 2018; West & Rich, 2012). This might be understood at multiple levels of analysis:

(a) the impact that an individual work, like a journal article or book chapter, has on subsequent work;

(b) the aggregate impact that all of the works published in a particular venue, such as a journal or conference, have on the field; or

(c) the historical pattern of impact that a specific scholar has on the field through their work.

In addition, scholarship may have impact in various areas, as evidenced by a variety of data sources, such as the following:

  1. Teaching (often shown through adoption rates, embed rates in learning management systems, etc.),
  2. Research (citations, view rates, download rates, share rates, etc.),
  3. Policy (legislative bills, institutional policies, etc.), or
  4. Practice (evidence of impacting practice, social media traffic, practitioner adoption of ideas, etc.).

Historically, the most common indicator of impact has been citation counts or the number of times a work is cited by other scholarly works, either on its own or in aggregate (see Table 5). In more recent years, alternative data metrics (a.k.a. “alt-metrics”) have also been proposed as more direct or additional indicators of impact, such as how often a work is downloaded, accessed, shared on social media, or read.


Table 5

Common Impact Metrics by Level of Analysis and Data Type

Citations
  Individual Work: Citation Count
  Venue: Journal Citation Indicator; Journal Impact Factor; Journal Impact Factor (5 years); h-index; h5-index
  Scholar: Citation Count; h-index; h5-index; i10-index; i10-index (5 years); Publication Count

Alt-Metrics
  Individual Work: Social Media Shares; Downloads; Access or Reads
  Venue: Aggregate Social Media Shares; Aggregate Downloads; Aggregate Access or Reads
  Scholar: Aggregate Social Media Shares; Aggregate Downloads; Aggregate Access or Reads


Individual Work Metrics

Citation count

The most common impact indicator for an individual work like a journal article or book chapter is its raw citation count, which might be provided by a variety of services such as Google Scholar and the Scopus database. What counts as a citation varies by service, as some services, like Scopus, will only count citations from traditional scholarly works and others, like Google Scholar, will also include citations from blog posts, websites, and white papers. For this reason, any comparisons of citation counts should happen within the systems that are reporting the citations (e.g., Google Scholar counts should not be compared to Scopus counts), and there is no standard expectation for how often works in the field should be cited. Additionally, a variety of other factors influence citation counts, such as the type of work (e.g., literature reviews and theoretical work are cited more often than empirical studies), topical norms (e.g., hyped topics will receive greater attention than more academic topics), and even stylistic elements of the work, like title length, presence of a subtitle, abstract length, and readability (Kimmons & Larsen, 2021). Taken together, these realities suggest that evaluating individual work based on citation counts is complex and may be most appropriate in isolated comparison cases, such as when comparing the relative impact of two journal articles about the same topic that were published in similar journals over the same timeframe.

In general terms, articles in educational technology journals have exhibited increasing annual citation counts over the past five decades (Kimmons & Larsen, 2021), but current norms may be used as a simple indicator of an article’s performance in comparison to other articles published near the same time. Since 2005, the average number of citations per year of articles within the Scopus database has remained somewhat steady at around 2.9 per year (SD = 4.9; Kimmons & Larsen, 2021). We can use this to roughly separate articles into three groupings of Low, Medium, and High Impact by setting our cutoff points at roughly one-half of a standard deviation away from the mean (Table 6). Using this calculation, we could discern, for instance, that an article published 9 years ago with 270 citations (30 citations per year) has had a high impact, an article published 4 years ago with 4 citations (1 citation per year) has had a moderate impact, and an article published 15 years ago with 3 citations (0.2 citations per year) has had a low impact. Note that these numbers would only reflect citations within the Scopus database, as citation patterns in Google Scholar or other databases would be very different (and in the case of Google Scholar, much more inflated).


Table 6

Impact of Annual Journal Article Citation Counts per Year in Scopus

Calculation       Average Citations per Year Since Publication (2000 to 2020)     Impact
> M + 0.5SD       5.4 or higher                                                   High
M ± 0.5SD         0.5 – 5.3                                                       Moderate
< M − 0.5SD       0 – 0.4                                                         Low
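
To make the arithmetic behind Table 6 concrete, the following short Python sketch (ours, not part of the original framework; the function name is illustrative) applies the M ± 0.5SD cutoffs to the three worked examples above.

```python
# Illustrative sketch: applying the Table 6 cutoffs, which sit roughly half a
# standard deviation on either side of the Scopus mean of 2.9 citations per
# year (SD = 4.9; Kimmons & Larsen, 2021).

MEAN = 2.9  # average citations per year for Scopus-indexed articles since 2005
SD = 4.9    # standard deviation reported for the same window


def impact_tier(total_citations: int, years_since_publication: int) -> str:
    """Map an article's average annual citation rate onto the Table 6 tiers."""
    rate = total_citations / max(years_since_publication, 1)
    if rate > MEAN + 0.5 * SD:   # above roughly 5.4
        return "High"
    if rate >= MEAN - 0.5 * SD:  # roughly 0.5 to 5.3
        return "Moderate"
    return "Low"                 # below roughly 0.5


# The worked examples from the paragraph above:
print(impact_tier(270, 9))   # 30 citations/year  -> High
print(impact_tier(4, 4))     # 1 citation/year    -> Moderate
print(impact_tier(3, 15))    # 0.2 citations/year -> Low
```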


Alt-Metrics

Beyond citation counts, other alternative metrics have been proposed as better or more direct indicators of impact. This is partially because the foundational assumption made with citation counts is that scholars only have their desired impacts on the world if their work is cited by other scholars. However, much of the work that scholars do in our field is intended to influence things other than future scholarly writings, such as changes to professional practice, institutional norms, or learner behaviors. For this reason, indicators like the number of times a scholarly work was accessed, shared, or read might sometimes be a better indicator of impact if the goal of the work was to reach professionals or the public. This is becoming especially meaningful as search engine optimization combines with open-access, peer-reviewed publication to put scholarship directly in front of those audiences.

Analytics such as download counts, itemized population access, and peaks of session visits can become part of alt-metrics. Similar to the problems of interpreting citation counts, however, there are no standards for interpreting norms for these alt-metrics, making them most useful for isolated comparisons of specific works (e.g., which of two competing book chapters is being read more often by practitioners, or which book chapters from the same publisher, within the same discipline, are most impactful). Additionally, because many of these metrics exhibit far more range than citation counts (e.g., some works might be downloaded millions of times), comparative impacts should be considered logarithmically or exponentially rather than linearly. Table 7 provides an example of how this principle might be applied to a metric like PDF downloads or video views. As such, an educational YouTube video that had 120,000 views [log(120,000) = 5.08] might be considered as having Moderate-High impact, while a PDF that was downloaded 5,000 times [log(5,000) = 3.7] might have a Moderate-Low impact. Other scholars, or departments, may have different interpretations of what is considered “low” or “high,” but given the potentially viral nature of internet content, a logarithmic interpretation seems most appropriate. This is an area that will continue to merit additional study and deliberation.


Table 7 

Logarithmic Interpretation of High-Variability Alt-Metrics (e.g., Downloads, Views)

Raw Value      Exponential Value     Logarithmic Value     Impact
10             10^1                  1                     Low
100            10^2                  2                     Low-Moderate
1,000          10^3                  3                     Moderate-Low
10,000         10^4                  4                     Moderate
100,000        10^5                  5                     Moderate-High
1,000,000      10^6                  6                     High-Moderate
10,000,000     10^7                  7                     High
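
A minimal sketch of this logarithmic reading (ours; the function name is our own, and we assume from the chapter’s two examples that the log value is truncated to its whole-number order of magnitude rather than rounded) is shown below.

```python
import math

# Illustrative sketch: interpreting high-variability alt-metrics such as
# downloads or views logarithmically, following Table 7.

TIERS = {
    1: "Low",
    2: "Low-Moderate",
    3: "Moderate-Low",
    4: "Moderate",
    5: "Moderate-High",
    6: "High-Moderate",
    7: "High",
}


def altmetric_tier(raw_count: int) -> str:
    """Map a download/view count onto the Table 7 tiers via its order of magnitude."""
    if raw_count < 10:
        return "Low"
    order = min(math.floor(math.log10(raw_count)), 7)
    return TIERS[order]


print(altmetric_tier(120_000))  # log10 ≈ 5.08 -> Moderate-High
print(altmetric_tier(5_000))    # log10 ≈ 3.70 -> Moderate-Low
```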


Publication Outlet Metrics

Many services also aggregate citation counts in a variety of ways. This is done in part to overcome the difficulties associated with determining impact of individual scholarly works but is also done as a predictive measure to signal to future authors the potential value of publishing in a particular publication outlet.

Journal Citation Indicator and Journal Impact Factor

Perhaps the most well-known aggregate metric is the Journal Impact Factor (JIF™), which is calculated by dividing the number of citations a venue received in the current year to items it published in the two previous years by the number of items it published in those two years. So, if a journal received 10 citations this year and published 6 articles last year and 4 articles the year before, then its Impact Factor would be 1.0. However, because the JIF™ can be highly field-dependent and should not be used to compare the relative impact of journals in two different fields (e.g., literature and medicine), the owners (Clarivate) have now proposed a Journal Citation Indicator (JCI), which normalizes the statistic across disciplines (see https://clarivate.com/blog/clarivate-announces-changes-to-the-2023-journal-citation-reports-release).
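
As a minimal illustration of this arithmetic (our sketch only; the function name is ours and not Clarivate’s), the calculation reduces to a single division:

```python
# Illustrative sketch of the Impact Factor arithmetic described above:
# citations received this year to items from the two previous years, divided
# by the number of items published in those two years.

def journal_impact_factor(citations_this_year: int,
                          items_last_year: int,
                          items_two_years_ago: int) -> float:
    return citations_this_year / (items_last_year + items_two_years_ago)


# The chapter's example: 10 citations against 6 + 4 articles
print(journal_impact_factor(10, 6, 4))  # 1.0
```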

Clarivate calculates the JIF™ metric and uses it to rank journals in various fields, but the metric only counts citations from journals within the Clarivate database. This means the JCI/JIF™ is highly limited in the conclusions it can support about scholarship in general. Currently, the top quartile of journals in education and education research (SSCI) exhibit a JIF™ of 3.6 or higher, and journals in the top quartile are generally viewed as high impact (see Table 8; Clarivate, 2024). However, the primary drawback of using impact factors to determine the impact of a venue is that citation counts are highly variable, and overall impact factors might be heavily influenced by relatively few, highly cited works. For example, if a journal published 10 articles in the counting window with one receiving 100 citations and the rest receiving 0, then its Impact Factor would be 10, the same as a journal that published 100 articles that each received 10 citations.

These impact factors have also been “inflating” over time; one report in the field of ecology estimated that journal impact factors in that field were increasing by an average of 0.23 per year (Neff & Olden, 2010). While we are not aware of similar data for the field of education, we have also noted sharp increases in impact factors. For example, Educational Technology Research and Development’s impact factor has increased from about 1.5 to over 5.0 in the last decade. Given this, it is judicious when evaluating the quality of publication venues to assume some annual inflation, perhaps at least 0.25 per year.

h-index and h5-index 

To reduce such volatility, the h-index is often used to indicate the number of works in a venue that are being cited. Its value is also calculated from raw citation counts, but h equals the number of works in the target year that have received at least h citations each. So, an h-index of 1 would mean that the journal had at least 1 article with at least 1 citation, while an h-index of 10 would mean that the journal had at least 10 articles with at least 10 citations each.
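
The following short sketch (ours; an illustration of the general h-index definition rather than any particular service’s implementation) shows how h is computed from a list of per-article citation counts.

```python
# Illustrative sketch: h is the largest number such that h works each have at
# least h citations.

def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h


print(h_index([1]))                                        # 1
print(h_index([25, 18, 12, 11, 10, 10, 10, 10, 10, 10]))   # 10
print(h_index([100, 0, 0]))  # 1: a single highly cited article still yields h = 1
```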

In addition, the h5-index is a variation of the h-index that uses a five-year frame to further decrease volatility in results. Google Scholar uses the h5-index to identify the top journals in various fields. Currently the top 20 journals in educational technology exhibit an h5-index of 41 or higher (Google Scholar, 2024), meaning that in the past 5 years, these journals have each published at least 41 articles that have been cited at least 41 times (see Table 8).

The primary drawback of the h-index and h5-index is that their values are dependent upon the number of works published in the venue, which means that if a journal only publishes 10 articles per year, then its h-index could never exceed 10 (even if each article was cited thousands of times). For this reason, it is important to note that online journals that publish a higher volume of articles will often perform better than print-only journals or those that publish relatively fewer articles. Though there is no standard for evaluating h-indices in the field, a simple rule of thumb is provided in Table 9 for these various indices.

Table 8

LIDT Journals in the Top Quartile of Journal Citation Reports, in the Education and Education Research (SSCI) Category, With Comparison Metrics From Google Scholar and SciMago.

Journal                                              JIF     Google Scholar h5     SciMago
Computers & Education                                12.0    147                   3.68
Internet and Higher Education                        8.6     51                    3.33
International Journal of Ed Tech in Higher Ed        8.6     68                    2.05
Distance Education                                   7.3     43                    1.88
Computer-Assisted Language Learning                  7.0     64                    1.75
International Journal of STEM Education              6.7     59                    1.67
British Journal of Educational Technology            6.6     86                    2.12
Learning & Instruction                               6.2     65                    2.4
J. of Computing in Higher Education                  5.6     38                    1.34
Education & Information Technology                   5.5     91                    N/A
Journal of Research on Tech in Education             5.1     38                    1.35
ETR&D                                                5.0     66                    1.52
Technology, Pedagogy, & Education                    4.9     39                    1.26
Journal of Education Computing Research              4.8     52                    1.67
Revista Iberoamericana de Educación a Distancia      4.6     47                    0.99
Journal of Science Education & Technology            4.4     40                    1.28
International Journal of CSCL                        4.3     29                    2.16
Australasian Journal of Ed Tech                      4.1     51                    1.1
Educational Technology & Society                     4.0     49                    1.05
Journal of Learning Sciences                         3.8     34                    2.27
IEEE Transactions on Learning Technologies           3.7     39                    1.14


Table 8 allows us to compare citation metrics across three popular databases. Taking this data into account, we recommend the standards in Table 9 for identifying high, medium, and low impact venues according to citation metrics.

Table 9

Suggested Standards for Evaluating Publication Venues According to Citations

JIF Standard        GS h5 Standard      SciMago Standard     Quality Claim
> 3.5               > 40                > 0.7                High impact venue
> 1.5 & <= 3.5      >= 25 & <= 40       > 0.2 & <= 0.7       Medium impact venue
< 1.5 or none       < 25 or none        < 0.2                Low impact venue
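
A minimal sketch of how these cutoffs might be applied (ours; the threshold values come from Table 9, but the function and metric names are our own, and the table does not say how to reconcile metrics that disagree) follows.

```python
# Illustrative sketch: classifying a venue on one available citation metric
# using the Table 9 thresholds.

THRESHOLDS = {
    # metric: (high if above this value, medium if at/above this value, else low)
    "jif":     (3.5, 1.5),
    "gs_h5":   (40, 25),
    "scimago": (0.7, 0.2),
}


def venue_impact(metric: str, value: float | None) -> str:
    """Classify a venue as a high, medium, or low impact venue on one metric."""
    high, medium = THRESHOLDS[metric]
    if value is None:       # unindexed or unreported
        return "Low impact venue"
    if value > high:
        return "High impact venue"
    if value >= medium:
        return "Medium impact venue"
    return "Low impact venue"


print(venue_impact("jif", 5.0))       # e.g., ETR&D in Table 8 -> High impact venue
print(venue_impact("gs_h5", 30))      # Medium impact venue
print(venue_impact("scimago", 0.15))  # Low impact venue
```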


Individual Scholar Metrics

A variety of metrics provided by Google Scholar and other services show the overall impact of an individual scholar over time. The most common metrics that these services provide include citation counts, h-indices, and i10-indices.

Citation Count

Just as the impact of individual scholarly works is sometimes determined by raw citation counts, these counts are sometimes aggregated for scholars across all of their works to evidence overall scholarly impact, either over all time or for a five-year window. Similar to how citation counts of individual works are influenced by a variety of factors, the same is true when evaluating an individual scholar. For this reason, care should be taken in comparing scholars’ overall citation counts when they work in different fields, study different topics, or publish in different venues. Also, scholar citation counts can be heavily influenced by outlier works with high citations. For instance, a scholar might average 10 citations per article across 10 articles they have published (100 citations), but if they write a piece that is picked up by a national news outlet, this might cause that article to be cited 1,000 or more times (thereby increasing the author’s overall citation count eleven-fold due to one article). Therefore, we recommend exercising caution in interpreting citation counts too rigidly in comparisons of scholars.

h-index and h5-index

To address such variability, the h-index and h5-index can be used to evaluate scholarly impact, wherein h equals the number of works authored by the scholar that have received at least h citations. This reduces drastic variability in metrics from singular works and may be seen as a more stable, long-term way of evaluating scholars. In the case of the previous example, the overall citation count for the scholar jumped eleven-fold (from 100 to 1,100) from a single article, but their h-index would only increase from 10 to 11 if all of the other articles received at least one more citation. In this way, the h-index requires citation counts to be interpreted through a lens that uses the scholar’s expanding volume of work as a control and resists giving too much weight to a single work.

i10-index

One challenge with h-indices is that each subsequent increase on the scale is more difficult to achieve than the previous one, which may reduce variability among scholars too strictly for some applications. For this reason, the i10-index has also been proposed; it simply represents the number of works published by the scholar that have received at least 10 citations, either over all time or within a five-year window. The assumption and compromise that the i10-index makes is that a scholar’s impact should be evaluated in terms of the volume of work (i.e., number of publications) that has at least some impact, and that the impact of every work ever produced by the scholar should not have to increase in order for new work to be valued as adding impact (as the h-index effectively requires).
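
A minimal sketch of the difference (ours; the function name is illustrative, and the example reuses the hypothetical scholar described above) is shown here.

```python
# Illustrative sketch: the i10-index simply counts works with at least 10
# citations, so new work can raise it without every older work needing to
# accumulate further citations.

def i10_index(citation_counts: list[int]) -> int:
    return sum(1 for citations in citation_counts if citations >= 10)


# The scholar from the earlier example: ten articles with 10 citations each
# plus one widely reported piece with 1,000 citations.
print(i10_index([10] * 10 + [1000]))  # 11 (while the h-index would remain 10)
```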

Scholar Profiles

Some databases provide scholars the opportunity to create a profile and then see how their scholarship compares to others within the field. This normative approach provides the benefit of comparative judgment, which adjusts for rising “citation inflation” over time. However, such profiles can also be erroneously used to compare a scholar in a very broad field, like education, with a scholar in a very niche field, like educational technology or cultural studies, which may disadvantage the niche scholar. Some examples of these services include Web of Science, which uses data within its own databases, and Exaly.com, which draws combined data from Google Scholar, Web of Science, Publons, Scopus, Crossref, and ResearchGate. Other examples are TopResearchersList.com, which uses data from Elsevier, and ScholarGPS, whose algorithms categorize the “entire universe of scholarly research into 14 Fields, which are subdivided into 177 distinct Disciplines.”


Figure 3.

Author summary from Exaly.com

Prestige

Prestige refers to a feeling of respect or admiration toward the quality of a scholarly work or venue (Rich & West, 2018; West & Rich, 2012). This indicator can be more qualitative or subjective in nature than rigor or impact and may be discerned from a variety of indicators, such as type of publication, professional recognizability of a venue, reputation of a venue’s editorial board, inclusion in indexing services, or association with respected professional organizations. Sometimes there are studies that document the prestige of various journals by surveying professors or professionals in a field about what they feel are the most important journals to read (see for example, Wilson & Ritzhaupt, 2024). Typically, a scholar makes a claim about how prestigious a venue is based on their judgment of the qualitative data. This is then confirmed through a review by peers/outside experts, such as through external scholar letters during a tenure review or by colleagues within a department. 

 

Figure 4.

Prestige of Some Example Venue Cases




Predatory Journals

With the increased demands on scholars to publish, it is not surprising to see a host of new journals and publishers emerge. Some of these new journals are run by legitimate scholarly organizations, but many seek to make a profit by taking advantage of overly anxious scholars trying to publish. As a new scholar, you should be particularly wary of potential “predatory” publishers. Beall (2015) described predatory journals as “scholarly open-access journals that exploit the gold (author-pays) model of scholarly publishing for rapid and easy profit” (p. 473), noting that “[b]ecause predatory publishers want to earn as much money as possible, they typically perform a weak or fake peer review so they can accept articles and earn the fees from the authors” (p. 474).

Many organizations are now blacklisting predatory journals, which means they may not count towards grant funding in some countries (Chawla, 2023) and they may not be indexed in common databases (see, for example, the list of recently deindexed journals from Web of Science). Because these journals and publishers are being delisted, and not “counted” as legitimate scholarship, they have low prestige. Similarly, other journals from publishers such as Frontiers, MDPI, and Hindawi, among others, are viewed with skepticism by many scholars and would be considered low prestige in many disciplines.

How can you identify a predatory journal? Leonard et al. (2021) provided 10 rules that can be synthesized into our Rigor, Impact, Prestige categories:

Predatory as Lack of Rigor

  • Be skeptical of unsolicited invitations to submit to a journal. This indicates the journal may not have the strength to curate quality articles on its own without aggressive marketing.
  • Evaluate the journal’s peer review process to see if it employs anonymous peer review, and if this process is defined in a transparent way. Does the journal list who the editorial board and/or reviewers are? Do you recognize these scholars as experts in the field?
  • Review the qualifications and CV of the editor of the journal, and see if there is a description of the editorial process. How strong is the scholar at the helm?
  • Does the journal charge for the opportunity to be reviewed or published? This would indicate a lack of rigor and a pay-to-publish model.

Predatory as Lack of Impact

  • How accessible are the articles in the journal? Are back issues easy to find? Is the journal indexed in major databases?
  • Verify claims to citation statistics, as some predatory journals may falsify claims.

Predatory as Lack of Prestige

  • Identify the publisher, and research the prestige of this publisher online. Do other scholars consider this publisher predatory? When in doubt, ask your university librarian for your discipline, as they often are up to date on which journals are considered predatory.
  • Review articles in the journal. Do you find them to be of good quality? Do you see articles from this journal cited by other authors you respect?

Quality as Rigor + Impact + Prestige

With this foundational understanding of Rigor, Impact, and Prestige, we can now consider Quality as a construct of these three components.

Example Cases of Quality

Table 10 provides a few simple cases of how these three constructs might be jointly considered in evaluating the overall quality of a scholarly work. However, it is important to note that how Rigor, Impact, and Prestige look and should be weighted may vary in different cases (e.g., rigor in a journal article might mean something quite different than rigor in a grant proposal). In the next section, we will provide brief explanations of how different types of works might be evaluated using this framework.

Table 10

Quality of Some Example Cases

Case                                                                               Rigor           Impact          Prestige    Quality
Highly-cited journal article in the flagship journal in a discipline               High            High            High        High
Journal article in a leading practitioner journal                                  Moderate-High   Moderate        Moderate    Moderate
Book chapter in a widely-read, edited handbook endorsed by a major organization    Moderate        Moderate        High        Moderate
Highly-cited, highly-read chapter in a peer-reviewed open textbook                 Moderate-High   High            Low         Moderate
Presentation at an international convention                                        Low-Moderate    Low-Moderate    Moderate    Low-Moderate
Highly-viewed YouTube video                                                        Low             High            Low         Low-Moderate
Blog post                                                                          Low             Low             Low         Low

Openness

We believe it is worth briefly mentioning the role that open scholarship may play in judging the quality of scholarship. Ultimately, we argue that being an open publication does not inherently make scholarship of high or low quality. There is sometimes bias against open publications as being of lower quality than scholarship from pay-walled publications, but that bias reflects neither merit nor an understanding of what constitutes actual quality in scholarship. Using the Rigor, Impact, and Prestige criteria in this chapter can help a scholar make the case for their open scholarship by asking the questions that really matter: Was this publication outlet rigorous and selective? What was the impact of the scholarship? What do others in the field think about it?

While being open does not inherently make the scholarship high quality, we do believe it can be helpful for various reasons. First, openly accessible articles, chapters, and other forms of scholarship are more easily discoverable by a wider variety of readers, thus creating more impact. Huang et al. (2024), for example, looked at citation counts for 19 million outputs from 2010–2019 and found that open publishing led to (a) significantly higher citation rates and (b) greater diversity of citing institutions, countries, and regions. Second, openly licensed scholarship can be translated and repurposed, thus increasing its potential impact and usage. Third, openly licensed scholarship can be modified by other scholars, leading in many cases to higher quality work that might be considered more rigorous as it is revised and repurposed by other scholars, showing that they not only approve of the research but also reuse it in their own scholarship. Fourth, open scholarship can be more easily updated, making it more relevant and current.

For all of these reasons, we believe adopting an open-first mentality toward research and publication will encourage progress and increase the overall quality of scholarship in our field. In terms of the Rigor, Impact, Prestige framework, the biggest hindrance in our opinion to open scholarship is in the area of prestige. However, previous biases against open scholarship are shifting, and many universities are beginning to see open scholarship as a worthy end goal. The University of British Columbia is one example, as they have published their rationale for encouraging open scholarship (see https://pose.open.ubc.ca/home-page/getting-started/why-open-scholarship-matters/).

As open scholarship becomes more accepted, our discipline will need criteria for determining what counts as high, mid, or low quality openness. We propose that at a minimum these criteria should address the level of access, licensing, open science practices, and localization provided. For access, a journal that is free to access without a paywall or author fees would be considered high quality, whereas requiring authors or users to pay for access would be low quality. Mid-tier quality may be free access, but through a link or other method that limits discoverability. For licensing, high quality would be publishing with a Creative Commons license (we recommend CC-BY to allow for translation and maximum reuse). A system that allows the author to retain the copyright could be mid-tier, and a journal that retains the copyright itself would be low quality. For open science, high quality would mean the journal requires adherence to open science practices, including archiving of protocols and instruments (for example, in something like the Open Aims book) and archiving of data for future replication and analysis. For localization, high quality could involve authors collaborating with local scholars to localize content for their disciplinary area, geographic region, or population. Mid-tier quality would be a situation where the author strives to localize content without this collaboration, and low quality would be unlocalized content. There may be additional criteria to consider as well, and we welcome scholarly dialogue to explore the possibilities of increasing quality openness in the scholarship of our field.

References

Beall, J. (2015). Predatory journals and the breakdown of research cultures. Information Development, 31(5), 473–476.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Carnegie Foundation for the Advancement of Teaching.

Chawla, D. S. (2023, October 17). Malaysia won’t pay for researchers to publish in certain journals. Chemical & Engineering News, 101(35).

Huang, C. K., Neylon, C., Montgomery, L., Hosking, R., Diprose, J. P., Handcock, R. N., & Wilson, K. (2024). Open access research outputs receive more diverse citations. Scientometrics, 1–21.

Hutchings, P., Huber, M. T., & Ciccone, A. (2011). The scholarship of teaching and learning reconsidered: Institutional integration and impact (Vol. 21). John Wiley & Sons.

Kimmons, R., & Johnstun, K. (2019). Navigating paradigms in educational technology. TechTrends, 63(5), 631–641. https://doi.org/10.1007/s11528-019-00407-0

Kimmons, R., & Larsen, R. (2021). Research impact metrics: A 50-year analysis of education research article feature effects on citation counts. In R. Kimmons & J. Irvine (Eds.), 50 years of education research trends. EdTech Books. https://edtechbooks.org/50_years/impact_metrics

Leonard, M., Stapleton, S., Collins, P., Selfe, T. K., & Cataldo, T. (2021). Ten simple rules for avoiding predatory publishing scams. PLoS Computational Biology, 17(9), e1009377.

Neff, B. D., & Olden, J. D. (2010). Not so fast: Inflation in impact factors contributes to apparent improvements in journal quality. BioScience, 60(6), 455–459.

Rich, P. J., & West, R. E. (2018). Rigor, impact, and prestige: A proposed framework for evaluating scholarly publications. In R. West (Ed.), Foundations of Learning and Instructional Design Technology. EdTech Books. https://edtechbooks.org/lidtfoundations/qualities_in_academic_publishing

West, R. E. (2011). About this article and new series. Educational Technology, 51(4), 60.

West, R. E. (2016). Insights from looking at 22 journals: Conclusion of Educational Technology Journal Series. Educational Technology, 56(1), 41–45.

West, R. E. (2018). Foundations of Learning and Instructional Design Technology (1st ed.). EdTech Books. https://edtechbooks.org/lidtfoundations

West, R. E., & Rich, P. J. (2012). Rigor, impact, and prestige: A proposed framework for evaluating scholarly publications. Innovative Higher Education, 37(5), 359–371. https://link.springer.com/article/10.1007/s10755-012-9214-3