Methodology

How Did We Conduct Our Analyses of Historical Trends in Education Research?
The methodology employed for this book involved collecting titles, abstracts, and citation counts via the Scopus API for all articles published in the top 20 journals (as identified by Google Scholar) in each of four targeted subdisciplines of education research (Teaching and Teacher Education, Educational Technology, Educational Psychology and Counseling, and Higher Education) from 1970 to 2020. Based on citation counts, we identified the top 20 articles for each decade in each subdiscipline, qualitatively coded them, and synthesized them into a coherent narrative for each decade. We also conducted keyword analyses of all articles to identify the most common words in titles and abstracts for each decade to support the narrative.

Education has changed drastically over the past century as laws, cultures, technologies, societies, and people have evolved in unpredictable ways. Less than a century ago, there were still states in the U.S. that did not have mandatory education laws and public schooling opportunities for children. Less than 70 years ago, Brown v. Board of Education declared that racial segregation in U.S. schools was unconstitutional, leading to the gradual dismantling of Jim Crow laws and policies into the 1960s and 1970s. In 1957, Sputnik was launched, and in 1961, Yuri Gagarin became the first human to travel into space, leading to a lengthy and expensive space race between world powers and a surge of investment in STEM fields at all education levels. IBM released its first personal computer in 1981, and Tim Berners-Lee invented the World Wide Web in 1989. That same year, the Berlin Wall fell, paving the way for the dissolution of the Soviet Union and a democratic resurgence of knowledge- and idea-sharing among people from culturally divergent and historically antagonistic countries across the world. In the 1990s, the internet became commonplace in households across the U.S. and many other developed countries, the wireless networking technology that would become Wi-Fi emerged, and IBM released the first smartphone in 1994. The first Apple iPhone was released in 2007, giving laypeople more computer processing power in a single pocket than was available to all of NASA when it landed men on the moon less than 40 years earlier. Throughout these decades of technology-enabled globalization, most countries’ populations became increasingly heterogeneous, with greater diversity of represented races, ethnicities, languages, and other identity constructs.

Education and education research have evolved alongside these social and technological changes, with needs evolving, paradigms splintering, methodologies adjusting, learners changing, and institutions adapting to ever-fluctuating contexts and demands. In conjunction with growing populations, growing numbers of scholars, and increased research and communication efficiencies, this has led to more education research being produced each year than the year before. Taken together, this means that the past 50 years in particular have given rise to an unprecedented explosion of education research that is both qualitatively and quantitatively different from anything the world has seen. In part to deal with this, the education field proper has further fragmented into a variety of subdisciplines representing discrete social sciences and professional communities, such as teacher education or educational technology, allowing it to grapple with continually refined educational problems with increasing sophistication, accuracy, and focus.

One problem that this explosion has posed for modern education researchers, practitioners, and policymakers, however, is that in such a wide sea of fragmented research, it is often difficult to get a handle on what we know, where we are as a field, and where we are going. As thousands of new studies are conducted annually, how can any single person manage to understand what we are collectively learning, and how can we translate this into improved practice and an enlightened vision for future research?

This book attempts to tackle this complex problem by providing a bibliometric-driven synthetic history of the past 50 years of education research across a variety of education-related subdisciplines. In compiling a 50-year synthesis for each subdiscipline, we aim to help the reader recognize the dominant trends and topics within the subdiscipline across time, the current state of the subdiscipline, and how these takeaways relate to education proper. By identifying the most impactful research articles and topics within subdisciplines and synthesizing them in a cohesive manner, we attempt to show how research in education subdisciplines has evolved across the past five decades and what trends emerge within and between subdisciplines, thereby helping us better understand the current state and trajectory of education research.

Synthesizing research across an entire discipline or even a subdiscipline for any amount of time is a daunting task, but doing so across 50 years would be altogether impossible if we did not employ some strict parameters to focus our efforts on what would be expected to be most beneficial. For instance, each subdiscipline represented in this book might consist of hundreds of professional venues publishing tens of thousands of publications each year. These venues might include established or new peer-reviewed research journals, predatory (pay-to-publish) journals, white papers, trade magazines, conference proceedings, and professional blogs. Additionally, even within a particular venue, publications differ: even the most prestigious research journals often publish research articles alongside less substantive items, such as errata, book reviews, or editorials.

To assist in identifying publications that should attract our attention, bibliometric researchers have long utilized impact indicators to help readers determine the relative importance of both venues and publications to a scholarly community. In the case of venues, metrics such as impact factor (Finardi, 2013; Garfield, 2003), h-indices (Bornmann & Daniel, 2007), and other citation indices are often used as rankings of quality, ostensibly equating a venue’s “impact” on its professional field with the number of times publications within the venue are cited by other publications, counted in specific ways (e.g., ignoring self-citations). Though metrics like these are not perfect representations of a venue’s value and have been widely critiqued for their vapidity (Seglen, 1997) or ability to be distorted (Alberts, 2013) or gamed (Huang, 2016; PLoS Medicine Editors, 2006), such factors have been shown to correlate with other expected measures of venue quality, such as professionals’ perceptions of a journal’s value (Saha et al., 2003), level of research evidence (Amiri et al., 2013), and likelihood of future scientific achievement (Hirsch, 2007). They can also be measured with relative ease and are widely used, making them more workable than more sophisticated alternatives.

Central to this notion of impact is the assumption that the number of times a particular publication is cited has meaning and that, for instance, the research articles having the most impact are also the ones cited most often. Though citation indices are by no means a perfect measure of impact or value, they may nonetheless play an important role in more holistic interpretations of venue and article quality that deserve our attention (West & Rich, 2012). Thus, though citation counts certainly do not tell us the whole story of a publication’s impact, they tell us something and at least give us something worthwhile to work with moving forward.

To balance this tension between measurability and holistic understanding, this book represents a combination of bibliometric study and literature analysis, where we use bibliometrics to identify venues and articles for analysis but then rely upon both qualitative and machine analyses of articles to draw conclusions. To scope our analyses via bibliometrics, we focused on venues identified by Google Scholar as the most impactful in each subdiscipline, as calculated by h5-indices (Google Scholar, n.d.). The h5-index “is the h-index for articles published in the last 5 complete years … [and is] the largest number h such that h articles … have at least h citations each.” In other words, if the journal Educational Technology Research and Development has an h5-index of 41, this means that in the past 5 complete years, the journal has published 41 articles that have each been cited at least 41 times. Such a metric helps to deal with outlier articles (e.g., one that has thousands of citations) or outlier years and allows us to identify journals that have been fairly stable in the citation counts of their articles over time.
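To make this computation concrete, here is a minimal Python sketch of an h-type index (the function name and sample citation counts are our own, offered purely for illustration):

```python
def h_index(citation_counts: list[int]) -> int:
    """Return the largest h such that h articles have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):  # rank 1 = most-cited article
        if cites >= rank:
            h = rank
        else:
            break
    return h

# For an h5-index, pass the citation counts of every article a venue
# published in the last 5 complete years.
print(h_index([10, 8, 5, 4, 3]))  # 4: four articles have at least 4 citations each
```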

For each subdiscipline, we used the Elsevier Scopus API to retrieve all article titles, citation counts, and other information for the top 20 venues provided by Google Scholar for the years 1970 to 2020 inclusive. For a complete list of included journals, please refer to the Journal List in the appendix. We stored relevant data in a relational database for further query and analysis, resulting in roughly 20,000–30,000 articles per subdiscipline for the 50-year span. This long historical window introduced additional challenges for metrics like citation counts because an article’s lifespan significantly affects its citation count (e.g., an article published in 2010 had 10 years to garner citations, while an article published in 2019 had only a year or two). To account for these lifespan differences, we converted raw citation counts to citations per year by dividing the raw citation count by the elapsed time (in days, but normalized to years) since the article was published.
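A minimal sketch of this normalization, assuming a data-collection cutoff at the end of 2020 (the cutoff date and function name are our assumptions, not details reported above):

```python
from datetime import date

def citations_per_year(raw_citations: int, published: date,
                       collected: date = date(2020, 12, 31)) -> float:
    """Normalize a raw citation count by the article's lifespan in years."""
    elapsed_days = (collected - published).days
    # Measure elapsed time in days, normalized to years; guard against a
    # zero-day lifespan for articles published on the cutoff date itself.
    elapsed_years = max(elapsed_days, 1) / 365.25
    return raw_citations / elapsed_years

# A 2010 article with 300 citations vs. a 2019 article with 60:
print(round(citations_per_year(300, date(2010, 6, 1)), 1))  # ~28.3 per year
print(round(citations_per_year(60, date(2019, 6, 1)), 1))   # ~37.9 per year
```

Note how the normalization can rank a recent article above an older one even when the older article’s raw citation count is several times larger.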

Since the goal of this book is to understand research trends, articles labeled by Scopus as being of other types (e.g., book reviews, errata) were removed. We then organized articles based on year, decade, and citations per year counts to identify (a) the most cited articles per decade and (b) the most cited articles per year. Researchers then reviewed the top 20 most cited articles per decade plus any most cited articles per year that were not included. Researchers compared, contrasted, and synthesized each article’s questions, methods, and results with other top articles from the same decade, allowing for a series of snapshots to be created for the 1970s, 1980s, 1990s, 2000s, and 2010s. Articles for the year 2020 were treated as a separate decade and were also synthesized for the purpose of showing current trajectories in research.
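As a sketch of this ranking step (the column names and toy data are hypothetical; the actual analysis queried a relational database of tens of thousands of articles per subdiscipline):

```python
import pandas as pd

# Toy stand-in for the article database described above.
articles = pd.DataFrame({
    "title": ["A", "B", "C", "D"],
    "year": [1994, 1998, 2003, 2007],
    "citations_per_year": [12.0, 30.5, 8.2, 25.1],
})
articles["decade"] = (articles["year"] // 10) * 10

# (a) Most-cited articles per decade (the book reviewed the top 20; we take 2 here).
top_per_decade = (articles.sort_values("citations_per_year", ascending=False)
                          .groupby("decade")
                          .head(2))

# (b) The single most-cited article per year.
top_per_year = articles.loc[articles.groupby("year")["citations_per_year"].idxmax()]
```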

To assist in the creation of these decade-framed snapshots, a bag-of-words approach to natural language processing was used to identify dominant keywords and bigrams in article titles for each decade represented. Keywords were single words appearing in titles, such as “school,” “teacher,” or “intervention.” Bigrams were two-word sequences appearing in titles, such as “higher education,” “elementary school,” or “policy perspective.” Stopwords (e.g., “a,” “an,” “the”) were removed, and words were truncated to their roots, with an asterisk signifying truncation (e.g., “school,” “schools,” and “schooling” were all truncated to “school*”).
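A minimal bag-of-words sketch of this counting process (the stopword list is abbreviated for illustration, and the truncation of words to their roots is omitted; a stemmer such as NLTK’s PorterStemmer could fill that role):

```python
import re
from collections import Counter

STOPWORDS = {"a", "an", "the", "of", "in", "and", "for", "on", "to"}  # abbreviated

def tokens(title: str) -> list[str]:
    """Lowercase a title, split it into words, and drop stopwords."""
    words = re.findall(r"[a-z]+", title.lower())
    return [w for w in words if w not in STOPWORDS]

def keyword_counts(titles: list[str]) -> tuple[Counter, Counter]:
    """Count single keywords and adjacent-word bigrams across titles."""
    unigrams, bigrams = Counter(), Counter()
    for title in titles:
        ts = tokens(title)
        unigrams.update(ts)
        bigrams.update(zip(ts, ts[1:]))  # adjacent word pairs
    return unigrams, bigrams

titles = ["Teacher beliefs in higher education",
          "Higher education policy from a teacher perspective"]
uni, bi = keyword_counts(titles)
print(uni.most_common(3))  # [('teacher', 2), ('higher', 2), ('education', 2)]
print(bi.most_common(1))   # [(('higher', 'education'), 2)]
```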

For each subdiscipline, we will now proceed by providing some background on the subdiscipline and previous attempts at long-term synthesis, sharing snapshots of each of the five decades and the year 2020, and synthesizing results in terms of (a) most important issues, topics, and trends, (b) missing links, topics, and trends, and (c) discussion and implications. We will then conclude the book by providing some additional synthesis of our findings across all subdisciplines to shed light on the history and state of education proper as a macrodiscipline.

References

Alberts, B. (2013). Impact factor distortions. Science, 340(6134), 787. https://doi.org/10.1126/science.1240319

Amiri, A. R., Kanesalingam, K., Cro, S., & Casey, A. T. (2013). Level of evidence of clinical spinal research and its correlation with journal impact factor. The Spine Journal, 13(9), 1148-1153.

Bornmann, L., & Daniel, H. D. (2007). What do we know about the h index? Journal of the American Society for Information Science and Technology, 58(9), 1381-1385.

Finardi, U. (2013). Correlation between journal impact factor and citation performance: An experimental study. Journal of Informetrics, 7(2), 357-370.

Garfield, E. (2003). The meaning of the impact factor. International Journal of Clinical and Health Psychology, 3(2), 363-369.

Google Scholar. (n.d.). Top publications. Google Scholar. https://edtechbooks.org/-viUu

Hirsch, J. E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences, 104(49), 19193-19198.

Huang, D. W. (2016). Positive correlation between quality and quantity in academic journals. Journal of Informetrics, 10(2), 329-335.

PLoS Medicine Editors. (2006). The impact factor game. PLoS Medicine, 3(6), e291.

Saha, S., Saint, S., & Christakis, D. A. (2003). Impact factor: A valid measure of journal quality? Journal of the Medical Library Association, 91(1), 42.

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 497.

West, R. E., & Rich, P. J. (2012). Rigor, impact and prestige: A proposed framework for evaluating scholarly publications. Innovative Higher Education, 37(5), 359-371.
