A Look at the Future of Open Educational Resources

Keywords: artificial intelligence, licensing, cloud, decentralized networks, content addressable resources for education (CARE), Interplanetary File System, GitHub, OpenAI, AI, Jupyter Graffiti
Open Educational Resources (OER) have traditionally been defined as educational content that resides in the public domain or has been released under an open license that permits no-cost access, use, adaptation and redistribution. As the nature of educational content changes with new technology, however, so does the nature of OER. This paper explores the impact of four major types of technology on our understanding of OER: cloud infrastructure, open data, artificial intelligence, and decentralized networks. It is argued that these technologies result in a model of dynamic and adaptive resources that will be created at the point of need and will draw on constantly changing requirements and data sources. They will be created through distributed community-based processes, and they will support a pedagogy based on supporting student experiences rather than content transmission. As a result, the emphasis on content publication and licensing will decrease, while questions of access and interoperability will move to the fore.


Introduction

Online and distance education have been from the outset dependent on the design and distribution of learning resources. Absent the traditional face-to-face instruction offered by a teacher or professor, it was necessary to develop what were called ‘course packages’ containing readings, quizzes and exercises, and guidance to help the students manage their own learning in the absence of a classroom.

Traditionally these packages were proprietary to the institution offering the course; each institution would create its own course package.

Additionally, materials would be created by publishers for use in both distance education and traditional classrooms. Gradually, however, there emerged a desire to make use of new Internet technologies, to pool resources, and to be able to share the cost and benefit of learning resources between teachers and institutions. This practice became widespread, and ultimately included high-profile examples such as MIT’s OpenCourseWare.

Concurrently, in the field of computer technology a similar desire led to the creation of a type of computer program intended for sharing. Originally, programs were distributed as ‘shareware’ and were free to use but could not be sold. Operating systems such as GNU/Linux were distributed as ‘free software’, where the right to use and redistribute the software was defined by what Richard Stallman called the “four freedoms”: the freedom to run the program, the freedom to read the source code, the freedom to modify the program, and the freedom to redistribute the program under the same license.

These ideas came together in the form of ‘open educational resources’ (OER). The idea was that educational content could be ‘free’ in the same manner as free software by licensing it using an open content license. Around the same time, an organization called Creative Commons introduced a set of licenses designed for this purpose. Thus, OER came to be defined (by organizations such as UNESCO) in terms of its licensing: “Open Educational Resources (OER) are teaching, learning and research materials in any medium—digital or otherwise—that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions” (UNESCO, 2002).

The development of the concept of the OER raised at the same time the question of the sustainability of OER. Course packages can be expensive to produce, and the expectation among advocates of OER was that students would not pay for them. Initial OER projects were funded by governments, institutions and foundations, but generally with the expectation that these projects would become self-sustaining over time. The development of OER thus began to focus on commercial viability, and models of OER distribution came to include bundling (where an OER is combined with a commercial product for sale, thus making access to the OER contingent on purchasing the commercial content), enclosure (where access to OERs is limited by the requirement to pay tuition or a subscription fee), or conversion (where a free resource is converted to a commercial resource, for example, by changing it from digital form to paper-based form).

Additionally, the nature of digital resources, and of online learning generally, began to change. The early web was dominated by pages and documents, but the later web (often referred to as web 2.0) focused on social interactions and user-generated content. This change impacted online learning as well, and the focus shifted from course packages to online interaction. The development of the MOOC beginning in 2008 led to a model where students created and distributed their own educational resources and participated in learning networks.

In the present day, the model whereby publishers create and distribute openly licensed static content is drawing to a close. A ‘web page’ today is actually a dynamic resource, connected to live data generated by cloud services. The contents can change minute by minute, and these changes are often driven by the activities of people using the page. What defines an OER may actually be the page design, or the pedagogical practice it supports, rather than the content created and transmitted by its users.

The concept of the OER is in flux. The purpose of this article is to focus on how these technological changes are changing the nature of OER. It will look at the impact of four key technologies—cloud technologies, open data, artificial intelligence, and content-based addressing. It’s true that in discussions of educational resources we don’t necessarily want to begin by focusing on technology, but in this case understanding the technology is important because the technology is going to create some affordances for us that will change the shape of open educational resources within ten to twenty years.

In the final two sections, we will return to the pedagogical question, examine the impact of these changes, and discuss how we, in the educational sector, will need to adapt in response to that impact in order to shape it in the future.

New OER Technologies

Cloud

Access to content that is stored on the cloud requires an Internet connection. It’s true that a lot of people, and especially people in the global south, cannot easily access cloud-based resources, but more and more as time goes by, access will improve and we will be looking at cloud environments and cloud technologies in order to support open educational resources.

By ‘cloud’ hosting, we mean storing and accessing our content on computers accessible through the Internet. What’s important about these computers is not simply that they are hosted and managed by Internet service providers, but also that the resources are not on any particular computer, and indeed, might be spread across a number of computers.

What that means is a shift from resources created by content providers or publishers to resources created collaboratively or cooperatively.

For example, Figure 1 depicts a web-based article about open educational resources. On the screen, we see what looks like an ordinary website, but this website is actually hosted on a site called GitHub (https://github.com/). What's important about this website is that it isn't just a website. It's something that multiple people can contribute to.

GitHub enables people to create their own copy of, or ‘clone’, the website in question. Or they can start editing the document to create a new version, known as a ‘fork’ of the original article.

GitHub was originally designed for cloud-based collaborative authoring of software, but sites like this demonstrate that it can be used for any sort of content.

Figure 1. OER located on GitHub.

This changes the dynamics of open publishing and open educational resource publishing because it removes the divide that exists in the traditional environment between the author and publisher and the consumer. It makes the consumer equally a part of the creation.

In addition to creating and reading documents in the cloud, we can create and run full applications on these remote computers. These applications are encased in virtual machines or ‘containers’. We can run them and interact with them through a web browser, or, just like the contents of a cloud-based document, we can download these applications to our own computer and run them there. Services like Vagrant, Docker and Kubernetes make this possible today.
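
As a minimal sketch of what this looks like in practice (assuming Docker is installed locally and the Docker SDK for Python is available; the image name is a community-maintained Jupyter image, used here only as an example), a containerized application can be pulled from the cloud and run on one's own machine in a few lines:

```python
# Minimal sketch: pull a containerized application from the cloud and run it locally.
# Assumes Docker is installed and the Docker SDK for Python ("docker" package) is available.
import docker

client = docker.from_env()

# Run a community Jupyter notebook image (example only), exposing it on port 8888.
container = client.containers.run(
    "jupyter/base-notebook",
    ports={"8888/tcp": 8888},
    detach=True,
)

print(f"Container {container.short_id} is running; open http://localhost:8888 in a browser.")
# When finished: container.stop(); container.remove()
```

The same container can just as easily be run on a cloud host, which is the point: the application itself, not any particular machine, is the resource.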

What this means is that the types of resources that we will be working with in the future as open educational resources will not simply be documents, will not simply be textbooks, but will actually be functioning programs and even fully functioning virtual computers that people can work with, manipulate, use to create things like videos or audio or new applications of their own, develop their own content, and share them over the cloud.

“Open Data is an umbrella term describing openly-licensed, interoperable, and reusable datasets which have been created and made available to the public” (Atenas & Havemann, 2015).

Open Data

In addition to cloud hosting, and partially as a result of it, people are beginning to think about open data as a new type of open learning resource.

For example, in Canada's open data portal (https://edtechbooks.org/-vQs) readers can browse by subject. Under a topic like ‘law’, for example, they can research the law of monetary penalties, statistics, questionnaires that members are asking people to fill out, and so on. This is all part of open government. But it is also a whole set of resources that are accessible as educational resources.

Figure 2. Government of Canada Open Data Portal

Because it is data, it is not really usable directly as a learning resource—it is not structured with educational outcomes in mind. However, when open data is made available through an application programming interface (API), it can be integrated into learning resources. The Government of Canada has created a new ‘API Store’ (https://edtechbooks.org/-HzHZ) which hosts and publishes APIs that allow developers to access and leverage government datasets and services for integration into apps or other services.

An example of this is an application called Jupyter Notebooks (https://jupyter.org/). Jupyter Notebooks are online text-based notebooks containing computer programs, and readers can run the programs a notebook contains from within the notebook itself. They can change a program from inside the notebook and then run it again, producing a new result. Readers can either download the Jupyter Notebook application to run on the desktop, or they may access a service called Binder (https://mybinder.org/) to read and use a Notebook through a web browser.

Additionally, because the Notebook is running an actual computer program, it can access live data as it runs. For example, a notebook might address an analysis of housing in Eastern Canada. It may contain a program that displays housing data in a graph or diagram. Each time the program is run, this data is accessed anew from the API and the presentation of information in the Notebook is fully current (Hirst, 2018).
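
A minimal sketch of this pattern follows; the dataset URL and field names are hypothetical placeholders rather than a real Government of Canada endpoint, but the structure (fetch from an API at run time, then render) is what keeps the notebook current every time it is run:

```python
# Minimal sketch: re-fetch open data on every run so the chart is always current.
# The URL and field names below are hypothetical placeholders, not a real endpoint.
import requests
import matplotlib.pyplot as plt

API_URL = "https://example.gc.ca/api/housing-starts"  # hypothetical open-data endpoint

response = requests.get(API_URL, params={"region": "eastern-canada"}, timeout=30)
response.raise_for_status()
records = response.json()  # assume a list of {"year": ..., "starts": ...} objects

years = [row["year"] for row in records]
starts = [row["starts"] for row in records]

plt.plot(years, starts, marker="o")
plt.title("Housing starts, Eastern Canada (live open data)")
plt.xlabel("Year")
plt.ylabel("Housing starts")
plt.show()
```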

The potential is enormous. For example, Naughton (2016) takes a student “from an idea for a protein all the way to expression of the protein in a bacterial cell, all without touching a pipette or talking to a human.” The post includes embedded computer code and interoperates with a ‘cloud lab’ to actually manipulate the instruments and create the protein samples.

Additionally, there is a program called Jupyter Graffiti that enables an instructor to animate a Jupyter Notebook, in other words, to display the operation of the program as though it were a video: “Jupyter Graffiti are recorded, interactive demonstrations that live inside your Notebooks …. Since a Graffiti ‘video’ is a live replay of the instructor’s interactions, you can pause it any time—and when it’s paused you can dive in to play with the instructor’s work right in the Notebook (execute it, copy it, change it, execute it again)—and then resume playback when you’re ready” (Kessler, 2019).

Graffiti thus blends the instructor role, which is to model and demonstrate, with the learner role, which is to practice and reflect.

So the document isn't just a document anymore; it's a computer program that we can change and run again, thereby learning both about the subject matter and about computer programming. These computer programs can use open data, such as the data we just looked at on the Government of Canada website, as their input. So we can be working with open data using a Jupyter Notebook running either in the browser or on the local desktop.

This changes the conception of an educational resource from something static to something that's interactive, to something that can be used to create, as well as to consume. An educational resource isn’t a single resource that’s served from a static web server. It is part of an environment sometimes called a ‘headless website’ or ‘decoupled CMS’ (Koenig, 2018). The database is located in one place, the web page is located in another place, the programming environment is in another place, and these can be either in the cloud, or on a local area network, and users can switch back and forth from Internet to cloud as they wish.

AI will be used to facilitate learning processes, provide student support, assessment and feedback, manage business processes, and help with identity and security.

Artificial Intelligence

Open AI services and open artificial intelligence algorithms are already becoming available and are beginning to be used in online learning. For example, the OpenAI project (https://openai.com/) offers “open-source software tools for accelerating AI research, and release blog posts to communicate our research.” Related projects include the OpenAI Gym (https://gym.openai.com/docs/) and various cloud AI projects offered by companies like Google and Microsoft. Additionally, many resources are available through Jupyter Notebooks to help people learn about artificial intelligence.

What is relevant to open education is that the services offered by these programs will be available as basic resources to help build courses, learning modules, or interactive instruction. For example, Figure 3 illustrates a simple case. It takes the URL of an image, loads it, and connects to an online artificial intelligence gateway offered by Microsoft as part of its Azure cloud services, using an API key generated from an Azure account.

The Azure AI service automatically generates a description of the image, which is used as an alt tag, so the image can be accessible; the alt tag can be read by a screen reader for those who aren't able to actually see the image. In this case, the image recognition technology automatically created the text “a large waterfall over a rocky cliff,” along with a more complete set of analytical data about the image.

Figure 3. AI-based image captioning with Azure (https://edtechbooks.org/-WJF)
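
The workflow behind Figure 3 can be sketched in a few lines of Python. This is only a sketch against the Azure Computer Vision ‘analyze’ REST endpoint: the endpoint host, API key, and image URL are placeholders to be supplied from one's own Azure account, and the exact response fields may differ across service versions:

```python
# Minimal sketch: ask Azure Computer Vision to describe an image, then use the
# caption as an alt tag. Endpoint, key, and image URL are placeholders.
import requests

ENDPOINT = "https://YOUR-REGION.api.cognitive.microsoft.com"  # from your Azure account
API_KEY = "YOUR-AZURE-KEY"                                     # from your Azure account
IMAGE_URL = "https://example.org/waterfall.jpg"                # hypothetical image

response = requests.post(
    f"{ENDPOINT}/vision/v3.2/analyze",
    params={"visualFeatures": "Description"},
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={"url": IMAGE_URL},
    timeout=30,
)
response.raise_for_status()
analysis = response.json()

# The first caption candidate becomes the alt text,
# e.g. "a large waterfall over a rocky cliff".
caption = analysis["description"]["captions"][0]["text"]
print(f'<img src="{IMAGE_URL}" alt="{caption}">')
```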

This may appear to be a trivial example, but it addresses a clear need in the creation and use of open educational resources. It reduces the need for humans to create image metadata, thereby making the images much more discoverable, and much easier to use to create open and accessible resources.

The widespread availability of AI will make these capacities available not only to instructors and developers, but to everyone, greatly enhancing the capacity of people to create their own learning resources without relying on publishers.

Artificial Intelligence has wide application in education. A recent survey (HolonIQ, 2019) projected significant impact not only from artificial vision and image recognition technology, but also from voice and language processing, algorithms and hardware.

What’s important is not simply that artificial intelligence exists, but that it will be easily accessible as a service to the population as a whole. For example, some journalists created a facial recognition machine for only USD 60 (Chinoy, 2019). It uses input from publicly accessible web cameras showing people walking on the street, and compares the faces to images of people on nearby corporate websites. The facial recognition software is a service (on theoreti.ca, Geoffrey Rockwell suggests it might be Amazon’s Rekognition). This is something almost anyone could do.
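
To give a sense of how little code such a service requires, here is a hedged sketch using the boto3 client for Amazon Rekognition (one plausible service for this task, as noted above); the file names are hypothetical and AWS credentials are assumed to be configured separately:

```python
# Minimal sketch: compare a face captured from a webcam frame against a photo
# taken from a public website, using Amazon Rekognition via boto3.
# File names are hypothetical; AWS credentials are assumed to be configured.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("webcam_frame.jpg", "rb") as source, open("staff_photo.jpg", "rb") as target:
    result = rekognition.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,
    )

for match in result["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")
```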

While to date most applications of AI discussed in relation to education and learning have been in the areas of learning analytics and automated course generation, it is arguable that in the future the more useful applications will actually support interactivity and community-based creation of open educational resources. For example, Cognii (http://www.cognii.com/) is "enabling personalized deeper learning, intelligent tutoring, open response assessments, and pedagogically rich analytics", Magpie (https://edtechbooks.org/-wDXT) "provides learning opportunities based on challenges" such as tests or quizzes, and X5GON (https://www.x5gon.org/) "fully automates the creation of OER courses." AI technologies will provide people with ways to interact with remote services in a way that helps them create new multimedia artifacts to be used for teaching, for art, or for business; it might help them create these by generating alt tags, by criticizing their text, or by generating some text for them (deWaard, 2019).

Content Addressable Resources for Education

To introduce the concept of Content Addressable Resources for Education (CARE) we need to look more deeply at some of the technologies previously discussed. Supporting these are technologies sometimes categorized under the heading of ‘blockchain’. But the word ‘blockchain’ is not really a good descriptor, because it shifts the focus to crypto-currencies and financial networks. The wider term ‘distributed ledger technology’ is more appropriately applied to the methods being used to store and access digital resources on distributed and decentralized networks.

An example of such a network is the Interplanetary File System (https://ipfs.io/). The idea is this: instead of accessing an online resource using a URL the way web browsers work now, we access the resource based on its content, using what is called ‘content-based addressing’ (Benet, 2014). The URL used on the web today references the location of a web resource; that is, it is associated with the Internet address of a specific web server. So, someone accessing Uber.com is getting that site from a very specific service hosted on one specific server.

Figure 4. Distributed Hash Table

This system has already been modified to a considerable degree to address weaknesses in the concept. A single server might be far away, and it might be a single point of failure. So, systems of load balancing and content distribution networks treat the URL as a virtual address and redirect requests to where the content is actually located. Despite these improvements, location-based access protocols still depend on a single point of failure: if the resource is not at that location, it cannot be found at all, except through indirect means such as a web search, and if the address is ‘spoofed’, people may end up downloading unwanted content.

With content-based addressing the user is essentially asking whether anyone has some specific content. This content might be located anywhere on the network. It is expected that it may be in multiple locations on the network. In the case of blockchain technologies like Bitcoin, every node in the network has the content being requested, so the nearest node can respond. In the case of IPFS, a subset of the nodes will have the content, and so the request may be passed from one node to the next until the content is found. In the case of GitHub, individuals can have copies of their own subsets of the content stored locally, and use content addressing for version control and updating.

Content-based addressing is important because it allows us to have multiple copies of a resource out there on the Internet, and once a resource is created and published in this way, it is permanently open. It is permanently open because there are multiple independent copies of this resource. So, things like licensing become less and less important.

To make content easier to identify, instead of relying on the entire content, content-based networks generate a ‘hash’ of the content. This is a cryptographic digest of the content, that is, the output of a hashing algorithm, such that for any given resource there is a unique hash value, and this value maps to that resource, and only that resource. So, the search is based on the hash value, and anyone who has a resource matching that value can send the resource. For security, the recipient can apply the hashing algorithm to any content they receive to check whether the hash of what they were sent matches the hash they were asking for. If it does, they know they have been sent the real resource.
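
A minimal sketch of this verification step follows, using Python's standard hashlib with a plain SHA-256 digest; real networks such as IPFS wrap the digest in additional encoding (e.g., multihash), but the principle is the same:

```python
# Minimal sketch of content addressing: derive an address from the content itself,
# then verify that whatever a peer returns actually matches the requested address.
# This uses plain SHA-256; real networks such as IPFS wrap the digest in a multihash.
import hashlib

def content_address(data: bytes) -> str:
    """Return a hex digest that serves as the content's address."""
    return hashlib.sha256(data).hexdigest()

original = b"An open educational resource about housing data."
address = content_address(original)   # published address, independent of location

received = original                   # whatever some node on the network sent back
if content_address(received) == address:
    print("Hashes match: this is the resource that was asked for.")
else:
    print("Hash mismatch: reject the response.")
```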

Consequences

These new technologies provide the basis for speculation about the future of open educational resources.

First, the creation and the use of open educational resources will merge. In traditional educational publishing a resource is first created by an author and then later consumed by a reader. The purpose of the resource is to transmit information from the author to the reader. Even collective models of content creation, such as the wiki, operate in this way. The reader of a wiki expects to learn from content that has been created by the authors. Such a resource, while it may change from time to time, is generally static, and the flow of information is generally one-way, from producer to consumer.

However, new models of open educational resources will be more like tools that students use in order to create their own learning content, which they will then consume or use for some other purpose. For example, the educational use of a Jupyter Notebook, say, is not to present a certain body of content to the reader, but rather, to allow the reader to select their own source of open data, to manipulate that data by manipulating the algorithms provided, and then to use the results of that manipulation for their own purposes.

We see this, for example, in the development of the Creative Commons open educational strategies document, which is being authored by multiple people and shared on GitHub. The development of educational strategies is an ongoing process. It is not a process that needs to converge toward a single outcome; people will want to develop different strategies for different purposes and different environments. So the process is not (or should not be) based on collectively writing a single document, but rather, collectively working within a common environment for the production of documents as needed.

Thus, in an environment like GitHub, individuals can access this document, clone it, and have their own copy on their own computer. They can make changes to that copy and then recommend those changes back to the original authors, who are free to accept them or reject them. They can use what has been created as a starting point, and diverge from that point, or combine it with other content from other repositories, to create something completely unique.

From the pedagogical perspective, the learning happens not through the consumption of the content but through the use of the content. People learn to write computer programs, for example, by using GitHub to copy programs from other repositories and manipulate those programs (just as a person might borrow a tool and work with that tool).

Second, licensing issues fade into the background. This should be seen as a welcome development. Laws governing content licensing and copyright differ from jurisdiction to jurisdiction around the world, and the interpretation of even common licensing standards, such as Creative Commons, is often unclear and requires litigation to resolve (Harris, 2018, p. xi). The complexity of licensing content has prompted Creative Commons to create and offer a Certificate course in the subject (Creative Commons, 2019).

One reason licensing fades to the background is that most resources are created and used only once. The resource taps into current data and may be localized or adapted to the content consumer. The tools employed to manipulate the resources are adapted from a common ‘pattern language’ of open access algorithms and tools; proprietary tools simply aren’t useful in a one-off context such as data-driven online resources.

An additional reason is that the static components of the learning resources are distributed through decentralized networks. The nature of these networks is such that all nodes of the network participate in content distribution, and therefore, the contribution of content to the network grants de facto a license to reproduce the content. Access restrictions on content are therefore governed not by licensing, but rather, by access restrictions on the network as a whole, for example, through authentication.

Finally, access conditions previously stipulated by licensing are embedded in the resource itself. Technologies such as encryption, hashing and blockchain create a record of ownership and provenance of any resource, and the conditions related to access of the resource are recorded either indirectly, through means of access controls, or directly, by means of a smart contract (Bodó, Gervais, & Quintais, 2018).

Third, the form of learning changes with the use of next-generation open educational resources. Developers are now able to use live data for real world applications, or local or downloaded data for training or for simulations. This shifts the locus of learning from the content—which will change on a day-to-day basis—to the use or application of the resource. For example, if an educational resource consists of a Jupyter Notebook containing an averaging algorithm, ‘learning’ will not consist of remembering the algorithm; rather, it will consist in using and modifying the algorithm in order to adapt it to novel scenarios.
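
As a purely hypothetical illustration, such a notebook cell might look like the following; the learning lies in running, modifying and repurposing it (for example, turning the simple mean into a weighted average), not in memorizing it:

```python
# Hypothetical notebook cell: the point is not to memorize the algorithm but to
# run it against new data, then modify it (e.g. into a weighted or moving average).
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

monthly_prices = [412_000, 418_500, 421_000, 419_750]   # sample data; swap in live open data
print(f"Average price: {average(monthly_prices):,.2f}")

# A learner might adapt the cell, e.g. weighting recent months more heavily:
def weighted_average(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

print(f"Weighted average: {weighted_average(monthly_prices, [1, 2, 3, 4]):,.2f}")
```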

Because students are learning through practice and use, the learning ‘content’ (that is, the tools and algorithms) can be the same in the classroom or learning environment as they are in an actual work environment. It is, for example, like learning architecture by using the same computer-assisted drawing (CAD) software as is used by professional architects, using data drawn from open architectural drawing data networks (OPSHub, 2018).

What’s Needed?

What do we need, what do we need to know, what do we need to master, in order to get to this?

The first, and perhaps most important, is to change our mindset a bit. We need to change our framing, and in particular, we need to start thinking in terms of data and networks rather than documents, to get away from the idea that we're publishing course packages, chapters, and modules. The existing system of learning and publishing is designed around static and unchanging resources; in this future, however, resources will need to be created as needed to address current data and current contexts.

The focus of instructional design, therefore, shifts from a foundation of content-based learning objectives to one based on (perhaps less-well defined) capacities and skills. These capacities and skills will themselves be fluid and adaptive to current environments, and learning to work in these environments will be more like achieving a fluency rather than remembering specific sentence structures or even vocabularies.

Instructional designers should be thinking in terms of environments and experiences. These environments will need to be fit for purpose—that is, they will need to generate real outcomes, whether they are used to design a building or to pilot a ship. Designers will also need to focus on the experiences learners have in these environments. It is not about the contents of the resource anymore; rather, the contents come from open data, and this data might be anything possible within the constraints of the system.

Second, it will take some time for instructors and designers to learn how to think this way. GitHub, for example, has a steep learning curve (GitLab, 2017). There is a change of perspective required in order to see works (whether software or content or other media) as dynamic, as branched, as modular, and as interoperating. Instructors and designers will require user-friendly interfaces that assist in this change of perspective. This will require something like a content management system for next-generation interactive cloud technology. In the early years of the web, open educational resources were really difficult to create until things like Blogger and Facebook and Twitter and some publishing services like Rice's Connexions came along. This is what will be needed for this next generation as well.

Again, it's a shift in focus from the content to the interactions and operations. It's about how to merge this data with this application or this capacity or this bit of artificial intelligence to create a learning experience for a person. This is a very different way of thinking about instruction and instructional design than what instructors and designers may be used to, and it will require practice and application on new learning design systems in order to support this transition.

Third and finally, designers and developers will need to learn to co-create cooperatively. This is not the same as collaboration, where small or large teams work on a certain product or outcome. Cooperative work involves multiple individuals and groups working within a common environment or infrastructure, and helping support that network or infrastructure for mutual benefit, while working on different objectives or outcomes.

Part of this involves building and sharing resources in common. But an equally large part of it involves being able to work in the open, or as it is sometimes called, ‘open working’. Examples exist in, say, the philosophy of ‘open science’, where “many of the benefits envisaged for open methods relate to how far they enable not only access but active participation in a research community by newcomers and outsiders, and maintain low barriers to this participation” (Whyte & Pryor, 2011). Internships, co-op student placements, apprenticeships and sport development leagues all embody the same principle.

Concluding Remarks

Students today face the challenge of complex and rapidly changing work and study environments.

These challenges, and the affordances enabled by new technologies, are driving a new generation of learning resources. These resources will be dynamic and adaptive. They will be created at the point of need by AI-assisted learning design systems and will draw on constantly changing requirements and data sources. These resources will not teach by means of content transmission, but rather, will require that students interact with both the data and algorithms, modifying the resource and creating solutions to real-world challenges. They will work using the same tools as people already working in the field, adapting to changes in the tool alongside the experts, working with and alongside them in a cooperative open working environment.

In this scenario, our understanding of the concept of the ‘open educational resource’ changes from a definition based on the concepts and metaphors of textbooks and libraries toward one based on the concepts of data-processing networks, cloud services and applications, decentralized encryption-based ledgers, and AI-assisted design and information processing. OERs will no longer facilitate learning by means of content transmission, but rather by constituting parts of, and working within, distributed cooperative networks, supporting the student experience as students become fluent in new challenges and new technologies.

References

Atenas, J., & Havemann, L. (Eds.). (2015). Open Data as Open Educational Resources: Case studies of emerging practice. London: Open Knowledge, Open Education Working Group. Retrieved from https://edtechbooks.org/-piom

Benet, J. (2014). IPFS - Content Addressed, Versioned, P2P File System. arXiv, submitted 14 July 2014. Retrieved from https://edtechbooks.org/-vqyB

Bodó, B., Gervais, D., & Quintais, J. P. (2018). Blockchain and smart contracts: the missing link in copyright licensing? International Journal of Law and Information Technology, 26(4), 311–336. Retrieved from https://edtechbooks.org/-WLa

Chinoy, S. (2019). We Built a (Legal) Facial Recognition Machine for $60. New York Times, April 16, 2019. Retrieved from https://edtechbooks.org/-RKy

Creative Commons. (2019). Creative Commons Certificate. Retrieved from https://certificates.creativecommons.org/

deWaard, I. (2019). Artificial Intelligence in Education focusing on the Skills3.0 project. Elearning Fusion conference, Warsaw, Poland, April 10, 2019. Retrieved from https://edtechbooks.org/-CYoh

GitLab. (2017). 2016 Global Developer Report. Retrieved from https://edtechbooks.org/-BSj

Harris, L. E. (2018). Licensing Digital Content: A Practical Guide for Librarians (3rd ed.). American Library Association, December 18, 2018. Retrieved from https://edtechbooks.org/-BEHN

Hirst, T. (2018). Jupyter Notebooks Seep into the Everyday… OUseful.Info, the blog…, September 28, 2018. Retrieved from https://edtechbooks.org/-hIXW

HolonIQ. (2019). Adoption of AI in education is accelerating. Massive potential but hurdles remain. HolonIQ website, March 31, 2019. Retrieved from https://edtechbooks.org/-hPDs

Kessler, W. (2019). ‘Jupyter Graffiti’ Interactive Screencasts Make Their Debut in Our New C++ Nanodegree Program. Udacity, April 17, 2019. Retrieved from https://edtechbooks.org/-vVIV

Koenig, J. (2018). Headless Websites: What's the Big Deal with Decoupled Architecture? Pantheon, November 1, 2018. Retrieved from https://edtechbooks.org/-xJGo

Naughton, B. (2016). Engineering Proteins in the Cloud with Python and Transcriptic, or, How to Make Any Protein You Want for $360. Boolean Biotech, March 21, 2016. Retrieved from https://edtechbooks.org/-iLC

OPSHub. (2018). Enterprise Architect Integration with Jama and GitHub. Retrieved from https://edtechbooks.org/-iwNr

Whyte, A., & Pryor, G. (2011). Open Science in Practice: Researcher Perspectives and Participation. International Journal of Digital Curation, 6(1), 199–213. Retrieved from https://edtechbooks.org/-MSIv

 

Previous Citation(s)
International Journal of Open Educational Resources, Vol. 1, No. 2, Spring/Summer 2019.
Stephen Downes

National Research Council of Canada

Stephen Downes is a specialist in online learning technology and new media. Through a 25-year career in the field, Downes has developed and deployed a series of progressively more innovative technologies, beginning with multi-user domains (MUDs) in the 1990s, open online communities in the 2000s, and personal learning environments in the 2010s. Downes is perhaps best known for his daily newsletter, OLDaily, which is distributed by web, email and RSS to thousands of subscribers around the world. As the originator of the Massive Open Online Course (MOOC), he is a leading voice in online and networked learning, and has authored learning management and content syndication software.

Downes is known as a leading proponent of connectivism, a theory describing how people know and learn using network processes. Hence he has also published in the areas of logic and reasoning, 21st century skills, and critical literacies. Downes is also recognized as a leading voice in the open education movement, having developed early work in learning objects to a world-leading advocacy of open educational resources and free learning. Downes is widely recognized for his deep, passionate and articulate exposition of a range of insights melding theories of education and philosophy, new media and computer technology. He has published hundreds of articles online and in print and has presented around the world to academic conferences in dozens of countries on five continents.
