Artificial intelligence (AI) has become a hot topic within many professions in terms of its transformational implications for practice. Interest in AI has grown dramatically with the emergence of ChatGPT, a conversational generative AI tool trained on vast amounts of data and capable of creating content. In some areas, such as medicine, it is perhaps easier to see the benefits of a conversational tool that can access vast amounts of information and assimilate it into a coherent representation and possible diagnosis. However, in professions such as education the implications are not so clear. A core question many educators are asking is what impact AI and tools such as ChatGPT will have on deep and meaningful learning and, specifically, on educational communities. The focus here is to explore the potential benefits and threats of generative AI in creating and sustaining communities of inquiry that support critical thinking, collaborative discourse and deep learning.
Generative AI raises the question of what constitutes a quality educational experience that goes beyond individuals assimilating information without critical analysis. Accessing information efficiently is not the issue in a connected digital world. What has not been explored sufficiently are issues of critical thinking and collaboration (Bozkurt & Sharma, 2023). This is a crucial point, as true knowledge is constructed and confirmed through reflection and critical discourse guided by collaborative inquiry dynamics that draw on shared metacognitive awareness and strategies. This also speaks to facilitating insight and creativity in the educational process. Therefore, the question I have is what effect AI will have on online learning grounded in communities of inquiry. More specifically, what influence will a powerful information access tool such as generative AI have on collaborative-constructivist approaches in dynamic online learning environments? The practical corollary is how we integrate conversational AI into a collaborative learning dynamic. Beyond immediate support in terms of information retrieval, the challenge is using this resource to enhance critical thinking and collaborative learning.
Deep learning in AI has a very different meaning from the deep and meaningful learning at the core of the Community of Inquiry (CoI) framework. AI deep learning uses vast amounts of information to train algorithmic functioning. For individuals in a community of inquiry, deep learning depends upon collaborative discourse and the discipline of scientific inquiry. While AI tools can filter massive amounts of information to help learners identify and explore possible connections that otherwise could be missed, there is no easy way to validate AI results because sources are not transparent. AI is only as good as the data sets it is trained on. This necessitates employing reflection and discourse to consider plausibility as well as alternative perspectives and ideas. Generative AI alone does not assess and manage inquiry by providing the metacognitive awareness and strategies needed to direct the learning process. The challenge for educators is to understand the enormous power of AI in the context of facilitating and directing collaborative inquiry that stimulates imagination and creative discovery.
Articles addressing AI and online learning are beginning to emerge. One of these argues that ChatGPT has the “ability to engage in dynamic, context-aware conversations that can facilitate a more engaging and interactive learning environment, thereby enhancing students' critical thinking and problem-solving skills” (Kilinç, 2023, p. 206). At the same time, we are inherently vulnerable to confirmation bias and the seduction of conspiracy theories, and we are now also open to assault from the power of AI misdirection and misinformation (https://www.thecommunityofinquiry.org/editorial10). For this reason, educators must double down on critical thinking and discourse. Much work is required to appreciate exactly how best to achieve engaging learning environments using tools such as ChatGPT. To this end, Kilinç (2023) explores in detail how integrating ChatGPT could contribute to distance science education.
Most importantly, the Kilinç article also explores the benefits and limitations of ChatGPT-assisted learning. In this regard the article concludes that “Limitations and hazards associated with using ChatGPT in education include the potential for perpetuating biases, producing, and spreading misinformation, positioning itself as the ultimate epistemic authority without sufficient evidence, …” (p. 207). All of this leads Kilinç to suggest that educators need to “emphasize the need to harness technology, cultivate a sense of community, and encourage educators to pursue continual professional development” (Kilinç, 2023, p. 230). This speaks to the essence of this editorial. From my perspective, the greatest educational risk is to reduce the educational transaction to accessing easily digestible information without the purposeful discourse, focused on the critical analysis of biases and disinformation, that is necessary for truthful understanding. We must not undermine critical reflection and discourse by over-emphasizing the power of information assimilation through AI at the cost of the challenging process of constructing and confirming knowledge. The overarching risk is AI slowly taking over important decision making from the educational professionals responsible for deep and meaningful learning processes and outcomes. I would argue that the best prophylactic is the community of inquiry approach.
In a review of online learning research, it was noted that intelligent tutoring systems have played the most important role (Hwang, Tu & Tang, 2022). From the perspective of traditional distance education grounded in independent study, there could be a strong argument for intelligent tutoring systems. That is, in the context of independent study, efficiently personalized learning could admittedly offer benefits. However, personalization and efficiency do not address the crucial role of constructive collaboration. This concern is evident in another article exploring the boundaries of AI when it states that “… generative AI requires enhancing the scope of current educational roles or adopting new ones such as facilitators of learning, curators of learning resources, designers of learning experiences, and assessors of learning” (Bozkurt & Sharma, 2023, p. i). These responsibilities must be clear as we move forward in adopting AI tools. A collaborative-constructivist approach to a community of inquiry speaks to maintaining teaching presence, starting with professional development and instructional design that emphasize engagement and critical inquiry.
Another topic intimately connected to the power of AI and worth considering is learning analytics. It would seem to me there is considerable overlap between AI and learning analytics regarding the facilitation of cognitive presence in an educational community. In this regard, a study found that performance assessment and prediction were two of the main functions of AI in online higher education (Ouyang, Zheng & Jiao, 2022). Moreover, it was found that several studies “reported positive effects of AI application in improving online instruction and learning quality, including a high quality of AI-enabled prediction” (Ouyang, Zheng & Jiao, 2022, p. 7908). They speak to the potential of learning analytics to assess learning processes and precisely predict results that can improve engagement. AI could also be used to train learning analytic tools to identify metacognitive strategies and provide guidance in the monitoring and management of collaborative inquiry. For background on learning analytics and the CoI framework, you may want to visit previous editorials on this topic (https://www.thecommunityofinquiry.org/editorial33; https://www.thecommunityofinquiry.org/editorial14).
The final (and more rhetorical) question I pose is how generative AI might contribute to the scholarship and research of online learning. I say rhetorical as I am not one who believes in the ability to accurately or usefully predict too far into the future. However, the first obvious prediction is that AI will greatly assist in collecting and integrating current information, enhancing the ability of educators to offer new ideas and insights both theoretically and pragmatically. Many are concerned that future AI developments and scenarios will begin to replace human decision making and input. Hopefully this is not where AI will ultimately lead us. To avoid such an apocalyptic scenario, we must all become skeptics and challenge questionable options.
On the risk side of AI, what worries me is reflected in an article that proposes a model of a new university using AI in which the role of academic staff changes "dramatically" (Kabashkin, Misnevs & Puptsau, 2023). The authors argue that AI could take on the functions of a teacher: developing the structure and content of programs and courses; organizing learning; evaluating and providing feedback; and providing recommendations for personalized learning (p. 215). Such ceding of academic direction greatly concerns me. While some of this may provide valuable decision-making information for a competent instructor and engaged learners, it seems to me that this gets too close to the edge (if not crossing it) in terms of decision-making and educational control. In the extreme, this speaks to the existential threat of AI, where human decision making is replaced by AI. Goal-directed, autonomous AI may become an existential threat, but my immediate concern is manipulated audio and video (deep fakes) that can undermine larger societal norms and values.
Notwithstanding the serious threats and challenges, AI cannot be ignored. There is no question that generative AI will change how we learn. AI has the potential to support online learning communities by curating resources and presenting results in natural language. However, there is a risk of over-reliance on non-transparent AI technology and on its ability to assess disinformation. In this regard, an immediate concern is identifying manipulated audio and video. What happens when databases are populated by deep fakes? Hopefully AI will be used to identify deep fakes and reduce disinformation at the outset. As AI becomes embedded in educational processes and decision making, I also wonder whether we will become too reliant on and trusting of AI to identify disinformation and deep fakes. Regardless, we must continue to engage in critical analysis of ever-present implausibility and misinformation.
Conclusion
The challenge, from my perspective, is to continue to create the conditions where we strive educationally to go beyond information assimilation and facilitate collaborative inquiry and critical thinking. The challenge is to critically assess the strengths and limitations of generative AI from the perspective of a worthwhile educational experience that must be grounded in critical reflection and discourse. This inevitably goes beyond the capability of generative AI to access and summarize enormous amounts of information. Learners must be obligated to test the plausibility and accuracy of AI algorithmic summaries and interpretations. Educators must be aware of the quality of AI databases and direct students to AI tools based on reasonably reliable data sets. Ultimately, however, we as educators are charged with developing skeptics, to immunize us as best we can against mis- and disinformation. A primary reason is that it is very difficult to educate individuals after they have become trapped in a disinformation bubble or seduced by conspiracy theorists. We must recognize the place of a skeptical mindset and the value of communities of inquiry in facilitating critical reflection and discourse.
Educators are being challenged to rethink the essence of an educational experience considering the inevitable influence of generative AI tools. We must constantly question the educational value of adopting powerful AI tools. Generative AI has the potential to free learners largely from the labor of basic research and allow them to focus on the core and essential questions that facilitate insight and understanding. In this way AI could make the educational process more creative and productive. However, at this point we have many more questions and concerns than answers. The bottom line is that we do not know the full range of AI capabilities and threats. Powerful technologies bring great possibilities but inherently also bring serious risks. The ultimate risk, however, is to cede human thinking and decision making to AI. Educators must be ever-present and ready to intervene and direct the inquiry process in thoughtful ways.
Bozkurt, A., & Sharma, R. C. (2023). Challenging the status quo and exploring the new boundaries in the age of algorithms: Reimagining the role of generative AI in distance education and online learning. Asian Journal of Distance Education, 18(1), i-iii. https://doi.org/10.5281/zenodo.7755273
Hwang, G.-J., Tu, Y.-F. & Tang, K.-Y. (2022). AI in Online-Learning Research: Visualizing and Interpreting the Journal Publications from 1997 to 2019. International Review of Research in Open and Distributed Learning, 23(1), 104–130. https://doi.org/10.19173/irrodl.v23i1.6319
Kabashkin, I., Misnevs, B., & Puptsau, A. (2023). Transformation of the University in the Age of Artificial Intelligence: Models and Competences. Transport and Telecommunication Journal, 24(3), 209-216. https://doi.org/10.2478/ttj-2023-0017
Kilinç, S. (2023). Embracing the Future of Distance Science Education: Opportunities and Challenges of ChatGPT Integration. Asian Journal of Distance Education, 18(1), 205-237. Retrieved from https://www.asianjde.com/ojs/index.php/AsianJDE/article/view/721
Ouyang, F., Zheng, L. & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Education and Information Technologies, 27, 7893–7925. https://doi.org/10.1007/s10639-022-10925-9
D. Randy Garrison, Professor Emeritus, University of Calgary