Artificial Intelligence and Institutional Research: Reflecting on the Role of IR in the Ethical and Responsible Use of AI Technologies


Our world has experienced many technological innovations, perhaps rivaled in number only by the diverse range of human reactions to them. Education about the features, strengths, limitations, and appropriate uses of innovations can help provide a blueprint for effective implementation. IBM defines Artificial Intelligence (AI) as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” The term artificial intelligence has existed for several decades, evoking reactions that include uncertainty, enthusiasm, opposition, and fear. Educators have the opportunity to alleviate fear, dispel misconceptions, and influence how we collectively use AI. Within the field of education, Institutional Research is uniquely positioned to inform safe, ethical, and appropriate uses of data with AI, modeling how educators, students, and society can leverage these resources.

Earlier this year, in a conversation with colleagues about artificial intelligence, I declared that “we do not want to be the Stanford Prison Experiment.” As a former Institutional Review Board chairperson, I know that the abuses in that study helped initiate many of the human subject protections present in research today. As an Institutional Researcher and educator, I have maintained a steadfast commitment to protecting the privacy of student education data under the Family Educational Rights and Privacy Act (FERPA) and the North Dakota Student Data Privacy Bill of Rights. Using AI technologies without considering these student data privacy safeguards could jeopardize students and their data. The NDUS Institutional Research office follows the Code of Ethics for Institutional Research professionals and the Ethical and Responsible Use of Analytics in Reporting code so that we remain steadfast in our responsibility to protect the privacy of student data, including when using AI tools.
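
What does such a safeguard look like in practice? One simple example is stripping direct identifiers from text before it is submitted to any external AI tool. The Python sketch below illustrates the idea; the identifier patterns are hypothetical assumptions that would need to be tailored to an institution’s own data elements, so treat it as a minimal sketch rather than a complete de-identification solution.

    import re

    # Hypothetical patterns for direct identifiers; a real institution would
    # tailor these to its own student ID formats and data elements.
    PATTERNS = {
        "student_id": re.compile(r"\b\d{7}\b"),                    # assumed 7-digit campus ID
        "email":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
        "ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # U.S. Social Security numbers
    }

    def scrub(text: str) -> str:
        """Replace direct identifiers with placeholders before the text
        leaves institutional control (for example, in an AI prompt)."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    note = "Student 1234567 (jane.doe@example.edu) requested a transcript."
    print(scrub(note))
    # Student [STUDENT_ID REDACTED] ([EMAIL REDACTED]) requested a transcript.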

Eager to learn more about managing artificial intelligence resources, I attended Minot State University’s inaugural AI and Data Summit on April 29 of this year. Student teams showcased research projects that used artificial intelligence and data science to increase task efficiency and allow humans to accomplish endeavors previously deemed impractical. The event also featured interdisciplinary speakers reflecting upon the implications of artificial intelligence tools, especially within education and medicine.

IBM defines Generative AI (Gen AI) as “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” Framing artificial intelligence in this way places the emphasis on what these technologies can help accomplish or produce, rather than on adopting technology for its own sake. When viewed as merely another resource or tool, it becomes easier to build overall literacy about these technologies, thereby reinforcing best practices for use and reducing fear of potential abuses.

To further explore Generative AI capabilities, I enrolled in the June 2024 MIT Professional Education three-week session of “Applied Generative AI for Digital Transformation.” Course content featured a useful blend of technical, theoretical, and ethical considerations for implementing Generative AI tools aimed at using artificial intelligence to create something new. Course activities provided opportunities to reflect upon appropriate uses of AI technologies and to experiment with AI tools on specified tasks such as video transcription, translation, summarization, and transforming content into various formats. Experts discussed how artificial intelligence is shifting technology-sector leadership toward skills such as relationship building, strategic planning, and project management. Ethical considerations for using AI technologies in a range of personal assistant capacities were explored, along with how to keep humans appropriately in the loop. The course examined the development and training of the Large Language Models (LLMs) upon which artificial intelligence tools rely, along with the implications of the various biases that may be present within those models. The course also highlighted the value of prompt engineering: thoughtfully designing interactions with AI tools in order to get the best possible results, as the sketch below illustrates.
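
To make prompt engineering concrete, here is a minimal Python sketch using the OpenAI client as one example of such a tool. The model name, input file, and instructions are illustrative assumptions, not a prescription; the point is that the role, task, audience, and constraints are stated explicitly rather than left for the model to guess.

    from openai import OpenAI  # one example client; comparable tools work similarly

    client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

    # A structured prompt: role, task, audience, and constraints are explicit.
    prompt = (
        "You are an editor for an institutional research newsletter.\n"
        "Task: summarize the report below in no more than 120 words.\n"
        "Audience: campus leaders without a statistics background.\n"
        "Constraints: plain language, no student-level details, and flag any\n"
        "figure you are unsure about rather than guessing.\n\n"
        "Report:\n" + open("report.txt").read()  # illustrative input file
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

A vague, single-sentence prompt tends to produce a generic summary; spelling out the role, audience, and constraints gives the tool something concrete to honor.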

I also had the opportunity to attend the inaugural North Dakota AI Conference, “Being Human and Working in the Age of AI,” held on September 26 at Valley City State University, where leaders from education and other fields discussed the relationship between humans and artificial intelligence tools. Topics included the impact of AI on the structure of postsecondary learning, incorporating AI into curriculum design, the use of AI in research with student data, managing the implications of non-institutionally sanctioned information technology resources, and leveraging artificial intelligence tools to work more efficiently.

These learning opportunities have underscored both the challenges and the opportunities of artificial intelligence tools. We have witnessed many examples of artificial intelligence in recent decades, including internet search, navigational systems, predictive text, and talk-to-text technologies. While comfort with these technologies has increased as they have become more readily accessible, fear and uncertainty remain. Much as increasing data literacy has strengthened our work in institutional research, education about how artificial intelligence systems work can help reduce that fear and uncertainty. So can policies and best practices that promote ethical and responsible conduct in model creation, training, and the interpretation of outputs. Each data breach increases anxiety over the potential for future incidents and decreases individuals’ willingness to use tools that require their personal information. Likewise, readily apparent bias, ethically questionable responses, and blatant errors in the results of artificial intelligence prompts reduce confidence in the tools producing those outputs. To achieve the full potential of artificial intelligence tools to enhance lives, we must do whatever we can to avoid these negative outcomes.

As trusted representatives and data stewards with knowledge of data and research methodologies, Institutional Research practitioners have a unique opportunity to influence education policy makers regarding practices for ensuring data privacy and conducting ethical research. Likewise, Institutional Research staff are typically engaged in data governance conversations about data collection, storage, access, and reporting. Strong data governance procedures go a long way toward ensuring appropriate role-based data access controls (sketched below) and establishing accountability for responsible data use, preventing data from being used inappropriately or from reaching individuals who should not have access to it. Together, we can help establish best practices for data security and advocate for appropriate ethical safeguards in the use of artificial intelligence tools. Through these actions, we can help ensure that we do not become the next Stanford Prison Experiment.
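
As a closing illustration of role-based access controls, consider the minimal Python sketch below. The roles, column names, and data are hypothetical; in practice these rules are enforced in database permissions and reporting platforms rather than in application code, but the underlying logic is the same: each role sees only the fields its responsibilities require.

    import pandas as pd

    # Hypothetical role-to-column mapping; real enforcement belongs in the
    # database and reporting layers, not in application code alone.
    ROLE_COLUMNS = {
        "ir_analyst":    ["student_id", "term", "credits", "gpa"],
        "dept_chair":    ["term", "credits", "gpa"],  # no direct identifiers
        "public_report": ["term", "credits"],         # aggregate-safe fields only
    }

    def view_for(role: str, df: pd.DataFrame) -> pd.DataFrame:
        """Return only the columns the given role is authorized to see."""
        allowed = ROLE_COLUMNS.get(role)
        if allowed is None:
            raise PermissionError(f"No data access defined for role: {role}")
        return df[allowed]

    records = pd.DataFrame({
        "student_id": [101, 102],
        "term": ["Fall 2024", "Fall 2024"],
        "credits": [15, 12],
        "gpa": [3.4, 3.8],
    })
    print(view_for("dept_chair", records))  # identifiers never reach this view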

============================================================================

Dr. Gregory Carlson is an Institutional Researcher – Special Projects with the North Dakota University System. Working closely with the Department of Public Instruction and the Information Technology Department, he manages projects, frames data, and ensures compliance with requirements for statewide K-12 public school Every Student Succeeds Act (ESSA) accountability reporting through the Insights interactive public dashboards. He assists in addressing accountability data questions from education stakeholders and supports the use of educational data to drive continuous improvement of student learning within North Dakota’s public K-12 schools.