5 Key Policy Considerations for Regulating AI in Classrooms
When crafting AI-use policies for an educational institution, consider these key issues.
Institutions across all industries are grappling with how to address the continued growth of artificial intelligence (AI) tools in their environments. Educators are no different, as AI tools have already altered the fabric of the classroom and the faculty office.
The current edtech milieu is about as fluid as one could imagine, with new AI tools emerging daily. Since ChatGPT, the poster child for generative AI, was unveiled about 14 months ago, some educators have tried to ban its use. For example, NYC Public Schools originally banned AI, but the ban did not work well and the policy was rescinded after only a few months.
In an executive order issued in October 2023, the Biden Administration outlined AI guidance for schools, asking the U.S. Dept. of Education to provide guidance for classrooms and to weigh equity and privacy issues, and recommending that AI tools include watermarks to identify AI-generated content. The U.S. Dept. of Education had already published guidance in May 2023, addressing the need to keep human decision-making within automated processes, to ensure equity, and to train AI tools on quality data.
At this stage, it is important to ensure that existing institutional policies address the issues raised by the use of AI tools. Teachai.org provides a sample set of AI recommendations for educators that can inform policy development. It is better to revise existing policies in light of AI than to build a standalone AI policy that may or may not fully align with them.
5 Key AI Policy Considerations
- Do not simply ban the use of AI in the development of assignments. Tools such as MS Office and Grammarly have AI embedded, so a blanket ban would unnecessarily prohibit many common tools.
- Make sure that any AI tools comply with FERPA and ADA regulations. When using AI tools to develop student-specific items, such as personalized learning plans or IEPs, do not include personally identifiable information.
- Require instructors to be clear about when and how students can and cannot use AI tools. For instance, an instructor might allow students to use AI to develop an outline but not to draft the narrative. Instructors should spell out the level of use they expect at the course level, or assignment by assignment if appropriate. This is especially important if the institution does not have up-to-date policies that address AI use. Joel Gladd offers sample syllabus language for educators to consider at several levels of AI integration.
- Ensure that there is a human decision-making step in any automated AI processes. Clear evidence exists that AI detectors tend to discriminate against non-native English speakers, often flagging their work as AI-derived. Make sure that the use of AI does not detract from creating an equitable environment for all.
- Consider how to make the use of AI throughout the institution as transparent as possible. Also consider identifying AI-generated materials that originate outside the classroom.
As educational institutions navigate the rapidly evolving landscape of AI, a thoughtful approach is required to harness the benefits while mitigating potential risks and doing so transparently and responsibly. The journey toward AI integration in education is complex, but with careful policy development and consideration of key factors, it can lead to a more efficient, equitable, and innovative learning environment for all.
Steve Baule served as a technology director, high school principal, and superintendent for 20+ years in K-12 education. He is currently the director of Winona State University’s online educational doctorate program in Minnesota.