As artificial intelligence tools become increasingly integrated into everyday life, higher education is navigating a new frontier: professors using AI to prepare lectures, generate feedback, and even grade assignments. What began as concern about students relying too heavily on AI has come full circle, with students now scrutinising how their instructors use the same technology.
In February, Ella Stapleton, a final-year student at Northeastern University, discovered a curious note in her professor’s lecture slides on leadership models. The material included an unedited prompt to ChatGPT asking the AI to “expand on all areas”, followed by lists of leadership traits with basic definitions and examples, typical hallmarks of generative AI output. The discovery prompted her to examine the rest of the course material, where she found visual and textual inconsistencies suggesting a deeper reliance on AI.
Ethical Concerns and a Tuition Refund Request
Feeling betrayed by what she saw as academic hypocrisy, since the syllabus strictly prohibited students from using AI tools, Stapleton lodged a formal complaint with the business school and demanded reimbursement of the course tuition, which made up a significant portion of her semester fees. Though the university acknowledged the incident, it denied her refund request after a review.
Stapleton’s experience underscores a broader issue now rippling through higher education institutions. While students initially faced scrutiny for using AI to complete assignments, they are now the ones raising concerns about the very educators who once enforced strict rules against it.
Professors Turn to AI to Manage Workloads
Many educators argue that using AI tools such as ChatGPT, Perplexity, and Gamma is not about replacing instruction but enhancing it. Some lecturers, for instance, use AI to structure lesson plans or to return feedback more quickly. With ever-increasing class sizes and administrative demands, AI becomes a practical way to manage time more effectively.
According to a survey by Tyton Partners, the share of higher-education instructors who frequently use generative AI nearly doubled over the past year. Universities are beginning to take notice, and AI companies such as OpenAI and Anthropic now offer dedicated platforms tailored for academic settings.
While this shift is helping to alleviate faculty workloads, it is also producing inconsistencies in how courses are delivered. Students have learned to spot telltale AI phrasing and formatting quirks, and are increasingly challenging the authenticity of their learning experiences.
Mixed Reactions from Students and Faculty
One Southern New Hampshire University student discovered her professor had asked ChatGPT to both grade her paper and provide pre-formulated positive feedback. Although the professor defended the practice by stating the school permitted AI assistance, the student felt the interaction devalued her academic work. She eventually transferred to a different university after encountering similar situations with other instructors.
Universities are beginning to implement frameworks to regulate AI usage. Southern New Hampshire University, for example, advises faculty not to use tools like ChatGPT to replace personal feedback. It maintains that AI should only enhance—not supplant—human judgement and interaction.
Some professors, such as those at Harvard and the University of Washington, have customised AI tools to handle students’ basic queries or assist with writing. These tools free professors up for more meaningful student engagement, including mentoring and hands-on problem-solving sessions. However, questions remain about the long-term implications of such practices for the development of future educators and researchers.
Policy Catch-Up and the Push for Transparency
Despite the increasing adoption of AI in academia, clear policies have been slow to emerge. In many institutions, the line between ethical use and overreliance remains blurry. Some educators believe that using AI-generated content for teaching is no different from relying on third-party resources or educational publishers. Others stress the need for transparency and proper review to ensure that AI-enhanced materials meet academic standards.
Northeastern University has since issued formal guidance requiring that AI usage be attributed and that AI-assisted materials be reviewed for factual accuracy. The change came too late for Stapleton, who graduated without a refund, but her complaint may have contributed to an institutional reckoning.
The case raises vital questions: Should students be informed when AI is used in their education? Is it acceptable for professors to use tools they ask their students to avoid? As AI continues to redefine teaching methods, universities must strive for clarity, fairness, and, above all, transparency.