AI Use in College Teaching: New Data

Source: npr.org

Published on October 2, 2025

AI in Higher Education

G. Sue Kasun, a professor at Georgia State University, turned to generative AI this summer to brainstorm ideas for a new course. A professor of language, culture, and education who teaches future language instructors, Kasun used Gemini, Google's generative AI chatbot, to generate ideas for readings and activities for a course on integrating identity and culture into language education. The AI suggested activities such as generating an image or writing a poem, she said. Kasun also uses Gemini to develop grading rubrics, checking the AI's output for accuracy and alignment with her learning objectives. She says it saves a lot of time.

Kasun is among a growing number of faculty in higher education who are integrating generative AI models into their work. According to a recent national survey by Tyton Partners, approximately 40% of administrators and 30% of instructors use generative AI on a daily or weekly basis. This is a significant increase from the spring of 2023, when the figures were just 2% and 4%, respectively.

AI's Impact on Curriculum

New research from Anthropic, the company behind the AI chatbot Claude, indicates that instructors around the world are using AI for a range of tasks: developing curricula, designing lessons, conducting research, writing grant proposals, managing budgets, grading, and creating interactive learning tools. Drew Bent, the education lead at Anthropic, noted that education was one of the top use cases for Claude. Those observations led to an earlier report on how university students use the chatbot, as well as the recent research focusing on how professors use it.

Anthropic's report analyzed roughly 74,000 conversations between Claude and users with higher-education email addresses over an 11-day period. Using an automated analysis tool, the company found that 57% of those conversations related to curriculum development. Bent highlighted that professors were using Claude to build interactive simulations for students, such as web-based games.

AI for Research and Admin

The second most common use of Claude among professors was academic research, accounting for 13% of conversations. Educators also used the chatbot for administrative duties, including creating budget plans, drafting recommendation letters, and developing meeting agendas. According to Anthropic's analysis, professors tend to automate routine tasks, such as those that are administrative or financial in nature. By contrast, Bent said, using AI for teaching and lesson design was more of a collaborative effort.

Anthropic published its findings but did not release the full data behind them, including how many professors the analysis covered. The research also captures a specific moment in time; Bent suggests that analyzing a different period, such as one in October, could yield different results. Grading student work accounted for approximately 7% of the conversations Anthropic analyzed.

When educators used AI for grading, Bent said, they often automated significant parts of the process. Anthropic also partnered with Northeastern University to survey 22 faculty members about their experiences with Claude. Those faculty rated grading student work as the task the chatbot handled least effectively. It remains unclear whether assessments produced by Claude actually factored into student grades and feedback.

Concerns and Considerations

Marc Watkins, a lecturer and researcher at the University of Mississippi who studies the effects of AI on higher education, expressed concern about Anthropic's findings. He fears a scenario in which students use AI to write papers while teachers use AI to grade them, and questions what the purpose of education would be in that case. Watkins also worries that AI use could devalue professor-student relationships; he opposes using AI to automate tasks like writing emails, drafting letters of recommendation, grading, or providing feedback.

Kasun, the Georgia State professor, also believes professors should not use AI for grading. She wishes colleges and universities would offer more support and guidance on how to use the technology effectively. Bent suggests that tech companies should collaborate with higher education institutions, adding that companies dictating to educators what to do is not the right approach.

Educators and those in the AI field, such as Bent, acknowledge that the decisions being made now about incorporating AI into college and university courses will have long-term effects on students.