Letters

Cultivate moral values to ward off AI concerns in academia

LETTERS: Researchers have expressed apprehension about artificial intelligence (AI) affecting academic integrity in universities.

They are concerned about students misusing AI tools when completing assignments, and about the inappropriate use of AI in handling research data.

Instructors with limited experience in assessing academic papers or detecting academic dishonesty may struggle to discern instances where students use AI to cheat.

Several universities have acknowledged the potential of the ChatGPT programme, as seen in the University of Tasmania's declaration on the use of AI by its students and staff.

"You can use generative AI to learn, just like you would study with a classmate or ask a friend for advice.

"You are not permitted to present the output of generative AI as your own work for your assignments or other assessment tasks.

"This constitutes an academic integrity breach. In some units, a unit coordinator may explicitly allow or require the use of AI in your assessment task."

In contrast, the University of Hong Kong has prohibited its students from using AI.

To address cases of cheating or misuse of ChatGPT, as well as other AI technologies, it is necessary to foster moral character in students.

One way to cultivate students with strong moral values and critical thinking skills is through good teacher role modelling, leadership development opportunities, and training in self-awareness, ethics and decision-making.

This approach offers a constructive response, rather than focusing solely on prevention.

ASSOCIATE PROFESSOR DR 
NURKHAMIMI ZAINUDDIN

Faculty of Major Language Studies, Universiti Sains Islam Malaysia


The views expressed in this article are the author's own and do not necessarily reflect those of the New Straits Times
