AI and assessment

Jens Tofteskov

Ida Marie Malmkjær

Learning consultants Jens Tofteskov and Ida Marie Malmkjær from Teaching & Learning discuss what role generative AI should play in the assessment of a written product, now that generative AI is permitted in final assignments at CBS submitted after February 1, 2024. According to CBS guidelines, generative AI may be used in a written final assignment if its use is declared and referenced correctly – just as AI may be used in courses where the course description lists it as a permitted tool – and an assignment therefore may not be assessed negatively solely because AI was used. But where does the distinction lie, and when is AI use evidence of independent, original work – and when is it the opposite? Is it acceptable to grade an assignment higher if AI was not used? Or are we moving towards an embrace of generative AI, where we will assess work that uses AI more highly than work that does not?

Note that the following is solely an expression of Jens’ and Ida’s subjective opinions, and the discussion should not and must not form the basis for assessing exams in which AI is used. CBS guidelines for the use of AI must be adhered to at all times, by students as well as teachers and assessors.


Jens: Should a thesis in which students have used AI be assessed according to different criteria than a thesis in which AI has not been used? The general answer is no. If students have used AI as described in the guidelines, they have simply used a permitted working tool, in the same way as if they had used, for example, a statistical program. It is important to note, however, that – just as with a statistics program – the use of AI is not “neutral” in terms of assessment. If AI output is used directly, without commentary or reflection, that should affect the assessment negatively, just as it would if students presented results from a statistical program without interpreting and evaluating the data produced. Furthermore, it is obviously crucial that the thesis’ methodology section clarifies where and how generative AI has been used. If the use of AI is not fully and adequately described and disclosed, it will be treated as attempted exam cheating and handled by CBS’ Legal department.

Ida: I fundamentally agree, of course, that students’ work should be assessed according to the applicable rules – and if AI use is permitted in, for example, master’s theses, then as an examiner or censor one should not let the use of AI influence the assessment. But I do worry about whether it works that way in practice. We must admit that artificial intelligence has the potential to reform the way we educate, and that it has stirred the pot. It quickly becomes a question of how highly one values originality and independence in problem-solving. Is an assignment independent if Copilot has generated part of the content? And if not – is that a problem? To some, it may seem redundant to have to gather factual material about a theory or academic area from books and journals when that task can be delegated to AI tools such as Copilot. But isn’t some learning lost in that gathering and its transfer into an actual text, if artificial intelligence generates what one would previously have formulated oneself? Even though I am aware that an assessor legally must not place (negative) weight on the use of AI (provided it has been used and referenced correctly), I believe it will be difficult not to let it influence your professional judgement if, for example, you are faced with two assignments: one a good product written without AI, the other an equally good product written with it. You say that if AI output is used directly, without commentary or reflection, it should influence the assessment negatively – but surely not if generative AI is permitted and is referenced in the correct manner? If the rules are broken – for example, if the use is not referenced correctly – then it is exam cheating and should be sanctioned accordingly. In that case, we are no longer just talking about a poor assessment of the assignment.

Jens: I would think that many share the view you describe. But I believe it is unavoidable that AI must be treated no differently than other elements that can enrich a thesis. If you use quotes without comment or reflection, it should influence the assessment negatively. If you use data without comment or reflection, it should influence the assessment negatively – and the same goes for AI. That is also why it is important that the assignment’s methodology section explains how AI has been used and what influence it has had on the process and the result. If the use of generative AI has been referenced according to the rules, the assessment must be based on the learning objectives, and no account should be taken of how much or to what extent generative AI has been used. That must be the conclusion.

Ida: It is of course important that AI use is cited and declared correctly, and I agree that the assessment must not be influenced by the use of generative AI if it has been used according to the current rules. Beyond that, I hope that in the future we will look at how AI can enrich our degree programmes and the students’ learning – and hopefully move away from viewing the use of AI as something inherently negative and at odds with independent work.
