Artificial Intelligence (AI) in Curriculum and Assessment Hub

Following the increased awareness of the role of artificial intelligence (AI) in education and specifically in curriculum design, the AI in Curriculum and Assessment Working Group has drawn together expertise from across the university to develop a position statement and guidance on the use of AI for staff and for students at USW.

This hub provides information and signposting to resources created internally for colleagues and for students, as well as curated guidance from external sources, to enable colleagues to understand the approach to AI at USW and to consider the impact of AI in their curriculum and assessment design.

In providing these resources we aim to support educators to feel confident in using AI effectively in promoting learning, and to encourage colleagues to view AI as a tool that can be used to support all students, independent of background and socio-demographic characteristics, to be successful in their studies and in their future work.

What is GenAI (such as ChatGPT)?

Generative Artificial Intelligence (AI) can automatically generate multi-modal content (e.g., text, music, images) from minimal prompts. The best-known current example is ChatGPT (OpenAI); however, there are also tools we often do not realise are AI-powered, such as Grammarly, Bing Chat, Google Translate, and Google Bard, and these are becoming more and more embedded in our everyday lives.

The rapid development of these tools presents substantial challenges for Higher Education, specifically in terms of assessment and maintaining academic integrity. They also offer valuable opportunities to enrich and personalise the learning experience.

[Image: a screenshot of an example generative AI chatbot]

What can it do and what are the benefits?

A good way to learn about this is to try out an AI tool yourself.

Many of these tools offer free trials. Simply start typing your request or question and the AI will answer back. Be careful not to share any personal or institutional data with these tools, as doing so could breach GDPR and data protection policies.

ChatGPT and other generative AI tools, if used appropriately, can save time by providing assistance, information, and ideas for many work or study tasks.

Guidance for academic colleagues

Generative AI creates new opportunities to develop educational content and teaching activities, provide students with an interactive and engaging learning experience, and improve learning outcomes through adaptive and personalised means of teaching.

AI may be used to:

  • generate ideas and drafts for curriculum design, module outlines, lesson plans and teaching activities.
  • create assessment questions, topics for group projects, essay and exam questions, and sample answers at different performance levels.
  • produce teaching materials, such as case studies, code samples, article summaries, and translations.

USW develops graduates who are ready to meet the challenges of the real world. As such, you will need to consider whether your curriculum design should change to teach knowledge and skills that are relevant in an AI-driven economy. You should reflect on your module content, learning objectives, teaching plans, and learning activities, and where necessary adapt these to take account of generative AI. Activity design could focus on higher-order cognitive skills, such as concept acquisition and application, synthesis of evidence, critical and creative thinking, systematic decision-making, and solving complex and practical problems. These skills can be developed most effectively through active learning pedagogies, including:

  • Team projects
  • Scenario-based learning
  • Challenge-based learning
  • Simulation
  • Co-creation
  • Cross-curricular and inter-disciplinary working

Where possible the role of AI in an assessment should be considered at the outset and the assessment designed in such a way to make the best use of AI to support student learning and the development of digital skills.

Reconsidering your assessments

Some assessment types are more susceptible to the inappropriate use of AI than others. For example, an essay that focuses primarily on students demonstrating knowledge is at greater risk than an assessment comprising teamwork activities, critical analysis, personal reflection, and the application of learned skills. As such, engaging students in assessment through active learning is the best way to design in real-world, ethical use of AI and design out opportunities for AI misuse.

Students must understand that to maintain academic integrity their use of AI must be transparent and made clear to the person grading their work.

Certain courses may require students to use AI-generated content for certain assignments. In these cases, the assignment brief will always explicitly state that students may use AI, and will give guidance on how to write a proper declaration and acknowledgement of their use of AI. You should use this declaration to aid your academic judgement of the extent to which students have met the assessment criteria.

In practice, detecting AI-written text may prove challenging. If you suspect a student of inappropriate use of AI tools, you must proceed as you would in any other case of suspected academic misconduct.

AI is a valuable tool to support an inclusive curriculum; however, it is important to consider what barriers your students might face in accessing generative AI and to design your curriculum and assessments accordingly. This includes considering your students’ digital literacy and whether they have the knowledge and skills to access AI to support their learning in the first place.

We need to be aware that generative AI tools are not equally accessible to all users. Although ChatGPT is free of charge, it remains unavailable in certain countries and can be temporarily inaccessible at peak times. GPT-4, a more powerful, multimodal model, is restricted to fee-paying subscribers. The range of AI apps available is also expanding at a rate beyond our capacity to explore them for teaching and learning purposes, so there needs to be a way to level the playing field for students needing to engage with these rapidly developing technologies.

When planning for the use of AI tools in your teaching, consider beforehand what barriers students might face and design these out.

  • Choose free tools.
  • Identify and point students towards subject-specific AI tools.
  • Provide guidance on the expected use of the AI within the learning or assessment activity.
  • Scaffold the use of generative AI so that all students are supported to develop the skills to use AI within your context.
  • Ensure that all students can use the AI in the context and country where they are based.

Benefits for students

Generative AI can interact with students conversationally, providing personalised learning support and feedback. By adapting to students’ performance and adjusting the learning paths accordingly, AI tools can respond to individual learning needs and improve learning progress.

  1. 24/7 Availability: Generative AI is always ready to assist, ensuring students can access support whenever they need it, be it during late-night study sessions or on weekends.
  2. Scalability: Generative AI allows learning activities and assessments to be scaled easily, accommodating groups of any size.
  3. Self-Paced Learning: Students can progress at their own pace, ensuring no one feels rushed or left behind.
  4. Enhanced Engagement: Conversational AI interfaces can make learning interactive and engaging, keeping students motivated and interested.
  5. Adaptive Content: Generative AI recommends supplementary learning materials tailored to each student's unique learning style and pace, ensuring they receive the right resources at the right time.

Guidance for students on the appropriate use of AI for learning and assessment

Using AI tools to help with tasks such as idea generation or essay planning may be an appropriate use, though the requirements of the assessment must be considered. Where the use of generative AI tools is permitted within an assessment, students may be asked to explicitly share and reflect on the prompts they have used within a generative AI tool, the resulting outputs, and any modifications they have made before final submission.

Whilst AI can benefit students in planning and structuring work, it is not acceptable to use these tools to write the entire essay or any other assessment from start to finish. Students should also be aware that generative AI tools do not fact-check the information generated, therefore they cannot rely on the accuracy of such tools. If generative AI is used in any part of assessed work, it is the student’s responsibility to check all outputs generated by the AI to make sure that the information produced is current and correct and that it is acknowledged with its source appropriately.

To make sure your work complies with the standards expected by the university, you may find it helpful to work through the questions below, together with the guidance on complying with University policy that follows them:

  1. Did you create the submitted content yourself?
     Yes: go to Question 2. No: go to Question 6.
  2. Did you use technology to check spelling/grammar?
     Yes: go to Question 3. No: acceptable practice.
  3. Did you do anything else?
     Yes: go to Question 4. No: acceptable practice.
  4. Did you ask someone, or use technology, to proofread your work?
     Yes: go to Question 5. No: acceptable practice.
  5. Other than correcting spelling/grammar errors, did the person or technology you used make other changes to your work? For example:
       • inserted references to sources that you have not checked for accuracy, relevance, and validity;
       • made significant changes to the structure and content of your work without your input, and which you have not critically evaluated;
       • experimented with different writing styles or debugged code.
     Yes: might constitute academic misconduct. No: acceptable practice.
  6. Did a piece of technology write, or provide guidance on writing, aspects of your assignment?
     Yes: go to Question 7. No: go to Question 8.
  7. Have you cited any content generated by technology within your assignment appropriately?
     Yes: acceptable practice. No: might constitute academic misconduct.
  8. Did a person or essay-writing service contribute to or complete your assignment for you?
     Yes: might constitute academic misconduct. No: acceptable practice.
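The decision route above can also be sketched as a short function. This is an illustrative aid only, not an official USW tool: the function name and the True/False answer encoding are hypothetical, but the branching follows the questions above.

```python
def check_practice(answers):
    """Walk the academic-integrity decision questions.

    `answers` maps question numbers (1-8) to True (Yes) / False (No).
    Returns "acceptable" or "possible misconduct".
    Illustrative sketch only; not an official USW tool.
    """
    if answers[1]:                 # Q1: created the content yourself?
        if not answers[2]:         # Q2: used a spelling/grammar checker?
            return "acceptable"
        if not answers[3]:         # Q3: did anything else?
            return "acceptable"
        if not answers[4]:         # Q4: proofread by a person/technology?
            return "acceptable"
        # Q5: substantive edits beyond spelling/grammar corrections?
        return "possible misconduct" if answers[5] else "acceptable"
    if answers[6]:                 # Q6: technology wrote/guided aspects?
        # Q7: AI-generated content cited appropriately?
        return "acceptable" if answers[7] else "possible misconduct"
    # Q8: person/essay-writing service contributed?
    return "possible misconduct" if answers[8] else "acceptable"
```

For example, a student who wrote their own work and only ran a spell-checker (Q1 Yes, Q2 Yes, Q3 No) lands on "acceptable", while uncited AI-generated content (Q1 No, Q6 Yes, Q7 No) lands on "possible misconduct".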

Issues around training a text GPT (Generative Pre-trained Transformer)

The use of AI technologies has ethical implications that should not be ignored. For example, ChatGPT builds economic value by making use of data generated by others, including its users. OpenAI, the company that makes ChatGPT, has received billions of dollars of investment funding (Forbes, 2023) from a range of investors, including Microsoft, Elon Musk and others.

At the same time, there are ethical implications of not using (or stopping others from using) AI technologies, such as ChatGPT. Such technologies are already being embedded into everyday software and devices (e.g., within Microsoft Office, Google Maps, and SMS) and it is becoming difficult if not impossible to avoid them. Educators have a responsibility to support students to learn how to navigate the present and the future.

There are no simple answers to how AI should be dealt with in learning, teaching and assessment. It is therefore important for staff and students to inform themselves about the opportunities and risks of these technologies and, where possible, to discuss them with students in relation to their particular unit and educational context.

For GenAI to generate text, it must first be “trained”. This involves providing the tool with, and the tool processing, huge amounts of data scraped from the internet and elsewhere. It is reported, though not confirmed by OpenAI, that the training of GPT-4 involved around a million gigabytes of data. Processing this data involves identifying patterns, such as which words typically go together (e.g., “Happy” is often followed by “Birthday”).

Training AI requires huge amounts of power and indirectly generates huge amounts of carbon, with negative consequences for the climate. For example, it is estimated that the training of GPT-3 (the GPT behind the first publicly available version of ChatGPT) consumed 1,287 megawatt hours of electricity and generated 552 tons of carbon dioxide, the equivalent of 123 cars driven for one year.
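As a rough sanity check on the car equivalence quoted above, the arithmetic can be reproduced under one assumption: the per-car figure of about 4.6 tons of CO2 per year is a commonly quoted average for a passenger car and is not taken from the source text.

```python
# Rough sanity check of the GPT-3 training-emissions comparison.
# The 4.6 tons/car/year figure is an assumption (a commonly quoted
# average for a passenger car), not stated in the text above.
training_emissions_tons = 552        # estimated CO2 from training GPT-3
car_emissions_per_year = 4.6         # assumed tons of CO2 per car per year

equivalent_cars = training_emissions_tons / car_emissions_per_year
print(round(equivalent_cars))        # prints 120, close to the 123 quoted
```

The small gap between 120 and 123 simply reflects whichever per-car average the original estimate used.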

Another concern is that when future GPTs are trained, the data they ingest will likely include substantial amounts of text generated by previous versions of GPT. This self-referential loop might contaminate the training data and compromise the capabilities of future GPT models – in other words, is the information provided to us valid, correct and up to date?

Before a model is released, its outputs are often checked and refined by people, in a process known as Reinforcement Learning from Human Feedback (RLHF). In RLHF, text GenAI responses are reviewed and validated by human reviewers, who check that the responses are appropriate, accurate, and aligned with the intended purpose. The provider of the GenAI may then set up what are known as ‘guardrails’ to prevent the GenAI from generating objectionable materials.

The ethics surrounding AI development have also been called into question. During the development of ChatGPT, workers in global south countries (e.g., Kenya) were paid less than $3 an hour to review the outputs of ChatGPT and identify any objectionable or inappropriate materials. This exploitative labour has a significant negative impact on those involved.

AI and Data Protection

This guidance aims to ensure that we use generative AI tools lawfully and appropriately, while upholding our commitment to safeguarding data and respecting privacy. The guidelines below extend existing University policies and are designed to safeguard institutional data, which everyone in the university is legally and ethically obligated to protect.

USW has not procured an enterprise generative AI tool or service. Currently, no publicly available AI tool has been assessed to have met USW’s security, privacy, and compliance criteria for handling personal data. Colleagues wishing to use generative AI tools with such data must not do so unless this use has been approved following a Data Protection Impact Assessment (DPIA).

Colleagues are prohibited from entering any personal data controlled or processed by the university into a generative AI model unless a Data Protection Impact Assessment (DPIA) has been carried out and the necessary authorisation received.

This approach does not differ from USW’s stance on any other new software or new processing activity.

Colleagues are also prohibited from using generative AI to make a judgement or a decision about an individual unless a DPIA has been carried out and the necessary authorisation received. The accuracy of generative AI cannot be relied upon. Such decision-making could result in an unfair outcome for an individual.

Our Information Classification Policy prohibits any information classified as ‘private’ or ‘confidential’ from being shared or processed unless it is absolutely necessary for a business purpose. The Information Classification Policy sets out the specific security measures that need to be adhered to when processing this type of information. Entering any ‘private’ or ‘confidential’ data relating to the university’s business or security posture into an unapproved generative AI model could be in breach of this policy and may place the university at risk.

Any data entered into a publicly available, unapproved generative AI model could effectively amount to a third-party disclosure. Some generative AI tools gather and retain data from users as they learn and improve. Any information you input into such a tool could become part of its training data, which might be shared with users beyond the university. For example, if a user asked the AI model about the University of South Wales’ business plans or security measures, this confidential information could then be revealed.

In terms of personal data, data subjects may not know that their data is being used for this purpose and they may not be able to exercise their rights. There may not be a legal basis for the processing and their data will no longer be secure. This would result in a breach of the UK GDPR and the Data Protection Act 2018 and the University could be subject to enforcement action, including financial penalties. This is why we must complete a Data Protection Impact Assessment before any third-party generative AI is approved for use.

Colleagues must remind themselves of the definition of personal data. It is data that relates to an identified or identifiable individual. Remember that it may be possible to identify someone from the data, even when a name is not included. It may be possible for the model to identify an individual by combining the data with data it already has access to.

This includes employee data, student data (including but not limited to grades, names, student numbers, email addresses, data focusing on a specific individual), and potentially unreleased research data. Some funding bodies have specific policies against the use of generative AI tools in grant applications or proposals.

Sharing information with publicly available generative AI tools may expose sensitive data to unauthorised parties or violate USW data use agreements.

Some examples of data leakage through generative AI tools:

  • Uploading sensitive emails about an individual and asking the AI model to re-write the content
  • Asking generative AI to summarise meeting notes, accidentally sharing future business plans with the provider
  • Uploading a recording of a meeting into a personal assistant AI model
  • Pasting lines of confidential code into an AI model


Before entering any information into a generative AI tool, ask yourself:

  • Does this information relate to an individual who could be identified if this data were combined with data from another source?
  • Does this information include confidential information relating to the University?

Support and Resources

The USW AI in Curriculum and Assessment Working Group (AICAWG) has combined the expertise of colleagues from across USW to review and understand the role of AI at USW, and to produce guidance for colleagues and students.

If you would like to either be involved in any of our work regarding Artificial Intelligence (AI) in Curriculum and Assessment or know more about what we are doing, please contact [email protected].

The University of South Wales (USW) is committed to leveraging generative AI for the benefit of students and staff, and promotes the fair, ethical, professional, and responsible use of generative AI tools. The University is committed to preparing our students for an increasingly AI-enabled future and acknowledges digital fluency as a key USW graduate attribute.

View the USW position statement on the use of AI in teaching and assessment.

Generative Artificial Intelligence (AI) in Higher Education Awareness Raising Webinars

The integration of Generative Artificial Intelligence (AI) in Higher Education presents a wealth of opportunities and challenges. On the positive side, Generative AI can personalise learning experiences, catering to individual student needs. It can offer predictive analytics to identify at-risk students, enabling timely intervention and support. Additionally, AI-powered virtual assistants and chatbots can provide instant responses to student queries, enhancing efficiency and accessibility. However, these opportunities come with challenges. There are ethical concerns surrounding data privacy and bias in AI algorithms and, of course, the ever-present fear of student plagiarism.

These awareness-raising sessions aim to give staff an introduction to Generative AI, with a specific focus on HE, hopefully dispelling some of the fears propagated by the media and introducing attendees to the opportunities of using Generative AI effectively in curriculum and assessment to prepare students to meet real-world challenges as graduates.

Bookable online sessions will run on the following times/dates:

Friday 20th October 11-12.30

Monday 13th November 12.00-1.30

Wednesday 6th December 10-11.30

To book a space on one of these sessions, please visit iTrent.

Take a look at our Library skills guide, which will help and support you to find the information you need with tutorials, video guides and more.