Articles

AI in Action – empowering public services and education in the UK

18.09.2024

The UK government has commissioned an AI Opportunities Action Plan to kick-start economic growth and improve the UK's public services. The government has also announced its plans to introduce new AI legislation. AI and AI-generated content create opportunities and challenges for the education sector as well. While making the best use of AI, it is important to protect pupils, students and teachers. This article introduces one example of using government data to train AI tools to create better teaching materials and tools for school administration.


The UK does not currently have any AI-specific regulation, relying instead on existing legislation and non-binding principles applied across sectors. Several domestic laws affect the development and use of AI and provide protections to those affected by its use, such as data protection laws, consumer laws, intellectual property laws, competition laws, and human rights laws, particularly anti-discrimination laws.

However, in July 2024, the new Labour government announced its plans to introduce new AI legislation. This is a departure from the principles-based approach of the previous government. The Labour Manifesto outlines a plan to introduce regulation targeting companies developing "the most powerful AI models", suggesting systems considered "general-purpose AI". Alongside the EU, USA and Israel, this September the UK government signed the first legally binding international treaty on the safe use of AI. The framework, agreed by the Council of Europe, commits parties to implement measures that protect the public from the misuse of AI and to monitor its use and development in both the public and private sectors.

The UK government has not yet released details regarding future legislation; however, the UK is expected to remain more lenient in its regulation than the EU. The current government has promised to drive research and innovation in the field and to support diverse business models. Although the move towards legislation contrasts with the approach of the previous government, the change is not unwelcome among regulators, researchers, and other actors in the field. Though AI-specific legislation was previously viewed as a question of regulation versus innovation, a clear regulatory framework can strengthen not only the trust of the public, but also the confidence of the businesses and public sector bodies deploying AI technology.

AI is a key component of the government's five missions

UK Science Secretary Peter Kyle has commissioned an AI Opportunities Action Plan, which aims to set out a roadmap for using AI to kick-start economic growth and improve the UK's public services. Questions regarding data protection and ethical issues still cause some hesitancy in the adoption of AI technologies, particularly in public sectors handling vast amounts of sensitive data, such as education and healthcare.

Data protection rights are central to the use of AI technology: in order to be accurate, AI tools require large data sets, and the more data processed, the more useful a tool is likely to be. The GDPR was implemented in the UK as the UK GDPR and as such remains part of domestic law in the UK. Organisations must adhere to its principles even when processing data via AI technology. The GDPR contains provisions that apply to automated decision making and profiling, including AI-based decisions. Where these decisions involve the processing of personal data, the provisions of the UK GDPR apply, including the principles of security, fair and transparent processing, and data minimisation. The inherently complex nature of AI systems can make the application of the GDPR complicated, and the UK GDPR does not address AI-specific issues such as algorithmic bias. Transparency, education and training in the use of AI, the involvement of experts, as well as regulation and AI-specific legislation can mitigate these risks and help build trust in AI technologies when deploying them in environments such as schools.

Education content store, a first-of-its-kind approach to processing government data for AI

By November 2023, 42% of primary and secondary teachers had used GenAI, a significant increase from 17% in April 2023. The Responsible Technology Adoption Unit (RTA) within the Department for Science, Innovation and Technology (DSIT) commissioned research in partnership with the Department for Education (DfE) to understand how parents and pupils feel about the use of AI tools in education. The research engaged 108 parents and pupils across three locations in England in a mix of face-to-face and online activities.

The key findings of the research showed that both parents and pupils frequently share personal information online, often without considering the implications. While awareness of AI among both parents and pupils was high, understanding did not run deep. Views on the use of AI in education were initially sceptical, but there was openness to learning more. Parents and pupils agreed that there are clear opportunities for teachers to use AI to support them in their jobs. They were largely comfortable with AI being used by teachers, though more hesitant about pupils interacting with it directly. The main concerns regarding AI use centred on overreliance, both by teachers and pupils. Participants were worried about the loss of key social and technical skills and about reduced human contact time leading to unintended adverse outcomes. The research also showed that opinions on AI tools are not yet fixed.

Participants stressed the importance of human involvement in AI use at every step of the process. Both parents and pupils felt that they should be enabled to make free and informed decisions about how pupil work and data are used. The participants also felt that AI use in schools should only take place through standardised and strictly regulated tools to ensure quality control and equity of access. The use of AI tools should be restricted, and there was a general consensus that AI tools would be best used by pupils in secondary education.

Soon after the publication of the above-mentioned research, the UK Government launched a new £4 million project that pools government documents including curriculum guidance, lesson plans and anonymised pupil assessments. The content store can be used by AI companies to train their tools to create better teaching material as well as assessment and administration tools. The content store is a first-of-its-kind approach to processing government data for AI and could be one of the test beds for forthcoming AI legislation.

More information:
Labour Party Manifesto 2024: https://labour.org.uk/change/
Ministry of Justice press release 05.09.2024: https://www.gov.uk/government/news/uk-signs-first-international-treaty-addressing-risks-of-artificial-intelligence
UK Government press release 28.08.2024: https://www.gov.uk/government/news/teachers-to-get-more-trustworthy-ai-tech-as-generative-tools-learn-from-new-bank-of-lesson-plans-and-curriculums-helping-them-mark-homework-and-save
AI Opportunities Action Plan: https://www.gov.uk/government/publications/artificial-intelligence-ai-opportunities-action-plan-terms-of-reference/artificial-intelligence-ai-opportunities-action-plan-terms-of-reference
Information Commissioner's Office's guide on AI: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
Generative AI in education, policy paper 26.10.2023: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education
Research on public attitudes towards the use of AI in education: https://www.gov.uk/government/publications/research-on-parent-and-pupil-attitudes-towards-the-use-of-ai-in-education

Text: Intern Sophia Macnamara, sophia.macnamara@gov.fi, and TFK special adviser in the UK and Ireland Birgitta Vuorinen, birgitta.vuorinen@gov.fi