What is this AI stuff anyway?
As Artificial Intelligence (AI) becomes more widespread in daily personal and corporate use, it is incumbent on us all to understand what AI actually is, how it works in a general sense, and what impact it might have on daily operations within public-facing bodies.
The AI world is dominated by hype and grandiose claims about what different aspects of AI can do, but just about the only thing we can be certain of is that AI is here, it will continue to become more pervasive, and it is not going away.
This blog (while hopefully not adding to the hype) points readers to some relevant sites and, with luck, makes things a little clearer for the humans who might read it.
Probably the first thing to clarify about AI is that, at this time, it is a tool and nothing more. Perhaps it is a bit more functional than a hammer, but it is a tool.
It is common to distinguish between two broad categories of AI: Weak AI and Strong AI.
Weak AI, also known as Narrow AI, is artificial intelligence designed to perform a specific task exceptionally well. It excels within a limited domain and doesn't possess general intelligence. Unlike strong AI, which aims to replicate human intelligence in its entirety, weak AI focuses on solving a particular problem.
Examples of Weak AI:
Virtual Assistants: Siri, Alexa, and Google Assistant are classic examples. They can understand and respond to voice commands, set alarms, provide information, and control smart home devices, but their capabilities are limited to these tasks.
Recommendation Systems: These systems, used by platforms like Netflix, Amazon, and Spotify, suggest products, movies, or music based on user preferences and behaviour. They are excellent at their task but cannot perform other cognitive functions.
Image Recognition: AI-powered systems can accurately identify objects, faces, and patterns in images. Applications include facial recognition, medical image analysis, and self-driving car technology.
Spam Filters: Email spam filters utilize weak AI to distinguish between legitimate emails and spam, protecting users from unwanted messages (a minimal sketch of this idea follows this list).
Medical Diagnosis: AI algorithms can analyse medical data, such as X-rays or patient records, to assist doctors in diagnosing diseases.
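To make the spam filter example concrete, here is a minimal sketch of a narrow-AI classifier in Python, assuming the scikit-learn library is available. The training messages are invented for illustration; a real filter learns from millions of labelled emails, but the principle of learning patterns from examples rather than hand-coded rules is the same.

# A minimal sketch of a narrow-AI spam filter (assumes scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = spam, 0 = legitimate.
messages = [
    "Win a free prize, claim now",
    "Cheap loans, act today",
    "Meeting moved to 3pm, see agenda attached",
    "Minutes from yesterday's board meeting",
]
labels = [1, 1, 0, 0]

# Learn word frequencies per class rather than hand-coding rules.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize today"]))      # likely [1] (spam)
print(model.predict(["Agenda for next week's meeting"]))   # likely [0]

Note that the model has no general intelligence: it can sort messages into two buckets and nothing else, which is exactly what makes it Weak AI.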
Strong AI, also known as General AI, refers to AI systems that possess human-level intelligence, or even surpass human intelligence, across a wide range of tasks.
Strong AI would be capable of understanding, reasoning, learning, and applying knowledge to solve complex problems in a manner similar to human cognition.
Development of strong AI is still largely theoretical and has not been achieved to date.
However, Weak AI, in differing formats, is all around us. For example, as this document is being typed, predictive text suggestions appear, speeding up the preparation of this blog. This leads us nicely to Machine Learning.
Machine Learning focuses on the development of algorithms and models that enable computers to learn from data and make predictions or decisions without explicit programming. The predictive word suggestions mentioned above are an example of Machine Learning at work; a minimal sketch of the idea follows.
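As a minimal sketch of that idea, the toy model below (plain Python, with an invented corpus) counts which word most often follows each word it has seen, and uses those counts to suggest the next word. Real predictive-text systems use vastly larger statistical models, but the principle is the same: the suggestions come from data, not from explicitly programmed rules.

# A toy bigram model: "learns" next-word suggestions from example text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran to the door".split()

# Count, for each word, which words follow it in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def suggest(word):
    """Suggest the most frequent next word seen after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))   # "cat" (seen twice after "the")
print(suggest("sat"))   # "on"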
Public Policy:
The Irish Government has made a commitment that AI tools used in the civil and public service must comply with seven key requirements. These are:
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination and fairness
Societal and environmental well-being
Accountability
Full detail on each of these requirements can be found in the OGCIO document referred to below, and will not be repeated here.
However, it is worth noting the risks of using AI, and specifically the areas that the OGCIO considers High Risk.
These are:
The use of AI systems in contexts where the system output could impact:
fundamental human rights (e.g. human dignity, privacy, non-discrimination, fair trial, safety, freedom of expression); or
well-being (e.g. job quality, health, civic engagement, education, the environment, social interactions).
More specifically, the draft EU AI Act sets out the following contexts as high-risk use cases for AI:
Biometric identification
AI Systems intended for use as safety components in the management and operation of critical [digital] infrastructure
Education and vocational training (in particular for determining access, assessing student performance, assessing the appropriate level of education for the individual)
Employment, workers management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Risk assessment and pricing for life and health insurance
Systems intended for use by law enforcement to assess risk of a person reoffending, for emotion recognition purposes, to evaluate the reliability of evidence, to profile natural persons
Migration, asylum and border control management
Administration of justice and democratic processes
Under the regulations, high-risk AI systems will be subject to strict obligations before they can be put on the market, including adequate risk assessment and mitigation; appropriate human oversight measures to minimize risk; and logging activity to ensure traceability of results.
Data:
Data used in an AI model must comply with GDPR requirements.
Under GDPR, a lawful basis (typically consent) is required before personally identifiable information may be used. This includes facial images and voice recordings.
Data should not be used in AI models in a way that breaches intellectual property rights.
Where the system is procured from a third-party vendor, the vendor must confirm that their data is GDPR compliant and does not breach the intellectual property rights of others.
Additional reading:
The OGCIO (Office of the Government Chief Information Officer) has issued guidelines on the use of AI in the public service.
The document, titled "Interim Guidelines for Use of AI in the Public Service", was published in February 2024 and is available here:
https://assets.gov.ie/280459/73ce75af-0015-46af-a9f6-b54f0a3c4fd0.pdf
The National Cyber Security Centre (NCSC) has recently published two relevant guidance documents: one on the cyber security considerations when procuring ICT products and services, and one specifically on the use of Generative AI. These are:
1. Guidelines on Cyber Security Specifications (ICT Procurement for Public Service Bodies):
Link: https://www.ncsc.gov.ie/pdfs/Guidelines_on_Cyber_Security_Specifications.pdf
2. Cyber Security Guidance on Generative AI for Public Sector Bodies:
Link: https://www.ncsc.gov.ie/pdfs/Cybersecurity_Guidance_on_Generative_AI_for_PSBs.pdf
These documents provide valuable insights into ensuring cyber security in ICT procurement and the use of Generative AI within the public sector.
However, the following should be noted:
Most of the current GenAI models available for use are public cloud-based systems. The information you put into the model through prompts will be visible to the company that owns the model.
The queries that are put into the model will almost certainly be used to train future iterations of the model. Therefore, it is not unreasonable to assume that information you put into the model will at some point surface in the future when others are querying the technology.
It is therefore imperative that data that public-facing bodies do not want in the public domain is never entered into a public GenAI model. This includes classified information, personal data, commercially sensitive data, private Government business, etc.
In the context of cyber security, any information about the network topology, software source code, asset lists, or details of deployed hardware or software is information that should never be input into a GenAI model.
Some suggested DOs and DON'Ts from the NCSC:
DO
Do validate all generated output for accuracy, copyright infringement and bias.
Do switch off chat history or regularly delete interactions to limit potential data breach exposure.
Do ensure you have selected a legitimate site or an official mobile device app. There are many unofficial sites and apps available that could be malicious.
Do understand the limitations in responses due to incomplete or insufficient data available to the platform.
Do thoroughly validate all computer code outputs for bugs and security issues (an illustrative example follows this list).
Do treat your account security as a priority: Use a strong unique password and enable multi-factor authentication.
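As an illustration of the code validation point above, consider the kind of flaw that plausible-looking generated code can contain. The hypothetical Python example below shows a database lookup built with string formatting, which is vulnerable to SQL injection, alongside a corrected parameterised version. It is not taken from any real GenAI output; it simply shows why human review of generated code is essential.

# A hypothetical example of a flaw to look for in generated code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.ie')")

def find_user_unsafe(name):
    # Typical generated code: passes happy-path tests, but an input like
    # "' OR '1'='1" changes the meaning of the query (SQL injection).
    return conn.execute(
        f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Corrected version: the driver handles quoting via a placeholder.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row: the flaw
print(find_user_safe("' OR '1'='1"))    # returns no rows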
DON’T
Do not use public versions of GenAI services for business purposes.
Do not create accounts with corporate email addresses, unless you are using an enterprise version for which you have an approved business case.
Do not rely on GenAI to directly create, design or draft Public Facing Body policy.
Do not use GenAI to generate responses to representations made to Ministers.
Do not enter any sensitive information such as personal data, business data, proprietary information (like software source code) or any government information (an illustrative check is sketched after this list).
Do not enter data that you would not normally want to be made publicly available.
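To make the last two points concrete, below is a minimal, hypothetical sketch of a pre-submission check that flags obvious personal data before a prompt is sent to a public GenAI service. The check_prompt function and its patterns are illustrative assumptions rather than any NCSC tool, and a crude filter like this is no substitute for policy, training and human judgement.

# A hypothetical pre-submission check for obvious sensitive data.
import re

# Rough, illustrative patterns for email addresses, Irish PPS numbers
# and IPv4 addresses; a real deployment would need a broader, tested list.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PPS number": re.compile(r"\b\d{7}[A-Z]{1,2}\b"),
    "IPv4 address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def check_prompt(prompt):
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

warnings = check_prompt("Summarise this email from john.smith@example.ie")
if warnings:
    print("Do not submit - possible sensitive data:", warnings)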
Summary
AI is a tool. However, like most tools, it should be used carefully and under human supervision.
AI tools should not be used on their own, but they can be used to enhance productivity: they offer a way to automate repetitive and mundane tasks.
There are risks to using AI (specifically machine learning) tools, primarily around data confidentiality, but once the risks are weighed against the rewards, the use of AI tools in any organization can be actively considered.
All new technology, such as GenAI, should only be adopted based on a clearly defined business need, following an appropriate risk assessment. Each department or business will likely have a different view of the business use case for GenAI tools and platforms, as well as a different risk appetite for such use.
The NCSC recommends that access to GenAI tools and platforms is restricted by default and allowed only by exception, based on an appropriately approved business case and need. It also recommends that staff use should not be permitted until Departments have conducted the relevant risk assessments, have appropriate usage policies in place, and have implemented staff awareness training on safe usage.
However, provided that the data security guidelines above are taken into account, it might be useful to think of AI tools (such as ChatGPT) as Google Search on speed!
It is clear that AI tools can be useful and do (and will) have their place within any organization, but care will be needed in their introduction.
One final point worth noting:
Creation of this document was facilitated by the use of a weak AI tool! Thank you, Gemini.