The importance of businesses implementing an AI Policy

February 12, 2025
Lisa Logan

Partner


Across many industries, from medical, advertising and TV to pharmaceuticals and a wide range of tech companies, businesses are increasingly demanding that their suppliers protect their data and agree not to upload it to AI platforms. Some are already terminating suppliers for misuse of sensitive commercial information. Private Equity investors also look closely at compliance in this area, since failures can affect key customers or lead to data protection breaches carrying significant penalties. An AI policy, supported by staff training, is therefore essential.

According to a survey by the Office for National Statistics in May 2023, 18% of people reported using Artificial Intelligence (AI) for work. However, relying on these tools without an effective AI usage policy could see businesses paying large fines for failing to adequately protect the personal data that these tools process.

Data protection authorities have received an increasing number of notifications of data breaches caused by employees sharing confidential information with AI tools, resulting in the data being leaked or replicated. Under ChatGPT’s data policy, unless users explicitly opt out, all user-entered prompts are used to train its models, and once data is entered into the algorithm, there is no way to delete prompts from a user’s history save for completely deleting the account, a process that can take up to four weeks.

For example, if a doctor entered a patient’s name and medical details into ChatGPT to have it draft a letter setting out the results of an examination, and a third party subsequently asked ChatGPT “what medical problem does [the patient] have?”, ChatGPT could answer accurately using the information entered by the doctor.

This vulnerability can easily be exploited by hackers seeking to steal personal data, and for serious breaches of the UK data protection regulations, the Information Commissioner’s Office (ICO) has the power to issue fines of up to £17.5 million or 4% of the offender’s annual worldwide turnover, whichever is higher. In 2020, British Airways was fined £20 million for a data breach which affected more than 400,000 customers. Given the large volumes of data processed by machine learning systems, an absence of rigidly enforced company policies to control the data an AI system has access to could lead to similar breaches.

Additionally, many suppliers of AI products prefer to position themselves as ‘data processors’ rather than ‘data controllers’, pushing responsibility for keeping the data safe onto the customer. If the supplier were to suffer a data breach, the customer may therefore be held responsible for failing to ensure that adequate measures were in place to keep the data secure. Commercially, this can cause significant problems with key customers.

Likewise, the level of access granted to the provider of an AI tool can present a risk in itself. In 2017 the ICO ruled that the Royal Free NHS Trust had unlawfully shared the data of over 1.6 million patients with DeepMind in order to develop a new AI diagnostic tool for kidney disease.[7] Companies incorporating AI into their operations therefore have a heightened responsibility to examine the data protection implications of any contract with an external vendor.

As well as potentially leaking personal data, failing to implement a proper company AI policy could result in commercially sensitive information being exposed. According to a survey by network security firm Cyberhaven, 8.6% of employees have pasted company data into ChatGPT since it launched.[8] In the same way as the medical example above, a cleverly worded set of prompts from a third party could get the program to divulge commercially sensitive information.

In April 2023, just 20 days after allowing its engineers to use ChatGPT at work, Samsung’s semiconductor division experienced three separate incidents whereby confidential corporate information was leaked by employees. Two of the incidents occurred after employees entered source code used in the facility’s operations into ChatGPT while attempting to use the AI to identify and fix errors with the code. The third leak occurred when an employee recorded a meeting on their phone and entered the discussion into ChatGPT in order to have it generate minutes of the meeting.

Thus, one of the best ways a business seeking to take advantage of the opportunities presented by AI can avoid a breach of data protection regulations is to implement a clear company policy to guide how employees and the business engage with AI. An effective policy should strictly limit and regulate the data which can be entered into an AI program, in order to prevent any personal or special category data (data relating to an individual’s health, sexuality, religion, ethnicity, etc.), as well as any sensitive commercial information, from being leaked. Creating a list of approved AI products which have been vetted by the company also significantly mitigates risk, by ensuring that only safe systems compliant with the company’s obligations are used.

An effective internal AI policy should also seek to incorporate the principles enshrined in AI-relevant legislation, such as those set out in the EU’s new AI Act (lawfulness, fairness, transparency, accountability and respect for human dignity and autonomy), in order to encourage usage practices in line with the legislation, and, in the case of multi-national businesses, to allow AI tools to be effectively utilised across multiple legal jurisdictions.

As part of such a policy’s implementation, it is also critical that employees are kept up to date with policy guidelines and are aware of exactly how AI is being used by the company. Not only does this ensure compliance with the guidance, but it also promotes transparency, which is particularly important if any AI tools are being used to process employee data.

In conjunction with guidance for employees, Data Protection Impact Assessments should be conducted on a regular basis for any AI projects that involve processing personal data, especially at the start of a new project, and appropriate technical and organisational security measures should be implemented. Encryption, pseudonymisation, data minimisation and regular testing play a critical role in ensuring that data is kept secure and is processed to no greater extent than is necessary, in compliance with data protection regulations.
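By way of illustration only, the sketch below (in Python) shows one way pseudonymisation and data minimisation might be applied to text before it is passed to any external AI tool. The detection patterns, placeholder scheme and example identifiers are assumptions made for the purposes of the example, not a prescription; pattern-based redaction of this kind is only a partial control and would sit alongside the contractual, vetting and training measures described above.

```python
import re

# Illustrative only: pseudonymise obvious personal identifiers before any
# text is sent to an external AI tool. The patterns below are assumptions
# for this sketch; a real deployment would need a vetted PII-detection approach.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+44|0)\d[\d ]{8,11}\b"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with numbered placeholders and return a local mapping
    so the original values never leave the organisation's systems."""
    mapping: dict[str, str] = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        def _sub(match, label=label):
            nonlocal counter
            counter += 1
            placeholder = f"[{label}_{counter}]"
            mapping[placeholder] = match.group(0)
            return placeholder
        text = pattern.sub(_sub, text)
    return text, mapping

if __name__ == "__main__":
    # Hypothetical prompt containing personal data
    prompt = "Please draft a letter to jane.doe@example.com about NHS number 943 476 5919."
    safe_prompt, key = pseudonymise(prompt)
    print(safe_prompt)  # only placeholders are sent on; 'key' stays internal
```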

Written with the support of Michalis Grouzis
