As most of us now recognise, Artificial Intelligence (AI) is here to stay. Usage is rising meteorically: in a recent international survey spanning 47 countries, almost half of employees reported already using AI at work. AI is all over the press, and it is readily available in many forms through our smartphones and other devices.
As it continues to touch on our daily lives, it offers incredible opportunities for businesses to increase efficiency and innovation in the workplace. However, it also presents significant legal and business risks if not managed correctly and proactively.
Used in the wrong way, AI can damage reputation, drive away customers, and give rise to legal issues such as breaches of confidentiality and data privacy rights, as well as claims based on negligence and discrimination.
In a recent case involving law firms that failed to check the accuracy of AI-generated research used in legal proceedings (Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank), the presiding court commented (amongst other notable conclusions) that, in respect of the solicitors in Al-Haroun, “there was a lamentable failure to comply with the basic requirement to check the accuracy of material that is put before the court”.
So, what should your organisation be considering at this critical time? Below are the key steps that we recommend:
Risk Audit
Identify the risks in your organisation, including when and how AI is currently being used (or likely to be used) by your staff or management. This may include:
- Are employees using AI tools such as ChatGPT in carrying out their roles? What risks and benefits does this use present?
- Is there a risk of AI usage by employees involving the use or disclosure of sensitive or confidential information or of personal data?
- Does your organisation (or that of any third-party service provider) use AI systems which present a risk of bias or discrimination, e.g. in recruitment or in other processes?
- Do any rules or guidelines exist across different business areas for AI usage (whether formal or informal)? Are they fit for purpose?
Policies & Procedures
Effective policies and procedures are key to managing AI usage. Many organisations are now implementing AI usage policies. A good AI Usage Policy should include (as a minimum):
- Permissible tools and how to use them
- Data security and confidentiality measures
- Unacceptable uses, including IP infringement, breaches of confidentiality, and bullying and harassment
- The consequences of breaking the rules
- Who employees should speak to if unsure about AI use
- A process for reporting breaches, including security incidents
- A regular review mechanism
Updating existing policies and contracts is also strongly recommended. This is likely to include your recruitment policies, disciplinary procedure, EDI policies, privacy notices, data protection policies and impact assessments, social media rules, IT and electronic communications policies, and confidentiality / IP use clauses in contracts.
If AI systems are used to make decisions (e.g. in recruitment), organisations should put in place a human intervention process to regularly monitor and audit the impact of AI algorithms and datasets to ensure fairness and equality. Consideration should also be given to your stance on the use of AI by candidates in recruitment processes, so that you can properly assess whether a candidate is right for the job.
Training
Employees using AI tools may not have a clear understanding of how the tools work or how they make decisions. Educating employees on ‘AI literacy’ in line with your AI Usage Policy will help to manage these risks and ensure staff get the best use out of AI tools. Managers are also likely to need guidance on ensuring their teams are transparent about their AI use and that AI-assisted work is properly checked.
Culture
Above all else, a top-down culture of responsible (and ethical) AI use will help to embed your policies, procedures and training. Involving employees in discussions about new AI initiatives will help to promote a healthy AI culture. Use of AI is clearly no substitute for critical thinking, and the importance of checking the accuracy of AI-generated content cannot be overstated.
Addressing concerns is equally important, as employees may be wary of AI use and see it as a ‘slippery slope’ to job losses.
Conclusion
Effective and responsible use of AI is set to become a key aspect of many workplaces. Putting measures in place now will enable your organisation to use AI to improve efficiency whilst managing the legal and reputational risks.
Our team is happy to help with the practical steps outlined in this briefing. Please contact Jo Tindall or Kate Smith for an initial free consultation on getting AI ready.
This article contains guidance only and does not constitute legal advice.