Generative AI – The Essentials

December 6, 2023
Tim Heywood

Partner


It’s hard to judge precisely how quickly the legal profession is adopting AI.

There is, however, plenty of evidence that, across most professional services, from medicine to accountancy and architecture, AI tools will quickly gain traction and, as a result, the familiar working practices of the last couple of decades (at least) will change significantly.

As with most technological advances, organisations with the biggest budgets will be the first to create, market and utilise them.

The marketing message is aimed at employers. The message is that AI will reduce cost by speeding up internal business processes. An easy sell.

For employees the ‘sell’ is perhaps less clear. Many vendors claim that AI will remove the burden of ‘repetitive’ or ‘boring tasks’ and free us up to do the ‘really interesting’ stuff instead. But this is surely too simplistic, and what one lawyer finds ‘boring’ may be the very thing that most excites another lawyer. Everyone is different and so far the messaging from the AI sector is not very persuasive, at least from the employee’s point of view.

We shall see. What seems sensible at this stage, though, is to try to educate professionals in the essentials of AI so that the concept becomes a bit less opaque. Happily, the Law Society has done just that by publishing its new Guidance – ‘Generative AI – the essentials’.

Generative AI, the Guidance explains, is ‘a sub-set of AI that uses deep learning algorithms to generate new outputs which are based on large quantities of existing data or synthetic (artificially created) input data’.

In short, Generative AI is technology that learns from huge, typically online, data sets (the web) and then uses this ‘intelligently’ to create new material. Think of ChatGPT, which will write you some text on almost any subject you care to mention, but the output could just as easily be photographic images, abstract images, audio, video or a mixture of these.

What makes this technology ‘new’ and disruptive is the ability of these tools to generate outputs that mimic human responses in a very convincing way. It blurs the distinction between documents that are created by identifiable professionals and materials that are produced by machines, possibly with minimal human involvement beyond having posed the initial question or setting the initial task.

‘Boundless opportunities’ are perhaps being created, but the technology has already ‘spawned new risks’ says the Guidance.

Among them, it cites risks and issues that potential adopters should consider when investigating these new tools:

  • Intellectual property infringements;
  • Unauthorised disclosure of confidential information;
  • Data protection compliance;
  • Cyber security;
  • Built-in bias (biased data in; biased data out);
  • Reputational risks; and
  • Ethical concerns.

Because the tool has learned stuff before it reaches us and because we have no idea where or who it learned that stuff from, we need to be alert to the danger of inheriting unauthorised (third party) IP. Perhaps it will be enough to ensure that the contract contains an appropriate warranty and indemnity in our favour. Perhaps not.

Because the tool will continue to learn stuff after we’ve subscribed to it, we also need to be alert to the likelihood of the licensor (it’s on a subscription model) getting free access to our know-how. It will use that know-how to improve its products. That’s fine unless you’d like to share in the commercialisation of your own know-how.

The Guidance offers some useful information, especially for smaller firms and smaller in-house teams. It is aimed primarily at professionals who are not at the forefront of technology but who want to understand the opportunities and risks a little better.

It might prove especially useful for firms that are about to go to the market for tools that might include Generative AI. It includes a checklist of things to think about, such as: making sure the firm has a clear understanding of what the tool is for and how it will improve current working practices; what clients might be asking you for; what assurances to seek from the provider; security standards; accuracy of the output; pre-implementation risk assessments; and the establishment of rights in the training data and outputs.

As will be apparent, some of these actions are things you can determine within your business, where you have control. Other actions are heavily dependent on your ability to negotiate with the provider. This is more problematic, as we all know that the Big Tech companies are not keen to negotiate their standard terms and conditions.

All in all, this Guidance is a very welcome Generative AI ‘primer for lawyers’. The overall message must be that we should all be learning more about the technology, at least the basics. Only then can we all become intelligent, fully informed purchasers and users of these incredibly powerful tools.

Tim Heywood FRSA is a Solicitor and Partner in Gunnercooke llp specialising in Technology, Procurement and Data Protection. He is also a member of the Law Society’s Technology and Law Committee. The views expressed here are his own.
