The Business of Law 5.0 - Generative AI, ChatGPT4 and what to do about it (12 May 2023 Edition)

We appear to be entering the era of 5th generation law (analogue, 1st gen technology, new business models, digital and now AI). We are now seeing the explosive growth of ChatGPT4 and Generative AI. ChatGPT is already one of the fastest-adopted technologies of all time, reaching 100 million users within two months of launch and reportedly around 1.2 billion users by March 2023. Yes, technologies come and go, and we know all about the Gartner Hype Cycle and the Trough of Disillusionment, but from everything we are seeing Generative AI could give rise to one of the most seismic changes in the business of law that we have ever seen.

We have been working in this space for some years now and are currently engaging with a good cross-section of clients. We have formed our own views but also have several very respected peers in the market with whom we openly debate. The general consensus among most savvy law tech veterans is that this could be the biggest thing we have seen since the Internet; some think it will be even bigger. As for the Gartner Hype Cycle, I don't doubt its validity, although in this case I can imagine it being hugely compressed, with the Slope of Enlightenment being reached much more quickly than normal. Let's face facts: recent announcements alone have caused share prices to tumble, prompted huge businesses such as Google to rethink and accelerate, and led over 1,800 tech founders, executives and researchers to sign a letter calling for a pause in AI development for six months.

The effects could be far-reaching, but it is easy to see how the business of law could be materially impacted. Law is a high-margin, knowledge- and content-based business involving a lot of documentation and content generation. For years clients have assumed documentation could be produced "at the press of a button." Perhaps now it can be, or will be shortly. Importantly for law, though, it can't just be any content or documentation - it must be correct and accurate.

A lot has been written about ChatGPT4, and indeed some of it has been produced by ChatGPT4 itself. I have no desire to replicate this; I am writing this article to give law firm leaders and in-house lawyers some practical advice on how they might move forward. We need to be a little "Donald Rumsfeld" in approach: anyone with any sense needs to acknowledge that there is a lot we don't know yet and that this is going to evolve, but in this article I am trying to set out what I think people should be doing now. This advice will inevitably evolve by the day as we see more developments.

First some basics.

What is it?

If you have not used it or had a demo yet, we recommend you do so urgently: to see the technology in action is staggering. The following is ChatGPT4's own description of Generative AI. In short, you can ask it to generate or review almost anything, and it does so in seconds (albeit with some challenges, which we try to cover off in this article).

“Generative AI, also known as generative artificial intelligence, refers to a class of AI models and algorithms that are designed to generate new content, such as images, text, audio, or video, which resembles or exhibits certain characteristics of the training data it has been exposed to. These models can generate novel, creative outputs rather than simply recognizing or classifying existing data.

Generative AI models are typically based on deep learning techniques, particularly generative neural networks such as generative adversarial networks (GANs) and variational autoencoders (VAEs). These models learn patterns and structures from large datasets and then use that knowledge to generate new data that shares similarities with the training examples.

For example, a generative AI model trained on a dataset of cat images could generate new cat images that have realistic features and appearances. Similarly, a generative AI model trained on a collection of text can generate coherent and contextually relevant sentences or even entire stories.

Generative AI has applications in various domains, including art, design, entertainment, and research. It has been used to create realistic images, generate human-like speech, compose music, and even assist in drug discovery. However, it's important to note that generative AI models can also be vulnerable to biases and may generate misleading or inappropriate content if not carefully trained and supervised.”

What does it cost and how do I get it?

No doubt this will not be the position forever, but you can trial generative AI free of charge via the OpenAI website (an account is needed). There is a premium option, ChatGPT Plus, at $20 a month, and specific private instances are now selectively being offered by Microsoft. The rumour is that a private instance will cost around 10x the general version. Obviously, as with all of these things, if you are not paying you probably are the product, and once the world is hooked no doubt the price will increase. That being said, the value is high.
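
For those who want to go beyond the web interface, the same models can also be reached programmatically via OpenAI's API, which is billed per token (roughly per fragment of a word) rather than as a flat subscription. The following is a minimal sketch only, assuming you have been issued an API key and have the openai Python package installed; the model name, prompt and settings are illustrative, and the output still needs the careful human review discussed later in this article.

# Minimal sketch: calling the model via OpenAI's API rather than the chat website.
# Assumes the `openai` Python package is installed and an API key has been issued;
# the model name, prompt and temperature below are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep keys out of source code

response = openai.ChatCompletion.create(
    model="gpt-4",  # substitute whichever model your account has access to
    messages=[
        {"role": "system", "content": "You are a careful legal drafting assistant."},
        {"role": "user", "content": "List the top 10 points to check when reviewing an NDA."},
    ],
    temperature=0.2,  # lower temperature gives more conservative, repeatable output
)

print(response["choices"][0]["message"]["content"])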

Microsoft are already heavily invested in OpenAI but have their own Copilot solution currently being "tested" with some specific organisations (the Microsoft 365 Copilot Early Access Programme will have an initial 600 customers, by invitation). Again, nobody knows the costs yet, but one can only imagine the power when these tools are available and accessible in the MS365 stack, given its dominance of the market (e.g. 365 million plus users worldwide, with Microsoft making $136 billion in gross profit and a huge investment in R&D).

Microsoft have also set out their immediate plans:

“In addition, the following new capabilities will be added to Microsoft 365 Copilot and Microsoft Viva:

  • Copilot in Whiteboard will make Microsoft Teams meetings and brainstorms more creative and effective. Using natural language, you can ask Copilot to generate ideas, organize ideas into themes, create designs that bring ideas to life, and summarize Whiteboard content.

  • By integrating DALL-E, OpenAI’s image generator, into Copilot in PowerPoint, users will be able to ask Copilot to create custom images to support their content.

  • Copilot in Outlook will offer coaching tips and suggestions on clarity, sentiment and tone to help users write more effective emails and communicate more confidently.

  • Copilot in OneNote will use prompts to draft plans, generate ideas, create lists and organize information to help customers find what they need easily.

  • Copilot in Viva Learning will use a natural language chat interface to help users create a personalized learning journey including designing upskilling paths, discovering relevant learning resources and scheduling time for assigned trainings.

To help every customer get AI-ready, Microsoft is also introducing the Semantic Index for Copilot, a new capability we’re starting to roll out to all Microsoft 365 E3 and E5 customers.”

What are the benefits of Generative AI?

The benefits are potentially enormous. ChatGPT4, for example, can generate a document or content extremely quickly. The same goes for creating code, which could potentially transform the no-code market. Key players such as Fliplet and Kim are all over this and I recommend their webinars to you.

A few specific examples of the things it can do include: "draft an employment contract"; "here's an NDA - tell me what's missing"; "do me a thousand-word essay on the contribution of Malcolm X to the Civil Rights Movement"; "draft me a letter of complaint about the following lawyer and their bill."

Some unrelated experiments from other contacts include, from an electronics engineer: "source me a particular specification of computer chip with four ports, no larger than X, for less than $9 per component, and produce a table with web links for the top 10 results"; and from a doctor of physiotherapy: "produce me a medical report on X person who suffered whiplash when their Ford Focus was hit by a 5 Series BMW on the M40 on 19 July 2022."

You can produce content quickly, you can amalgamate content (e.g. producing comparative tables of results), and it can help with risk management by flagging key points to look out for. Some people now use it as their default search engine (albeit this is not without risk, for the reasons below). To quote an electronics engineer contact: "you should treat this as an unpaid junior assistant who is really quick at producing work but who you need to supervise carefully." A job which may have taken three days in the past can be carried out in a fraction of the time, but the huge caveat is that it really needs a very experienced user to review the content.

What are the risks?

I am not going to repeat everything that has been said before but in headline terms there are challenges with ChatGPT4.

1.       Not all the content generated is correct. The software "hallucinates": in the medical report example above, some information about the injury was "imagined".

2.       The model's training data cuts off in September 2021 and no content has been added since. It does not, for example, know who the current Prime Minister is.

3.       You have no idea about the data lineage of the content produced (i.e. where has it come from, does it breach anyone's IP and is it true?).

4.       You have no idea of the weighting given to different data sources (e.g. if you ask a question about a company, you would imagine it might go to the company's website and/or authoritative sources such as Companies House; that has not been my experience). Accordingly, the answers are not as you might imagine, and you do not have any form of confidence rating to help you. For example, Hyperscale Group is not a business involved in the design and build of data centres, and our website is clear on this, but generative AI says differently.

5.       Being based on a large language model, the software has been trained on literally billions of data sources. It is speculated that little regard will have been given to websites' terms of use, privacy policies etc., and accordingly it is possible that data has been consumed which should not have been. Many anticipate material litigation.

6.       There is a strong possibility that you could be breaching someone else's copyright or other intellectual property rights, and you are of course indemnifying OpenAI for this under its terms of use.

7.       The content definitions in the terms of use are "interesting", and your working assumption should be that you are training the model. Users may inadvertently be sharing confidential information which is being ingested and which may resurface in the future.

8.       Given how prolifically the tool can generate content, if that content is incorrect you start to move into an "echo chamber" scenario where incorrect content is generated on the back of further incorrect content, and this grows exponentially due to the rate of production. Fake news will grow, as will the challenges in detecting it.

9.       We need to be prepared for more incorrect information flooding the web which will wash through to the reliability of search. Corroborated content will become more valuable.

10.   Senior people will be able to vet the content, but junior people will be less equipped to do so. Output will be less consistent, making it hard to judge people's true ability.

11.   The algorithm will already have been trained on biased information. Much of the content of the web has been produced by the Western world, and historic content may reflect the non-diverse norms of its time, which are then perpetuated by the tool.

12.   By using the ChatGPT tool you are confirming that you are authorised to enter into the contract and are providing indemnities, all of which is governed by the laws of the State of California. You may or may not be comfortable with this, but obviously many organisations have people blindly signing up and binding their organisations. The implications could be serious, which has caused some firms to ban the tool.

You should also start thinking through your risk management processes for the business as a whole. Not everyone using the tool will have good intentions, as this quote from The Times shows:

"Gary Marcus, of New York University, an expert on AI systems, said: “They can and will create persuasive lies on a scale humanity has never seen. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy is threatened. What criminals are going to do here is create counterfeit people. It’s hard to even envision the consequences.”

Richard Blumenthal, the Democrat who chairs the committee, opened the hearing with a recorded speech that sounded like his voice but was, in fact, a clone that had been trained on his speeches to Congress. The clone recited an opening address written by ChatGPT after Blumenthal asked it: “How I would open this hearing?”

The impersonation by the clone was impressive, Blumenthal said, but he added: “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?”

What should I do now?

This is a thorny question. I am hearing a range of expressions such as "the genie is out of the bottle" or "the toothpaste is out of the tube", i.e. this is here to stay and we can't turn the clock back. It is only going to get faster and more capable, so we all need to get our heads around it now and continue to evolve our position.

Some people feel there is nothing they can do - they think their businesses might be too small, or they might not have the answers - and so are taking no action. This is absolutely the wrong approach: inactivity is not an option. To be clear, what to do is not a one-off set of actions; it is a continually evolving process over the next few years and needs to be very high on every Board and department's agenda (including Business Services).

Many people think that the real turning point will come when a firm or in-house team can leverage a private instance over its own content and corpus. This is not without its issues (see below), but just imagine, say, an Employment Law generative AI tool trained on your own precedents and email exchanges. The more specialised the corpus, the more accurate the output. The potential to create a virtual lawyer is perhaps increasing.
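
To make this slightly more concrete, here is one common way to leverage a private corpus without training or fine-tuning a model at all: retrieve the most relevant in-house document first, then ask the model to answer using that text as context (often called retrieval augmentation). This is an illustrative sketch only, not a production design; it assumes OpenAI's embeddings and chat APIs, the precedent files and question are hypothetical, and a real deployment would sit behind a private or Azure-hosted instance with a proper vector store and access controls.

# Illustrative sketch: grounding answers in the firm's own precedents via retrieval.
# The precedent texts and the question are hypothetical placeholders.
import os
import numpy as np
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

precedents = {
    "employment_contract_v3.txt": "Text of the firm's standard employment contract ...",
    "settlement_agreement_v1.txt": "Text of the firm's standard settlement agreement ...",
}

def embed(text: str) -> np.ndarray:
    """Convert text into a vector so that similar documents can be compared."""
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

def most_relevant(question: str) -> str:
    """Return the precedent whose embedding is closest to the question's embedding."""
    q = embed(question)
    scores = {name: float(np.dot(q, embed(body))) for name, body in precedents.items()}
    return precedents[max(scores, key=scores.get)]

question = "What notice period do we normally use for a senior employee?"
context = most_relevant(question)
answer = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the precedent text provided."},
        {"role": "user", "content": f"Precedent:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer["choices"][0]["message"]["content"])

The point is simply that "your corpus plus a general model" can be assembled incrementally; the corpus does the specialising, which is consistent with the observation above that the more specialised the corpus, the more accurate the output.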

Also imagine, just for a moment, the power of content-based businesses once the dust has settled and IP rights have been exercised. LexisNexis have already made inroads into this space, and I have no doubt we will see more in the future.

Please also don't forget the points made in my last MS365 article: just like MS365, Generative AI is a huge leveller. Every firm and in-house team may have the same capability; the only difference may be the corpus on which it is focused. Never before has the playing field been this level, which is a huge opportunity for smaller businesses.

Key Actions

These are some of our basic recommendations – as you might imagine we are evolving these daily:

1.       Try it, experiment and show your people. I am to this day amazed how many people have not done this. As a minimum, team leaders and their people need to see the tool in action and start thinking about how they would use it, and how it could be both a threat and an opportunity for them. Computerworld and McKinsey advocate this approach: Prepare for generative AI with experimentation and clear guidelines | Computerworld.

2.       Engage with your people. This is linked to the above and, to be clear, you really need to think carefully about how you do it. Some will be enthused whereas others will be scared. In doing this you need to have worked out your draft policy and position, and you need to evolve these with your people. Your people will want to know that you have a position and a plan detailing what they are allowed to do.

3.       Addressing the reality. You need to decide whether you are happy for your people to be using this and, if so, how. In-house legal teams need to ensure clear guidance is given to the wider business. Some business users may see this as a short cut to getting legal advice and will be unaware of the challenges detailed in this article; that being said, the core content produced for them could in some cases be of better quality. It is highly likely that you will have employees who have already signed up to terms that you may or may not be happy with. Some firms have issued a blanket ban, while others are considering leveraging a separate unlimited company as a contracting mechanism (i.e. a ringfenced sandpit approach, if you like). Whatever you do, you also need to educate your people about the risks as well as have conversations about potential creative use cases - "Golden Use Cases", as McKinsey might refer to them.

4.       Risk Management. Yes, there are a lot of risks with Generative AI, but there are some phenomenal risk management use cases (e.g. "tell me the top 30 points to get right when drafting an earn out"; "what is missing from this agreement?"). One approach is to allow usage for specific purposes such as this.

5.       Client data and information. In my view, at the moment you should absolutely ban the use of any client information, or any form of proprietary or confidential documentation, within the public tool. Also ban the use of ChatGPT for any external-facing content. You simply will have no idea where the generated content has come from or whether you are breaching anyone else's intellectual property.

6.       Microsoft - stay close to everything they are doing in this area, particularly relating to MS365 and Copilot. The same applies to other key suppliers, but we must acknowledge that Microsoft already hosts the data of many firms and has a huge R&D budget, and so is well placed to succeed in this area.

7.       Supplier roadmaps. Engage with all your suppliers to understand what their view is and how they are leveraging these tools and products. What is their position and where are they going?

8.       Organisational scenarios. Do some analysis of the areas of your business where you feel that generative AI could have a meaningful impact. Run some scenarios and work out what this could mean for restructuring your business if adoption of these tools continues to grow at the current pace.

9.       Product. We have been delighted to see a number of engagements in recent years where law firms have wanted support in developing products. Generative AI presents a huge opportunity in this space. Start that thinking now. Imagine, for example, a virtual contracting product trained on your own corpus.

10.   Competitor analysis. I have always regarded this as a really key area that has not been addressed as much as it could be. If I were running a law firm or a team in a law firm now, I would absolutely want to understand what every one of my competitors and new competitors was doing in this space. This is the time for real agility. What will your competitors be showing your clients on pitches?

11.   No code and dev work. Really think through what can be done in this space and how things could be automated much more quickly. Consider for a moment how, for example, the set-up of a portal or a project room could be generated automatically in the future with no human intervention beyond a yes or no. Which suppliers will be leveraging these tools, and what will it mean for resourcing?

12.   Learning and Development strategy. How will you educate your people on how to use generative AI? As well as the risks and use cases, the prompts you use are key. Slaughter & May have provided some really good examples of how to use the tool to teach, transform, summarise and adopt personas using context and style guidance, and of how to link prompts and iterate beyond the first response (Slaughter & May - Generative AI: Practical Suggestions for Legal Teams).

13.   Learning and Development II. People need to reuse these strategies for training and educating their people more widely. The tool will create gaps in people's education that need to be addressed. Tools like The Professional Alternative will come into their own.

14.   Supplier Engagement/Private Instances. Engage with Microsoft and other key suppliers about private instances of Generative AI. These are not widely available yet, but firms are doing this. Once you have one, experiment with corroborated content from your own corpus. For law firms, sadly, this is not straightforward: they may be under obligations to keep client information secure, they deal with a range of matter types, and unless they are large they may not have enough data of any sort to train the model. They also need to bear in mind that they will never be able to penetrate the opaque nature of the algorithm, and even if they can expose it to a private corpus many of the risk issues will remain. In-house teams, by contrast, will have greater volumes of documentation but won't have the same level of confidentiality issues to navigate.

15.   Business Services. Think about how you could use ChatGPT in areas like marketing for internal purposes (e.g. "create me a list of the 50 largest care companies in the world together with the locations of their headquarters").

16.   Data and Content Hygiene. Going forward, having reliable content that you can access will be key, so start this process now (i.e. a lack of hygiene with file opening will hold you back, as will having conflicting data in multiple systems). Given we live in a data-driven world, now is the time to develop and execute your data strategy. Where is your core content? In a DMS? Can you access it via APIs? If not, what do you need to do to enable this?

17.   Engage with emerging Suppliers. Engage with the emerging suppliers in this market (e.g. Allen & Overy and PwC are engaging with Harvey; others are dealing with Fastcase). There are also many systems in this space, such as ChatSonic, Ernie, Casetext Co-Counsel, Anthropic, AI21, Stability AI, Cohere and Character AI, to mention just a few. If these tools become embedded in Outlook, how will you manage them? Who is monitoring this in the firm? Someone needs to, and they also need to monitor the impact of Generative AI agents that will be able to carry out tasks on other sites.

18.   APIs et al. There is a growing number of new generative AI app companies that build on LLMs and offer APIs (OpenAI itself offers an API too). This means that firms don't have to use ChatGPT or other Gen AI tools directly, which may eliminate some of the risks listed and change the way you use the tools (see the sketch after this list for one way this can work).

19.   Industry Standards. Stay close to key developments such as The Legal Operating System being developed by Daniel Katz.

20.   Consider client and supplier resilience. These technologies could have a material effect on some businesses (both clients and suppliers), and you need to be prepared.

21.   International. For those firms doing international work, carefully monitor the jurisdictions that are banning or restricting the use of generative AI. These currently include Italy, China, Cuba, North Korea, Iran, Syria and Russia.

22.   Regulators. We need to stay very close to guidance and restrictions from our own regulators, as well as the regulators in other industries in which our clients operate. We have already seen the launch of a review into the AI market by the Competition and Markets Authority, and this is likely to be only one of a number of initiatives by regulators globally.

23.   Look around corners. What impact will these tools have in other areas? Will you or your clients be subject to volume attacks, with multiple claims being issued by generative AI now that it is easy to do this? Will you need to develop tools to respond to the content produced by other tools, i.e. generative AI "Claimbots"? If this tool becomes embedded in Outlook, how will you manage it? Will your business need a plan to deal with incoming communications? How will job applications be checked to verify they are real? Will you have new roles such as a Prompt and Corpus Manager?
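
As trailed in point 18 above, here is a minimal sketch of what putting your own API layer between your people and a general-purpose model might look like, so that the firm rather than individual users controls the contracting terms, logging and what may be sent out. It is a sketch under stated assumptions, not a recommended implementation: the policy checks, function names and log format are hypothetical, and the underlying call mirrors the OpenAI example earlier but could equally sit behind Azure OpenAI or another vendor's API.

# Illustrative sketch: a thin internal gateway in front of an LLM API.
# Policy patterns, function names and the log format are hypothetical.
import datetime
import os
import re

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Crude flags for material that must not leave the firm; a real deployment
# would use proper data loss prevention tooling rather than regexes.
BLOCKED_PATTERNS = [r"\bprivileged\b", r"\bclient ref\b"]

def violates_policy(prompt: str) -> bool:
    """Rough check that the prompt does not obviously contain restricted content."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def ask_llm(user_id: str, prompt: str) -> str:
    """Route a prompt through the firm's gateway: check policy, call the model, log usage."""
    if violates_policy(prompt):
        raise ValueError("Prompt appears to contain restricted client material.")
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    # Minimal audit trail so the firm can see who asked what, and when.
    with open("llm_usage.log", "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()}\t{user_id}\t{len(prompt)} chars\n")
    return response["choices"][0]["message"]["content"]

Routing usage through a gateway like this is one way to square a ban on the public tool with giving people legitimate access, and it gives you a single place to apply the guidance in points 3 and 5 above.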

Conclusion

To conclude, we are almost certainly entering a key period of evolution in the business of law. Generative AI, albeit littered with problems, presents both huge threats and huge opportunities that could turn some legal business models on their heads. In my view the upsides for in-house teams are potentially greater: many may use generative AI instead of, or before, instructing lawyers once the risks have been navigated.

Client expectations about how long it takes to deliver work may alter in the future too. The skills, structure and make-up of law firms and in-house teams will need to change. Lawyers will perhaps be put under greater pressure to deliver more matters and outputs in shorter time frames by collaborating with these tools (i.e. "man/woman and machine is normally quicker than man/woman alone"). Whilst navigating these changes we need to be constantly cognisant of bringing our people with us and managing the key risks, as well as developing a strategy to take advantage of the upside we may see in the tools. Content and e-learning businesses, contrary to what many think, may be entering a golden era too, and some published legal texts and knowhow providers may even become law firm competitors. If they already have commentary, the law and precedents online, they have really strong foundations for taking advantage of these emerging technologies.

Addendum - Other helpful announcements that have been published since the date of this article:

Legal Tech Rundown, ILTACON Edition: DLA Piper Launches AI-Based Service, DISCO Unveils AI Timelines and More | Legaltech News (law.com)

Consulting giant McKinsey unveils its own generative AI tool for employees: Lilli | VentureBeat

Introducing the Kelvin Legal DataPack, the Largest Legal Training Dataset — 273 Ventures

AI is Reinventing the Legal Industry (nfx.com)

ChatGPT owner OpenAI to go bankrupt in 2024: Report - Arabian Business

What if Generative AI turned out to be a Dud? (substack.com)

Apple coming up with ChatGPT and Bard rival? Company starts hiring for Generative AI roles - India Today (ampproject.org)

Post | LinkedIn - Microsoft Strategy

Generative AI will upend the professions | Financial Times (ft.com)

Legal innovation and generative AI: Lawyers emerging as ‘pilots,’ content creators, and legal designers (mckinsey.com)

http://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/

Microsoft Copilot - https://www.youtube.com/watch?v=y7wnoFUm8m8

Thomson Reuters announces integration with Microsoft 365 Copilot amid suite of gen AI updates  - Legal IT Insider (legaltechnology.com)

LexisNexis® Announces Launch of Generative AI Platform, Lexis+® AI™

https://www.pepperminttechnology.co.uk/news/peppermint-technology-leverages-microsoft-co-pilot-and-azure-openai-for-enhanced-user-experience-and-productivity

Hyperscale Group are a Technology, Digital and Operational Advisory and Implementation Business with over 25 years of deep market experience. We work for In-House teams and Professional Services Firms all around the world and support them in developing and implementing their strategies. We help our clients to make the right things happen.

Our Experience: Click here - Our Services: Click here - Client Testimonials: Click here

How We Work: Click here

For more information, please contact Dereksouthall@hyperscalegroup.com


Derek Southall