DON’T LET YOUR HEAD EXPLODE ABOUT ARTIFICIAL INTELLIGENCE AS AN IN-HOUSE COUNSEL

Yes, it’s hard, but people (like me!) are here to help you :)

NEED HELP - LET’S TALK!

Artificial intelligence (AI) tools are gaining a significant presence in the day-to-day work of lawyers. Those same lawyers are also increasingly advising on wider use of AI systems in the business – from the development and training of models inside the business to identifying and risk-analysing AI systems in the wider supply chain.

In-house counsel play a key role in supporting the trusted adoption of AI systems, and they must have the compliance knowledge and tools they need to use and advise on every form of AI system in today’s technology landscape.

Using AI brings benefits to the ‘day job’ of in-house counsel:

  • Efficiency gains and cost reduction in areas like document production and review, contract analysis and due diligence

  • Better risk management, thanks to the volume and depth of analysis AI can perform compared with teams of lawyers (in particular, surfacing bigger issues that would ordinarily stay hidden in the ‘weeds’ of lower-value contracts)

  • Enhanced decision-making and strategy development through large-scale data insights

All of these things help lawyers add value to a business beyond the day-to-day of technical/practical legal work.

But every in-house lawyer also needs to think about the law and the wider business and ethical considerations.

The compliance considerations for use of AI in the business become a lot more serious if something goes awry. (In other words, things can get weird pretty quickly!)

  • Based on the operational footprint of the business, in-house counsel must ensure compliance with the various local laws that apply:

    • e.g. the broader EU AI Act, which allocates compliance obligations by type of AI system (prohibited, high-risk, limited-risk) and by role in the AI ecosystem (provider, deployer, distributor, importer) – complete with a GDPR-style fine regime!

    • e.g. the narrower approach of applying existing laws to new uses of AI systems (e.g. the UK approach, where applicable laws include the UK GDPR, the Copyright, Designs and Patents Act 1988, the Equality Act 2010 and the Online Safety Act 2023)

  • Failure to comply can lead to scrutiny by a number of engaged regulators, most notably (in the UK) the Information Commissioner’s Office, Ofcom and the Financial Conduct Authority. (And remember, they all talk to each other.)

  • In-house counsel will need to guide the business through wider ethical and PR issues (in particular around the quality of training data and IPR issues, as well as where AI systems use employee or customer personal data)

SO LIGHTEN THE LOAD AND SPEAK TO ME ABOUT MY AI COMPLIANCE WORKBOOK :)

THANKS FOR READING

NEED HELP - LET’S TALK!
