GETTING TO GRIPS WITH THE EU AI ACT

The EU has (rightly) been paying attention to AI for years now. No surprise it was first out of the gate with a very comprehensive bit of legislation. It’s a gnarly one, so you really do need to pay attention NOW and not leave it all to the last minute like you did with the GDPR (go on, admit it!).

NEED HELP - LET’S TALK!

The EU AI Act is a really chunky document, hundreds of pages long. It is also very technical, both in how it defines AI and in the rules it puts in place. It is trying to create an environment where innovation can thrive AND humans still keep their fundamental rights. Not an easy balance.

Don’t run around like a headless chicken - start with these ten questions (there is also a rough code sketch after the list, if you prefer to think in flowcharts) …

  1. Are you building an AI system or general purpose AI?

    The definitions in the EU AI Act are:

    “machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (AI system)

    “models that display significant generality, are capable of competently performing a wide range of distinct tasks and that can be integrated into a variety of downstream systems or applications” (general purpose AI)

  2. Are you using any products internally or externally which have an AI system component?

    Think about third-party products used internally in your business, as well as AI components that might be part of your own products or services.

  3. Are we based in the EU or is the AI intended for the EU market?

    The AI Act is likely to apply not only if we are located/established in the EU, but also if we make a system or model available on the EU market, or the output produced by the system is used in the EU.

  4. Is the AI excluded from the scope of the AI Act?

    There is a list of things that are specifically excluded from the scope of the AI Act, such as people using AI systems for purely personal, non-professional activities. If our AI activities come under this list, we may not need to comply with the AI Act (but check this carefully).

  5. Is the AI system/model prohibited under the AI Act?

    There is a list of things that are completely banned under the AI Act. If we are doing any of these things, we must phase them out before the relevant deadline. Again, this is important to check carefully: breaching these rules results in the biggest fines.

  6. What role are we playing in the AI supply chain (which ‘actor’ are we classified as under the AI Act)?

    Depending on what we are doing with the AI (for example, are we the ones creating the AI, or are we acquiring and deploying it), we might be a provider (or its authorised representative), deployer, importer, distributor, or product manufacturer. Just to confuse things, in certain situations we may be deemed a provider under the AI Act, even if we initially consider ourselves a deployer, distributor, importer or an authorised representative. The actor that we are classified as will influence our obligations under the AI Act.

  7. If the system isn’t excluded from, or prohibited by, the AI Act – is it considered ‘high-risk’ under the AI Act?

    If the AI is intended to be used as a safety component of a product (or is itself a product) covered by specific EU legislation (such as machinery, toys or medical devices) and is required to undergo a third-party conformity assessment, or if it falls within one of the use cases listed in the AI Act (including critical infrastructure, employment, education and biometrics), then it may be considered ‘high-risk’ under the AI Act. You will also need to ask whether the AI poses a significant risk of harm (even if it is on one of the lists, it may not be caught if it does not pose a significant risk of harm). This is important because the AI Act places its most stringent compliance obligations on high-risk systems.

  8. If the system isn’t excluded from, or prohibited by, the AI Act – does the AI Act consider it to have a transparency risk?

    Certain systems, whether high-risk or not, are considered to pose a transparency risk under the AI Act (for example, chatbots that interact directly with people, or systems that generate ‘deepfake’ content). The obligations placed on these systems are mainly GDPR-like transparency and accountability obligations. (Note that a system can be subject to these requirements as well as the high-risk requirements.)

  9. Now that we know what actor we are, and what category of system we are using, what are the corresponding obligations/restrictions?

    It is crucial to first determine how your business and the AI are classified under the AI Act. Once you have done this, it becomes much clearer which obligations are relevant.

  10. When do we need to comply by (AI Act timeline)?

    The AI Act’s requirements will apply in phases. So, your answers above will determine when you need to comply by (for example, prohibited systems must be phased out within 6 months of the AI Act coming into force).
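
Just to make the flow of those questions concrete for the engineers in the room, here is a minimal Python sketch of the triage. Everything in it (the AIUseCase fields, the Actor enum, the triage function and the example CV-screening tool) is our own illustrative shorthand for the questions above, not terminology or logic taken from the AI Act itself, and it is definitely not legal advice.

from dataclasses import dataclass
from enum import Enum, auto

# Illustrative sketch only - NOT legal advice. The names below are our own
# shorthand for the questions in this post, not terms defined in the AI Act.

class Actor(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()
    PRODUCT_MANUFACTURER = auto()
    AUTHORISED_REPRESENTATIVE = auto()

@dataclass
class AIUseCase:
    """One AI system or general purpose model that we build, buy or deploy."""
    name: str
    in_eu_scope: bool        # Q3: EU-established, EU market, or output used in the EU
    excluded: bool           # Q4: falls within a listed exclusion (e.g. purely personal use)
    prohibited: bool         # Q5: on the list of banned practices
    high_risk: bool          # Q7: safety component / listed use case with significant risk of harm
    transparency_risk: bool  # Q8: e.g. chatbots or AI-generated content
    actor: Actor             # Q6: our role in the supply chain

def triage(use_case: AIUseCase) -> list[str]:
    """Rough first-pass triage mirroring questions 3 to 8 above."""
    if not use_case.in_eu_scope:
        return ["Likely outside the AI Act's scope - keep under review"]
    if use_case.excluded:
        return ["Possibly excluded - verify the exclusion carefully"]
    findings = []
    if use_case.prohibited:
        findings.append("Prohibited practice - plan the phase-out before the deadline")
    if use_case.high_risk:
        findings.append(f"High-risk obligations apply (in our role as {use_case.actor.name})")
    if use_case.transparency_risk:
        findings.append("Transparency obligations apply")
    if not findings:
        findings.append("Minimal risk - document the assessment and keep monitoring")
    return findings

# Example (purely illustrative): a third-party CV-screening tool we deploy
# internally - employment is one of the listed high-risk areas.
hr_tool = AIUseCase(
    name="CV screening assistant",
    in_eu_scope=True, excluded=False, prohibited=False,
    high_risk=True, transparency_risk=False, actor=Actor.DEPLOYER,
)
for finding in triage(hr_tool):
    print(finding)

Running it prints a single finding flagging the high-risk obligations that would sit with us as a deployer. In real life, the mapping from your answers to your obligations is a job for your lawyers, not a script.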

NEED HELP - LET’S TALK!

THANKS FOR READING, WONDERFUL HUMANS
