Cybersecurity blog header

AI Act: Cybersecurity requirements for high-risk AI systems

 

The Artificial Intelligence Act sets out the cybersecurity requirements that high-risk AI systems must meet in the European Union

Few regulations have generated more buzz in recent years than the Artificial Intelligence Act, approved by the European Union this year. This globally pioneering act on the regulation of high-risk AI systems seeks to:

  • Harmonize the legal system across the EU.
  • Protect citizens against improper practices in developing and using this disruptive technology.
  • Encourage research and innovation in a critical area influencing the productive fabric and society.

One of the central elements of the Artificial Intelligence Act is the establishment of a series of cybersecurity requirements for high-risk AI systems. The obligation to comply with these requirements falls on the companies that develop AI systems and on those that market or deploy them. Substantial financial penalties have also been established to ensure compliance.

The Artificial Intelligence Act came into force on August 1, 2024. However, most of its obligations will not apply until August 2, 2026, and some will not have to be fulfilled until 2027. Companies that develop, market or use AI systems therefore have time to adapt to this regulatory framework.

Below, we will break down the cybersecurity requirements for high-risk AI systems that must be considered when developing this technology and throughout its lifecycle.

1. What is a high-risk AI system?

Before we dive into the cybersecurity requirements for high-risk AI systems, we need to be clear about which applications are considered as such under the EU regulatory framework. The regulation establishes two criteria to determine which systems are high-risk.

1.1. European criteria for determining which systems are high-risk

  1. Systems used as safety components of products such as machinery, toys, means of transport (cars, planes, trains, ships…), elevators, radio equipment, medical devices…
  2. Systems that may adversely affect the health, safety and fundamental rights of citizens or substantially influence their decision-making and that operate in these areas:
    • Biometrics.
    • Critical infrastructures (water, electricity, gas, essential digital infrastructures…).
    • Education and vocational training, such as AI applications used to assess learning outcomes or to detect prohibited behavior during exams.
    • Employment and workforce management, for example, human resources AI systems used to recruit workers.
    • Access to and enjoyment of essential public and private services: healthcare benefits, credit, health and life insurance, and emergency services (police, fire department…).
    • Law enforcement. For example, AI systems used to evaluate evidence during a police or judicial investigation, or to profile individuals or assess their personality traits.
    • Migration and border control, such as systems to assess the security or health risk posed by a person wishing to enter an EU state, and tools to examine asylum applications or requests for residence permits.
    • Justice and democratic processes. For example, applications that help courts and tribunals interpret facts and the law, as well as tools designed to influence citizens’ votes.

In addition, if the system is used to profile people, it will always be considered high risk.

2. Accuracy, robustness and cybersecurity: three essential pillars of Artificial Intelligence

The European regulation establishes a range of requirements that high-risk AI systems must meet before going to market, such as having governance practices for the data used to train the models or establishing a risk management system.

This requirements catalog includes ensuring the accuracy, robustness and cybersecurity of high-risk AI.

2.1. Adequate level of accuracy throughout the system’s life cycle

All high-risk AI systems placed on the EU market must be designed and developed to achieve an adequate level of accuracy, robustness and cybersecurity, and they must maintain that level throughout their life cycle.

The AI Act mandates the European Commission to establish how to measure the levels of accuracy and robustness of the systems. To this end, it will have to rely on the collaboration of the organizations that develop this kind of technology and other authorities. As a result of this work, reference parameters and measurement methodologies will have to be established to objectively assess the accuracy and robustness of each high-risk AI system operating in the EU.

In addition, this pioneering standard establishes that the instructions for using a high-risk AI system must state its accuracy level and the parameters to measure it.
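
While the official benchmarks are still to be defined, the hypothetical sketch below illustrates the kind of measurement involved: it evaluates a toy classifier’s accuracy on a held-out test set, plus a simple robustness proxy (accuracy under small input perturbations). The model, data and noise scale are all invented for the example.

```python
# Minimal sketch: measuring accuracy and a simple robustness proxy for a
# classifier. Model, data and thresholds are invented placeholders; the AI
# Act leaves the actual benchmarks and methodologies to the Commission.
import numpy as np

rng = np.random.default_rng(0)

def predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for a high-risk AI model: a fixed linear classifier."""
    weights = np.array([0.8, -0.5, 0.3])
    return (x @ weights > 0).astype(int)

# Hypothetical held-out evaluation set, with 5% label noise as ground truth.
X_test = rng.normal(size=(1000, 3))
flip = rng.random(1000) < 0.05
y_test = np.where(flip, 1 - predict(X_test), predict(X_test))

accuracy = float(np.mean(predict(X_test) == y_test))

# Robustness proxy: how much accuracy degrades under small input noise.
X_noisy = X_test + rng.normal(scale=0.1, size=X_test.shape)
robust_accuracy = float(np.mean(predict(X_noisy) == y_test))

print(f"accuracy={accuracy:.3f}, accuracy under perturbation={robust_accuracy:.3f}")
```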

2.2. System robustness and failure prevention

When developing and marketing high-risk AI systems, it must also be ensured that they remain robust throughout their life cycle, avoiding errors, failures and inconsistencies that can arise:

  • In the AI systems themselves.
  • In the environment in which they are used, especially due to their interaction with humans or other systems.

To prevent incidents and achieve robust systems, the regulation indicates that the following should be implemented:

  • Technical redundancy solutions, such as continuous backups (a minimal fail-safe pattern is sketched after this list).
  • Prevention plans against AI system failures.
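
As a loose illustration of what a technical redundancy solution can look like in code, the sketch below wraps a primary model with a simple, auditable fail-safe that takes over when the primary fails or is unsure. Every name and threshold here is invented for the example; the regulation does not prescribe any particular design.

```python
# Hypothetical sketch of a redundancy pattern: if the primary model fails or
# returns a low-confidence result, fall back to a simpler, well-tested rule.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float
    source: str  # "primary" or "fallback"

def primary_model(features: dict) -> Prediction:
    # Stand-in for a complex high-risk AI model that may fail at runtime.
    score = 0.9 if features.get("income", 0) > 30_000 else 0.4
    return Prediction("approve" if score > 0.5 else "review", score, "primary")

def fallback_rule(features: dict) -> Prediction:
    # Deliberately simple, auditable fail-safe logic: route to a human.
    return Prediction("review", 1.0, "fallback")

def predict_with_redundancy(features: dict, min_confidence: float = 0.6) -> Prediction:
    try:
        result = primary_model(features)
        if result.confidence >= min_confidence:
            return result
    except Exception:
        pass  # in production this failure would be logged and reported
    return fallback_rule(features)

print(predict_with_redundancy({"income": 50_000}))  # primary path
print(predict_with_redundancy({"income": 10_000}))  # low confidence -> fallback
```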

Also, the regulation considers one of the key characteristics of many AI systems: they continue to learn throughout their life cycle. This implies that it is necessary to:

  • Develop them in a way that minimizes the risk that biased output results influence the input data of the system itself, causing feedback loops.
  • Implement measures to mitigate any feedback loops that do arise (a toy simulation follows this list).
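
The toy simulation below shows how such a feedback loop can amplify an initial bias: two districts have identical true incident rates, but a model that is retrained on data collected according to its own predictions drifts ever further toward its starting tilt. All figures are invented for illustration.

```python
# Toy feedback-loop simulation: resources follow the model's belief, incidents
# are only observed where resources go, and a naive update that ignores this
# sampling bias amplifies the initial tilt, even though the districts are
# actually identical.
import numpy as np

true_rate = np.array([0.5, 0.5])   # both districts have the same true rate
belief = np.array([0.6, 0.4])      # initial model belief (slightly biased)

for step in range(6):
    patrols = belief / belief.sum()    # allocate attention by current belief
    evidence = patrols * true_rate     # incidents seen only where attention goes
    belief = belief * evidence         # naive update: prior x biased evidence
    belief = belief / belief.sum()
    print(f"step {step}: belief = {belief.round(3)}")
```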

2.3. Resistance to cyber-attacks

Within the cybersecurity requirements for high-risk AI systems, the European regulation pays special attention to cyberattacks that can be launched against this key technology for the future of business and society. Thus, this new regulatory framework establishes that systems must be able to resist attacks that seek to exploit their vulnerabilities to:

  • Alter the way they are used.
  • Manipulate the output results they generate.
  • Undermine their ordinary operation.

For this reason, technical solutions must be implemented, and cybersecurity services must be available to prevent incidents, detect them early, respond to them and restore normality.

In addition, the regulation emphasizes the different types of specific cyber-attacks that can be launched against high-risk AI systems and should, therefore, be taken into account when designing the cybersecurity strategy:

  • Data poisoning.
  • Model poisoning.
  • Model evasion and adversarial examples (a minimal sketch follows this list).
  • Attacks on data confidentiality.
  • Attacks that seek to exploit flaws in models.
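
To make the model-evasion category concrete, here is a minimal FGSM-style sketch against a toy logistic model: a small, targeted perturbation of the input flips the classification. The weights, input and perturbation budget are invented for the example; real attacks are mounted against trained production models.

```python
# Minimal sketch of a model-evasion attack (FGSM-style) on a toy logistic
# model, using plain numpy. All values are invented for illustration.
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # toy model weights
b = 0.1

def predict_proba(x: np.ndarray) -> float:
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

x = np.array([0.5, -0.2, 0.3])   # legitimate input, scored as positive
print(f"clean score:       {predict_proba(x):.3f}")       # ~0.81 -> positive

# FGSM: step against the gradient of the score w.r.t. the input. For a
# logistic model, the sign of that gradient is simply sign(w).
epsilon = 0.6                     # attacker's perturbation budget
x_adv = x - epsilon * np.sign(w)  # nudge each feature to lower the score
print(f"adversarial score: {predict_proba(x_adv):.3f}")   # ~0.34 -> flipped
```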


3. Who must meet the cybersecurity requirements for high-risk AI systems?

The AI Act states that providers of AI systems must ensure that their applications comply with the cybersecurity requirements for high-risk AI systems.

In addition, providers are responsible for:

  • Ensuring that the systems they develop undergo a conformity assessment, which verifies that they comply with all the requirements of the regulation before being placed on the market or put into service in the EU.
  • Drawing up the EU declaration of conformity, which describes the system’s key features and states that all the requirements for high-risk AI systems, including those directly related to cybersecurity, have been met.

In addition, importers of AI systems developed by other companies outside the EU are required to:

  • Check that the system meets the requirements of the regulation.
  • Ensure that the system’s conformity assessment has been carried out.
  • Ensure that the system is accompanied by a copy of the EU declaration of conformity provided for in the regulation before it is placed on the market.

Along the same lines, distributors will have to ensure that the EU declaration of conformity accompanies the system.

Finally, those responsible for deploying high-risk AI systems, i.e. companies that implement this technology in their organizations, must use them following the instructions for use, have them monitored by trained personnel and ensure that they function as intended and without generating any risk.

All stakeholders are obliged to report serious incidents involving AI systems to the authorities.

4. Multimillion-euro fines for non-compliance with cybersecurity requirements for high-risk AI systems

What happens if cybersecurity requirements for high-risk AI systems are not met?

The regulation leaves it to the member states to approve their respective penalty regimes. However, it does set ceilings for the administrative fines that can be imposed on providers, importers, distributors and deployers of high-risk AI systems:

  • Up to €15 million or 3% of the company’s total worldwide annual turnover, whichever is higher, for breaching its obligations as a provider, importer, distributor or deployer of an AI system. So, if a developer markets a system that does not comply with the cybersecurity requirements for high-risk AI systems, it faces a fine of up to this amount.
  • Up to €7.5 million or 1% of global turnover, whichever is higher, for submitting inaccurate or incomplete information to the authorities.
  • For SMEs and startups the same ceilings apply, but their cap is the lower of the two figures, the fixed amount or the percentage of turnover (see the sketch after this list).
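
The cap logic described above can be summarized in a few lines of code; the sketch below is only a reading aid for the two rules (higher figure for large companies, lower figure for SMEs and startups), not legal advice, and the example turnover figures are invented.

```python
# Hypothetical sketch of the fine-cap logic: large companies face the higher
# of the fixed amount and the turnover percentage, while SMEs and startups
# face the lower of the two.
def fine_cap(turnover_eur: float, is_sme: bool,
             fixed_eur: float = 15_000_000, pct: float = 0.03) -> float:
    by_turnover = pct * turnover_eur
    return min(fixed_eur, by_turnover) if is_sme else max(fixed_eur, by_turnover)

# A large provider with €2bn turnover: 3% of turnover (€60m) exceeds €15m.
print(f"large company cap: €{fine_cap(2_000_000_000, is_sme=False):,.0f}")
# An SME with €10m turnover: the lower figure (3% = €300k) applies.
print(f"SME cap:           €{fine_cap(10_000_000, is_sme=True):,.0f}")
```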

5. Cybersecurity services to protect cutting-edge technology

The European Artificial Intelligence Act imposes specific cybersecurity requirements on high-risk AI systems. These requirements add to the security and resilience obligations included in other key EU regulations, such as the NIS2 Directive or the DORA Regulation.

In this way, the European Union is spotlighting the importance of ensuring that high-risk AI systems are accurate and robust while remaining resilient against cyber-attacks.

Therefore, all companies developing AI systems must have advanced cybersecurity services tailored to the characteristics of this technology that help them to:

  • Assess whether systems comply with European cybersecurity regulations.
  • Detect and remediate vulnerabilities from the design phase and throughout their life cycle.
  • Improve cyber-attack detection and response capabilities.
  • Safeguard models and the data they use.
  • Ensure the correct operation of high-risk AI systems.
  • Develop instructions for use that take security into account, so that companies deploying AI systems do not operate them insecurely.
  • Secure the AI supply chain.
  • Avoid multimillion-euro fines and unquantifiable reputational damage.