
EU AI Act: What companies need to know now about the new AI law

Yvonne Wicke | 30.04.2025

The most important facts in brief

The EU AI Act is the world’s first comprehensive law regulating artificial intelligence. It aims to create uniform standards for the safe and responsible use of AI systems within the European Union and follows a risk-based approach that divides AI applications into four risk classes – from minimal to unacceptable risk. Depending on the classification, specific obligations apply to providers and users of AI, particularly with regard to transparency, documentation, human oversight and data quality. Companies must also ensure that their employees have sufficient AI competence.

The AI Act is relevant not only for European companies, but also for international providers whose AI systems are used in the EU. Violations of the regulation can be punished with fines of up to €35 million or 7% of annual global turnover, whichever is higher.

Despite all the regulatory requirements, the EU AI Act also offers an opportunity: it creates legal certainty, strengthens trust in AI technologies and can act as a catalyst for innovation and competitiveness in the European AI sector.

Introduction to the EU AI Act

Objective and significance of the new EU regulation

With the EU AI Act, the European Union is setting a global milestone in the regulation of artificial intelligence. The aim of the law is to create uniform framework conditions for the development, use and marketing of AI systems – always in line with fundamental European values such as data protection, transparency and the rule of law. This should not only provide companies with clear guidelines, but also a reliable set of rules that combine innovation with responsibility.

The regulation addresses a key challenge: while AI applications are developing rapidly, previous legal regulations have lagged far behind. The AI Act closes this gap by formulating binding requirements – differentiated according to the risk posed by an application.

Role of the European Commission and the rule of law

The European Commission plays a key role in the implementation and further development of the EU AI Act. It not only defines the technical standards, but also monitors compliance with the regulations by the member states. At the same time, a new European body, the “AI Board”, is responsible for the coordination and consistency of application.

This centralized control is intended to ensure that the AI Regulation not only exists on paper, but is also implemented effectively. Particular emphasis is placed on the protection of fundamental rights, for example with regard to freedom from discrimination, freedom of expression and human dignity – cornerstones that every AI development in the EU must take into account.

Do you need support?

Arrange a free consultation with us.

Arrange a consultation

High-risk AI systems: Definition, examples and requirements

High-risk AI systems are at the heart of the EU AI Act, as they are used in particularly sensitive areas – where decisions with far-reaching consequences are made. These include personnel selection, credit checks, medical diagnoses and biometric identification. In these cases, companies must not only expect higher regulatory requirements, but also assume special responsibility in order to avoid abuse, discrimination or incorrect decisions.

Legislation obliges companies to comply with a clearly defined catalog of measures:

✅ Use of high-quality data to avoid bias and discrimination

✅ Technical robustness, such as protection against manipulation and failures

✅ Transparency of functionality through comprehensible documentation and disclosure

✅ Human supervision to be able to check and correct automated decisions at any time

✅ Duty to inform users when they interact with such an AI system

Only those who consistently implement these requirements are allowed to use or sell high-risk AI systems on the European market. For companies, this means that the technical requirements are increasing – but also the opportunity to stand out from the competition through quality, safety and ethical conduct.

Requirements for companies: What specifically needs to be done

With the EU AI Act coming into force, companies are faced with the task of realigning their AI strategy not only in terms of technology, but also from a regulatory perspective. The requirements apply not only to developers of AI systems, but also to companies that use AI in their daily practice – especially in the high-risk area. The higher the potential risk, the more extensive the obligations.

Companies must adapt their internal structures, processes and systems in a targeted manner in order to meet the new legal requirements. The challenge here is not just the technical implementation, but also the establishment of cross-functional AI governance.

Technical and organizational obligations

Regulation requires a comprehensive management system for the responsible use of AI. Technical security, traceability and data integrity must be taken into account as well as internal processes and roles.

The central organizational measures include:

  • The appointment of persons responsible for the operation and monitoring of AI systems
  • The establishment of internal control and reporting procedures in the event of system failures or misconduct
  • The clear allocation of responsibilities along the entire value chain
  • The integration of data protection, IT, legal and compliance into a uniform governance model

This structure enables companies to respond to regulatory requirements at any time and strengthen the trust of customers, business partners and supervisory authorities.

Documentation, transparency and human control

Another key aspect is the obligation to provide detailed documentation. Companies must be able to demonstrate how their AI systems work, what data they are based on and what decision-making logic is used. This traceability is key for internal audits as well as external audits.

In addition, it must be ensured that decisions made by AI systems remain controllable by humans. Especially in sensitive application areas, it is crucial that automated processes can be stopped or adjusted – for example, in the event of incorrect output or unexpected behavior.

Training obligations and competence development

The introduction of AI systems brings with it new requirements in terms of expertise within the company. The AI Act obliges companies to build up the necessary expertise internally – not only among development teams, but also among project managers, compliance officers, data protection officers and managers.

Only if everyone involved has a basic understanding of the legal, technical and ethical aspects of AI can safe and compliant use be guaranteed.

Overview of the risk classes:

  • Unacceptable risk – Examples: social scoring, psychological manipulation, real-time biometric monitoring. Obligation for providers: prohibition of development, provision and use.
  • High risk – Examples: applicant selection, credit assessment, medical diagnostics. Obligations for providers: conformity assessment, documentation & logging, transparency obligations, human supervision.
  • Limited risk – Examples: chatbots, text and image generators. Obligation for providers: transparency towards users (reference to AI use).
  • Minimal risk – Examples: spam filters, product suggestions. No special obligations; voluntary self-regulation recommended.
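The mapping from risk class to provider obligations above can be sketched as a simple lookup. This is purely an illustration of the table’s structure – the function and variable names are hypothetical, and real classification always requires a legal assessment of the concrete use case:

```python
# Illustrative sketch: the four EU AI Act risk classes and the core
# obligations listed in the table above. Names are hypothetical; this is
# not a legal classification tool.

RISK_CLASS_OBLIGATIONS = {
    "unacceptable": ["prohibition of development, provision and use"],
    "high": [
        "conformity assessment",
        "documentation & logging",
        "transparency obligations",
        "human supervision",
    ],
    "limited": ["transparency towards users (reference to AI use)"],
    "minimal": ["no special obligations; voluntary self-regulation recommended"],
}

def obligations_for(risk_class: str) -> list[str]:
    """Return the obligations listed in the article for a given risk class."""
    try:
        return RISK_CLASS_OBLIGATIONS[risk_class]
    except KeyError:
        raise ValueError(f"unknown risk class: {risk_class!r}")
```

A call such as `obligations_for("high")` then yields the four high-risk obligations from the table, while an unknown class raises an error – mirroring the point that every system must first be assigned to one of the four classes.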

Impact on innovation and the economy

Opportunities for Europe and start-ups

The EU AI Act is often viewed critically as a bureaucratic obstacle to innovation – yet the regulation has great economic potential. Because what it actually creates is a reliable and uniform legal framework that offers companies in the EU clear guidance. For innovative companies, especially start-ups, this means predictability, legal certainty and clear guidelines for responsible AI development.

With the AI Act, Europe is strategically positioning itself as a global pioneer for ethical and safe AI. Companies that adapt to the requirements at an early stage can secure a significant competitive advantage – for example through faster market approval, greater trust among users and better access to funding. An international comparison clearly shows that a structured regulatory framework can become a locational strength.

Challenges for the development and use of AI solutions

Despite the advantages, the regulation also poses challenges: The implementation effort – especially for high-risk AI – is considerable. Companies must combine technical, legal and organizational expertise, which is particularly challenging for smaller players. In addition, ongoing technical change makes sustainable implementation more difficult: systems must be continuously tested, adapted and documented.

The market for AI services and certifications is also still in its infancy. Many practical questions – such as the specific interpretation of the regulations or national implementation by the member states – are still unanswered. This leads to uncertainty among companies as to when which obligations actually apply.

Strategies for compliance and innovation

In order to turn challenges into opportunities, companies should act strategically now. This includes:

  • Integrating regulatory requirements into product development processes at an early stage
  • Close cooperation between technology, law, data protection and management
  • Building partnerships with specialized consultancies, certification bodies and technology providers
  • Continuous monitoring of regulatory updates and technical standards

Did you already know?
The EU AI Act is deliberately formulated to be open to all technologies. This means that companies can continue to develop innovations – as long as they comply with the defined protection mechanisms. The EU is thus sending a clear signal: AI yes – but safe, controlled and in line with fundamental rights.


How to get started with AI regulation

The EU AI Act may seem complex at first glance – but with a clear strategy, the introduction can be structured and efficient. The key is not to wait and see, but to take active action. Companies that begin implementation at an early stage not only secure regulatory compliance, but also valuable competitive advantages.

5 steps to implementation in your company

A practical approach for successfully dealing with the new requirements comprises the following steps:

  1. Take stock
    Identify all deployed and planned AI systems – including third-party solutions and individual developments.
  2. Carry out a risk assessment
    Classify your AI applications according to the four risk levels of the EU AI Act and analyze their impact on users and business processes.
  3. Establish compliant structures
    Implement internal processes for documentation, quality assurance, reporting systems and human supervision.
  4. Train employees
    Develop training concepts for the legal, ethical and technical qualification of your teams – adapted to roles and responsibilities.
  5. Integrate monitoring and adaptation
    Establish continuous reviews and updates of AI systems as well as active monitoring of regulatory developments at EU level.
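Steps 1 and 2 – stocktaking and risk assessment – can be supported by a simple internal register of AI systems. The following is a minimal sketch under the assumption that such a register is kept in code; all field and function names are illustrative, not prescribed by the EU AI Act:

```python
from dataclasses import dataclass

# Hypothetical sketch of an internal AI system register supporting
# steps 1 (stocktaking) and 2 (risk assessment). Field names are
# illustrative assumptions, not requirements of the EU AI Act.

RISK_LEVELS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str            # one of RISK_LEVELS, per internal assessment
    third_party: bool = False  # third-party solution vs. in-house development
    documented: bool = False   # documentation per step 3 in place?

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level!r}")

def needs_conformity_assessment(record: AISystemRecord) -> bool:
    """High-risk systems require a conformity assessment before use."""
    return record.risk_level == "high"

# Example register covering a third-party tool and an in-house chatbot.
register = [
    AISystemRecord("CV screening tool", "applicant selection", "high", third_party=True),
    AISystemRecord("Support chatbot", "customer service", "limited"),
]
high_risk = [r.name for r in register if needs_conformity_assessment(r)]
```

Such a register makes step 5 easier as well: once every system is recorded with its risk level and documentation status, continuous reviews become a query rather than a search.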

Role of governance, standards and internal control

A key element for sustainable success is well thought-out AI governance. This includes not only compliance with legal standards, but also the internal anchoring of responsibilities, ethical guidelines and standardized evaluation processes. Companies should set up appropriate committees, GRC structures (governance, risk, compliance) or AI steering committees at an early stage.

The interplay between technology and business ethics in particular makes it clear that the AI of the future is not just a question of innovation, but of value orientation.

Support from external partners and tools

Hardly any company will be able to implement the AI Act alone. This makes it all the more important to rely on specialized partners – for example from the following areas:

  • Compliance consulting
  • Technical audits
  • Data protection and ethics advice
  • Tools for automated documentation and monitoring

This collaboration not only saves time and resources, but also strengthens the professional quality of implementation – and increases the likelihood of remaining compliant and innovative in the long term.

Frequently asked questions about the EU AI Act

1. What does the AI Regulation regulate and when does it come into force?

The Regulation on Artificial Intelligence (EU AI Act) defines clear rules for the development and use of AI systems in Europe. It aims to strengthen trust, security and fundamental rights. The regulation entered into force in August 2024 – further obligations will apply in stages until 2027.

2. Which companies are affected by the EU AI Act?

All companies – from corporations to start-ups – that offer, integrate or use AI systems are obliged to comply with the requirements. There are simplified procedures for SMEs and start-ups, but no substantive exceptions.

3. What requirements apply to general-purpose AI systems?

AI systems with a general purpose, such as large language or image models, are subject to specific transparency, documentation and risk management obligations. This also applies to AI providers whose models are reused by third parties.

4. What do providers and operators have to implement in concrete terms?

Providers and operators must, among other things:

  • determine the risk class of each AI system,
  • implement technical and organizational measures,
  • maintain comprehensive system documentation,
  • regularly check compliance with the rules.

5. What does a practical implementation of the requirements look like?

Companies should establish an internal AI register or governance structure to clarify responsibilities. This includes a summary of all systems used, their classification and the associated security and control mechanisms.

6. What role does the German government play in the implementation?

The German government supports companies by providing information, training and promoting innovation in the field of AI. At the same time, it is responsible for national monitoring of compliance with the regulation in Germany.

7. What happens to sensitive data such as photos or biometric information?

Particularly strict regulations apply to AI applications that work with photos, images or personal data. These applications must be clearly labeled – a visible sign of transparency and trustworthiness.
