Artificial intelligence (AI) is increasingly being used in the healthcare sector and the banking industry, whether for evaluating image data and supporting medical decisions, or for fraud prevention and managing large volumes of data. The technology has the potential to make processes more efficient and reduce the workload of specialists. At the same time, it is being used in one of the most sensitive areas of all: the handling of patient and financial data.
The central challenge lies in the tension between technological progress and still-unclear regulatory requirements. While development is progressing rapidly, standards and guidelines are still being drawn up. Since the EU Artificial Intelligence Act (EU AI Act for short) came into force, it has been certain that the use of artificial intelligence will be regulated in the future. The EU AI Act entered into force on August 1, 2024, and its individual provisions become applicable and enforceable in stages. Some key provisions, for example the ban on AI systems posing an unacceptable risk, apply from February 2, 2025, while most other provisions become binding after two years, i.e. from August 2, 2026. Obligations for certain high-risk AI systems only become fully applicable after three years, i.e. from August 2, 2027.
Organizations are therefore faced with the question: How can AI be integrated responsibly today without being surprised by new requirements later?
Initial guidance, (still) non-binding
Authorities such as the German Federal Office for Information Security (BSI) and international initiatives have begun to present catalogs of criteria for the secure use of AI. With the OWASP AI Security and Privacy Guide, OWASP also provides approaches to counter typical vulnerabilities of AI systems. The “OWASP AI Testing Guide” provides practical test methods and recommendations so that organizations can put AI solutions into production in a trustworthy, secure and verifiable manner.
However, a clear, mandatory testing framework does not yet exist. In Germany, responsibility for AI supervision is to be taken on by existing authorities, depending on the area in which the AI systems are used:
- The Federal Office for Information Security (BSI) is taking over IT security tasks on a transitional basis and can approve certain high-risk AI systems until a long-term body is established.
- The Federal Financial Supervisory Authority (BaFin) is responsible for high-risk AI in the financial sector, for example at banks and insurance companies.
- Data protection authorities at federal and state level, the Federal Cartel Office and supervisory bodies for media and youth protection are involved in their respective areas of responsibility.
- In addition, a competence center (KoKIVO) will be created at the Federal Network Agency for cross-sector coordination and as a central point of contact.
- The Federal Network Agency is also responsible for setting up AI regulatory sandboxes ("AI real-world laboratories") and an AI service desk for market participants.
This creates a “mosaic” of different authorities around the Federal Network Agency, with the respective specialist authorities remaining responsible for supervision and complaints management in their sector.
This puts organizations in a grey area: there are established certification procedures for traditional software solutions, but there are as yet no clear guidelines for AI. However, they will come in the foreseeable future, particularly through the national implementation of the EU AI Act in Germany.
Why patient and financial data is particularly sensitive
Health and financial data are among the most sensitive categories of information. Incorrect processing can have legal consequences and a lasting impact on trust.
In the context of health data, AI systems are often based on extensive training data. This raises pressing questions:
- Where does the data come from?
- Is it representative and up to date?
- How can discrimination or bias be avoided?
- Who is responsible for incorrect results?
If AI is embedded in decision-making processes, it is not enough for it to work reliably; it must also be verifiable, explainable and comprehensible. In a medical context this applies, for example, to making a diagnosis; in a banking context, to deciding whether to grant a loan.
Between opacity and traceability
Many AI models are currently considered difficult to interpret; they appear to act non-deterministically and deliver results without their decision paths being fully traceable. For use in safety-critical environments, however, the transparency of decision paths is a key challenge.
Approaches for greater transparency range from explainable AI (XAI) methods to supplementary tests that check models for robustness and consistency after the fact. Such measures are not yet mandatory, but they can make a decisive contribution to building trust.
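What such a supplementary test can look like is illustrated by the following minimal sketch in Python: a perturbation-based consistency check that measures how stable a model's predictions remain under small input changes. The model, data, noise level and any acceptance threshold are hypothetical examples, not a prescribed test procedure.

```python
# Minimal sketch of a post-hoc consistency check (illustrative only):
# a model's predictions should not flip under small, plausible input
# perturbations. Model, data and threshold are hypothetical examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins for a production model and evaluation data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def consistency_rate(model, X, noise_scale=0.01, n_trials=20, seed=0):
    """Share of samples whose predicted class stays stable under small Gaussian noise."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
        stable &= (model.predict(perturbed) == baseline)
    return stable.mean()

rate = consistency_rate(model, X)
print(f"Prediction stability under perturbation: {rate:.1%}")
# A documented threshold (e.g. at least 95 %) could serve as an acceptance criterion in an audit.
```

Such a check does not explain individual decisions, but it produces a documented, repeatable measurement that can be reported alongside XAI analyses.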

Preparation instead of waiting
A common mistake is to wait for binding regulations. Experience from other regulated areas shows that those who only react once requirements become mandatory face significantly more work and risk missing set deadlines. The regulation of critical infrastructures is a clear example: when operators of critical infrastructure first had to demonstrate compliance with the state of the art under Section 8a BSIG in 2018, many were not sufficiently prepared.
Organizations can already take measures today to be prepared:
- Documentation of data flows and training processes facilitates subsequent audits; a machine-readable record, as sketched after this list, is one possible form.
- Interdisciplinary cooperation between medicine, law, compliance and technology prevents gaps in the approach.
- Planning security principles in from the outset increases quality.
- Early alignment with existing criteria catalogs creates structures that can be adapted later.
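As an illustration of the first point, the following minimal sketch shows what machine-readable documentation of a training run could look like. The schema, field names and values are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of machine-readable training-run documentation (illustrative;
# field names and values are assumptions, not a prescribed schema).
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    model_name: str
    model_version: str
    training_data_sources: list[str]      # provenance of the training data
    data_snapshot_date: str                # how current the data is
    preprocessing_steps: list[str]         # documented data flow
    evaluation_metrics: dict[str, float]   # results relevant to later audits
    responsible_contact: str               # accountability for the results
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = TrainingRunRecord(
    model_name="fraud-detection",          # hypothetical example
    model_version="1.4.0",
    training_data_sources=["transactions_2023_q1", "transactions_2023_q2"],
    data_snapshot_date="2023-06-30",
    preprocessing_steps=["deduplication", "pseudonymization", "feature scaling"],
    evaluation_metrics={"auc": 0.93, "false_positive_rate": 0.02},
    responsible_contact="model-governance@example.org",
)

# Stored alongside the model artifact, such a record supports later audits.
print(json.dumps(asdict(record), indent=2))
```

Kept under version control next to the model artifact, such records make it much easier to answer the provenance and responsibility questions raised above when an audit eventually takes place.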
Lessons from other audit environments
There have been phases in the past in which technical development outpaced regulation. Companies that engage early with emerging requirements such as the EU AI Act, document requirements and measures, clarify responsibilities and establish processes can later implement new obligations with less friction.
Outlook: From the exception to a matter of course
It is clearly foreseeable that the use of AI will continue to increase in many areas and that regulatory authorities will specify testing requirements. Transparency, documentation and traceable processes will then no longer be voluntary, but a prerequisite for users, operators and manufacturers of AI-based solutions.
Organizations that create structures now show that technological development and diligence are not mutually exclusive. This reduces risks and cuts costs in the subsequent testing process.
The use of AI in handling sensitive data opens up opportunities, but requires particular care. As long as there are no binding audit frameworks, organizations must take responsibility themselves. SRC's experience from established audit environments shows that preparation is crucial. The question is not whether audits will come, but when, and whether you are prepared for them.
How can SRC support manufacturers and users of soon-to-be-regulated AI today?
SRC offers companies affected by increasing AI regulation two central services to help them prepare for the new requirements at an early stage:
1. Alignment of the AI management system with future regulatory requirements
SRC supports the systematic analysis and adaptation of existing AI management systems to the requirements of the EU Artificial Intelligence Act and relevant standards such as ISO/IEC 42001, identifying compliance gaps and developing measures for risk minimization, documentation and quality assurance. The aim is to provide reliable proof that the company develops, uses or distributes AI systems responsibly and in compliance with the regulations.
2. Technical testing of AI systems
SRC offers expert technical audits and tests of AI applications in order to identify risks as well as data protection and security gaps at an early stage. This includes functionality and robustness tests, bias checks, transparency analyses and the testing of data sets and training processes. This technical expertise ensures that AI systems comply with regulatory requirements and are trustworthy.
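By way of illustration, the following minimal sketch shows one simple form a bias check can take: comparing approval rates of a hypothetical credit-decision model across a protected attribute (demographic parity). The data, attribute and any threshold are assumptions for illustration, not SRC's actual test procedure.

```python
# Minimal sketch of a simple bias check (illustrative only): comparing approval
# rates of a hypothetical credit model across a protected attribute.
# The data, groups and any threshold are assumptions, not regulatory requirements.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model outputs: 1 = loan approved, 0 = rejected
predictions = rng.integers(0, 2, size=1000)
# Hypothetical protected attribute (e.g. two demographic groups)
group = rng.integers(0, 2, size=1000)

rate_a = predictions[group == 0].mean()
rate_b = predictions[group == 1].mean()
demographic_parity_diff = abs(rate_a - rate_b)

print(f"Approval rate group A: {rate_a:.2%}")
print(f"Approval rate group B: {rate_b:.2%}")
print(f"Demographic parity difference: {demographic_parity_diff:.2%}")
# A documented, justified limit for this difference could be part of an audit report.
```

In practice, a full bias assessment would cover several fairness metrics and real evaluation data; the sketch only shows the kind of quantitative evidence such a check produces.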
These integrated services are aimed at manufacturers, developers and operators of AI solutions alike. SRC enables companies to implement regulatory obligations proactively and to exploit market opportunities with secure, legally compliant AI products, so that users and manufacturers can face the challenges of AI regulation with confidence today.





