Artificial intelligence is playing an increasingly important role in the financial sector, and its use is likely to be regulated in the future; the EU AI Act already provides initial indications. With the growing range of possible applications, expectations of transparency, security and responsible use are rising as well. To create a framework for this, the German Federal Office for Information Security (BSI) published a criteria catalog for testing AI systems in the financial sector in May. The catalog describes requirements for products, processes and organizations that could form an important basis for future audits.
The significance of this development lies above all in the sensitivity of the financial sector: decisions on lending, trading or risk assessment must not only be economically efficient but also comprehensible, non-discriminatory and secure, as BaFin requires. AI systems must therefore not only function reliably but also comply with regulatory principles such as traceability, fairness, robustness and data protection.
The BSI criteria catalog
The catalog covers a wide range of checkpoints that go beyond purely technical aspects. In addition to questions of IT security and robustness, governance structures, process flows and organizational framework conditions are also addressed. This is not just about the technical functionality of AI systems, but also about embedding them in secure and traceable structures.
Some of the criteria can be checked on the basis of documents, for example proof of model transparency or the data used. Other points require technical tests, such as resistance to attacks. The catalog makes it clear that the assessment of AI is complex and requires both organizational and technical expertise.
This comprehensive approach is also reflected in BaFin's supervisory principles, which formulate similar requirements for transparency, risk management, data-protection-compliant use and the involvement of human decision-makers. The BSI catalog extends these principles with specific technical and organizational test methods that are particularly relevant for credit institutions seeking to ensure the safe and responsible use of AI.
Current developments in the market
At the same time, ISO/IEC 42001 is becoming established as an international standard for management systems in the field of artificial intelligence. The standard describes how companies can set up processes and structures so that AI is used in a traceable, secure and compliant manner. The first certifications have already been issued, including to international IT service providers and companies in Germany. These examples make it clear that the requirements for AI governance are becoming more stringent not only nationally but worldwide.
BaFin, the BSI and the international standard ISO/IEC 42001 pursue a common goal: the responsible, secure and transparent use of artificial intelligence in the financial sector. There are clear parallels in their approaches and requirements.
All three documents are based on a holistic approach that encompasses the entire life cycle of AI systems – from development and data basis to implementation and operation through to ongoing monitoring and adaptation. They call for effective management and end-to-end governance that define clear responsibilities and control structures for AI. The focus here is on embedding AI in the corporate strategy and risk management in order to systematically address technical, ethical and legal risks.
Challenges for companies
The three sources make it clear that financial companies need to develop an integrated approach to the use of AI systems. Technical challenges such as ensuring model robustness, avoiding bias and guaranteeing transparency must be dovetailed with organizational requirements such as clear responsibility structures and an effective governance framework. The establishment of specialized processes for the development, validation, ongoing monitoring and adaptation of AI systems is also key.
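One of the technical challenges named here, avoiding bias, can be made concrete with a very simple check. The following sketch is illustrative only: the group names, the sample outcomes and any threshold a company might apply are assumptions, not requirements taken from the BSI catalog or the BaFin principles. It computes the demographic-parity gap, i.e. the largest difference in approval rates between groups, for a hypothetical credit-decision model:

```python
# Illustrative sketch: demographic-parity gap for binary approval decisions.
# Group names, data and any acceptance threshold are invented for this example.

def demographic_parity_gap(decisions):
    """decisions: dict mapping group name -> list of 0/1 approval outcomes.
    Returns the largest difference in approval rates between any two groups."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items() if d}
    return max(rates.values()) - min(rates.values())

outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
gap = demographic_parity_gap(outcomes)
print(f"approval-rate gap: {gap:.3f}")  # prints: approval-rate gap: 0.375
```

A check like this is deliberately minimal; in practice it would be one building block among many, alongside robustness and transparency tests, within the governance framework described above.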
A key problem is that technical measures alone are not enough to minimize risks; nor can formal documentation compensate for deficiencies in technical implementation or integration. Companies are therefore faced with the challenge of combining technical, procedural and organizational levels into a powerful overall concept that offers flexibility in adapting to new regulatory requirements and technological developments.
The dynamic nature of AI in particular – with faster calibration cycles and increasing automation – requires ongoing validation and active risk management. The consistent implementation of these integrated principles is essential in order to create trust among customers, supervisory authorities and other stakeholders. This is the only way to exploit the potential of AI responsibly and sustainably.
SRC’s approach
As a testing body with many years of experience in IT security, payment transactions and compliance, SRC Security Research & Consulting GmbH sees itself at the interface of these developments. In our view, there are three starting points for providing companies with targeted support:
- Translation between standards and practice
At first glance, the BSI criteria catalog and ISO/IEC 42001 seem abstract. We help to translate these requirements into concrete steps and make them manageable for the respective organization.
- Modular audit approach
Companies do not have to undergo a complete audit immediately. Audits can be carried out in modules, for example with a focus on transparency, governance or IT security. This allows companies to set specific priorities and keeps the barrier to entry low.
- Technical depth in security and robustness
While many certification bodies cover the organizational part, there is still a need for expertise in AI-specific security and robustness issues. This is where SRC contributes its technical know-how, for example in penetration tests for AI systems, simulations of attacks on training and decision data, tests for data poisoning or adversarial tests. This also includes investigating resilience to manipulation in the model architecture or in the runtime environment.
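To give a flavor of what an adversarial robustness test can look like in the simplest possible case, the sketch below checks a linear scoring model: under an L-infinity perturbation budget eps, the worst-case score shift is eps times the sum of the absolute weights, so a decision is provably stable only if its margin exceeds that shift. All weights, inputs and budgets here are invented for illustration; real AI penetration tests target far more complex models and attack surfaces.

```python
# Illustrative sketch: provable robustness of a linear decision under an
# L-infinity perturbation budget. Weights, inputs and eps are invented.

def robust_under_linf(weights, x, eps, bias=0.0):
    """True if no perturbation with |delta_i| <= eps can flip the decision
    sign(w . x + bias). The worst-case shift of a linear score under an
    L-infinity budget eps is eps * sum(|w_i|)."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    worst_shift = eps * sum(abs(w) for w in weights)
    return abs(score) > worst_shift  # decision margin beats worst-case shift

w = [0.8, -0.5, 0.3]
x = [1.0, 0.2, 0.4]
print(robust_under_linf(w, x, eps=0.1))  # margin 0.82 > shift 0.16 -> True
print(robust_under_linf(w, x, eps=0.6))  # shift 0.96 > margin 0.82 -> False
```

For non-linear models no such closed-form bound exists, which is why practical tests fall back on attack simulations such as the adversarial and data-poisoning tests mentioned above.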
Supplementary offers from SRC
In addition to traditional audits, we develop formats that provide companies with orientation at an early stage:
- Self-assessment workshops in which companies can determine their position with regard to the catalog of criteria
- Preparation for ISO certifications, combined with BSI criteria tests
- Governance audits that combine product, process and organizational levels
Why act now?
The pressure on the financial sector is growing. National requirements such as the BSI catalog and international standards such as ISO/IEC 42001 make it clear that AI systems will not only be used but also audited in the future, and the overlaps between the criteria are already visible today. Those who establish structures early will gain several advantages:
- Certainty about regulatory requirements before they become mandatory
- Building trust with customers, partners and supervisory authorities
- Market advantages through faster audits or certifications
Conclusion and outlook
The BSI criteria catalog plays a central role by consistently focusing on the verifiability and transparency of AI applications in the financial sector. At the same time, international initiatives such as ISO/IEC 42001 make it clear that the establishment of effective governance structures and security measures for AI systems is of great importance worldwide. In this context, SRC offers companies modular, practice-oriented and technically sound support to effectively overcome the challenges of implementing and managing AI solutions.