AI security: The right measure for regulating AI

Precisely because of their enormous potential, their diverse areas of application and their ability to learn, artificial intelligence (AI) systems must be, and must remain, safe and controllable. The challenge is to strike the right balance in regulation.

Voice assistants, translation at the push of a button, predictive maintenance, applicant management systems: despite these diverse areas of application, artificial intelligence (AI) is only at the beginning of its development, and many future areas of application are not yet foreseeable. This opens up great opportunities for developers and manufacturers to achieve competitive advantages through improvements based on the use of artificial intelligence.

In addition to further coordination, a great deal of detailed work now lies ahead: the corresponding norms and standards must be drawn up or adapted, and procedures for conformity assessment must be developed. In doing so, the organisational and technical effort for manufacturers should be kept within reasonable limits so as not to hinder the development of AI systems. At the same time, it is important to build economic and social trust in this promising technology.

Under the German title “KI-Sicherheit: Das richtige Maß zur Regulierung von KI finden” (“AI security: Finding the right measure for regulating AI”), the magazine “it-daily” gave Randolf-Heiko Skerka, Division Manager IS Management at SRC Security Research & Consulting GmbH, the opportunity to comment comprehensively on the topic.

If you are interested, we look forward to hearing from you.