The High-Risk AI Compliance Challenge

As artificial intelligence advances, the European Union’s AI Act sets rigorous compliance standards for high-risk AI systems. This research investigates the challenges and industry impacts of these standards, offering insights into both the obstacles and opportunities they present for companies operating within the EU.

Navigating the AI Act: Challenges and Significance

The European Union’s AI Act (AIA) is a pivotal development in AI governance, imposing extensive regulatory requirements on AI systems based on their risk levels. High-risk AI systems, which can significantly affect health, safety, or fundamental rights, face the most stringent obligations. Understanding the compliance challenges these requirements pose is essential for companies aiming to operate within the EU.

Matthias Wagner and colleagues from Lund University examine the perceived compliance challenges associated with the AIA’s high-risk requirements. The study identifies which requirements are seen as most challenging by industry practitioners and explores the industry’s sentiment towards the voluntary application of these requirements to non-high-risk systems, as encouraged by the AIA’s codes of conduct.

Given AI’s global development and deployment, the AIA’s implications extend beyond the EU, affecting companies worldwide. Thus, the study’s findings are relevant not only for European companies but also for any organization seeking to align with EU regulations. The research underscores the need for a nuanced understanding of the AIA’s impact on the industry, both in terms of compliance challenges and potential benefits.

Exploring Industry Perspectives: Methodology and Insights

The research uses a multiple case study approach, involving six case companies from various industries and three independent experts, totaling 16 respondents. This qualitative method provides a solid foundation for exploring the compliance challenges associated with the AIA’s high-risk requirements. The study builds on previous research by the authors, expanding the scope to include multiple companies and a broader set of data.

Researchers conducted interviews using a guide developed specifically for this study. The interviews focused on understanding the perceived challenges of specific AIA requirements, the contributing factors to these challenges, and the industry’s sentiment towards the AIA’s codes of conduct. The study also considers the technical aspects of compliance, bridging the gap between legal and engineering perspectives.

The research identifies four key contributing factors influencing the perceived compliance challenge: industry and brand values, existing regulatory environment, AI maturity level and proficiency, and company size. By examining these factors, the study offers a comprehensive view of the challenges companies face in operationalizing the AIA’s high-risk requirements.

Key Findings: Challenges and Industry Reactions

The study presents a ranking of the perceived compliance challenges associated with the AIA’s high-risk requirements. Leading the list is data quality and governance, followed by accuracy, robustness, and cybersecurity. Risk and quality management systems and transparency also pose significant challenges. These findings highlight the technical and operational complexities companies must navigate to comply with the AIA.

Despite these challenges, the study finds a positive sentiment towards the AIA’s codes of conduct, which encourage the voluntary application of high-risk requirements to non-high-risk systems. This suggests that industry practitioners recognize the value of these requirements in enhancing AI system safety and reliability, even beyond regulatory obligations.

The research contributes valuable insights into the operationalization of the AIA, addressing a gap in compliance-oriented studies within software engineering. It underscores the need for further development of tools and frameworks to facilitate compliance and supports the industry’s transition towards more regulated AI practices.

Future Directions: Enhancing AI Compliance

The findings of this study have significant implications for AI governance and industry practices. By identifying the most challenging aspects of AIA compliance, the research provides a roadmap for future efforts to develop tools and frameworks that support companies in meeting these requirements. The positive sentiment towards the AIA’s codes of conduct also indicates a willingness within the industry to adopt best practices voluntarily, potentially leading to broader improvements in AI system safety and reliability.

As AI continues to evolve, the need for effective governance and compliance mechanisms will only grow. This study lays the groundwork for future research and development efforts aimed at operationalizing the AIA and enhancing the industry’s ability to meet regulatory demands. We thank the authors for their contribution and encourage anyone with insights or feedback to engage with this important topic.

Reference: Wagner, M., Song, Q., Borg, M., Engström, E., & Lysek, M. AI Act high-risk AI compliance challenge and industry impact: A multiple case study. Information and Software Technology, 194, 108067. DOI: https://doi.org/10.1016/j.infsof.2026.108067
