Across the globe, governing bodies are drafting AI compliance frameworks. With so many still under construction, and differing by country and even by state within the United States, organizations need human intelligence to navigate the chaos of constant compliance changes.
Key Challenges to Address
Three distinct groups must work on AI compliance:
- AI product developers, who must meet compliance requirements
- Management teams, which decide on AI use, handle operations, and meet regulatory requirements, most of which differ by country and by state
- Boards, which hold key governance and oversight duties, upholding legal and regulatory compliance and risk management
If each group works separately, confusion breeds trouble. Each group operates under different regulatory frameworks, answers to different stakeholders, and may interpret the same regulations differently. The board worries about fiduciary duty and risk management, management about operational efficiency, and developers about technical feasibility.
The range of topics to address expands board committee duties. In addition to risk committees, some boards are adding AI committees and ethics committees. Each requires human judgment to address context, culture, and successful outcomes.
Geographic Differences
If a business has a footprint in six jurisdictions, it faces six different sets of compliance requirements, which can be confusing and time-consuming to manage.
Constant Changes
Compliance frameworks are still at an experimental stage, so all must prepare for constant change and plan how to handle it. Between starting and finishing an AI implementation, the regulatory landscape often shifts fundamentally. GDPR, the EU AI Act, and state-level privacy laws in California, Texas, and Virginia each arrive with different requirements, timelines, and interpretations.
We recommend redesigning human-AI collaboration in governance. We encourage boards and management to create an AI committee that brings legal counsel, the management team, the AI product leader, and a board committee director together to collaborate and develop effective ways to handle AI and compliance. Bringing together members from different disciplines provides “cognitive diversity” and creative solutions.
Microsoft is a strong example of how a holistic committee makes a difference. The board demanded that a committee be created to combine insights on ethics, legal duties, vendor accountability, and management operations. This proactive, collaborative approach positions Microsoft as a leader in AI governance, earning the trust of stakeholders and preventing regulatory backlash.*
Global AI Compliance Research
Discussions are underway to create a research team to study which compliance frameworks are working and which are not, globally. It is a practical way to understand both what fails and what works well, creating a living library of insights and best practices that can be shared to offer guidance for all countries.
Consider following the work underway at the NGO AI for Developing Countries, which is dedicated to ensuring that AI development is sustainable and globally accessible. Learn more at: https://learn.aifod.org/
Goal to Reach a Standard
Creating a single standard that applies globally remains a distant goal. Different legal systems, cultural values, and economic priorities make true standardization unlikely in the near term.
Promoting and tracking the value of experiments underway, and their impact across the globe, is a practical step that builds public confidence. If we can't yet standardize regulations, perhaps we can standardize how we measure and communicate success.
The goal isn't infinite regulation but intelligent regulation—rules that protect without paralyzing.
Empathy
Without empathetic leadership that understands both human needs and AI capabilities, compliance becomes a checkbox exercise that protects no one. Building empathy at the top before undertaking any AI initiative is vital; without it, you won't get anywhere.
From Burden to Benefit
The best compliance isn't the most comprehensive but the most comprehensible. If board members, managers, and developers can't understand the rules, they can't follow them, no matter how sophisticated their AI tools are. As AI becomes more powerful, it doesn't replace human judgment. It makes human judgment more critical. The future of AI governance isn't artificial intelligence checking artificial intelligence but human intelligence ensuring artificial intelligence serves human needs. Reach out to us for help with governance practices for compliance.
*Mark A. Pfister Across The Board research