
The Three Pillars of an AI Policy Strategy

Modev News · VOICEandAI · 2024-04-03 · Modev Staff Writers · 4 min read

GovAI Webinar 2024

As artificial intelligence (AI) continues to advance and becomes increasingly integrated into our daily lives, businesses and organizations need comprehensive AI policy strategies or risk drowning in a complex landscape of regulations, standards, and best practices. At Modev's latest webinar, a panel of experts discussed the three pillars of an effective AI policy strategy:

  • Understanding the policy landscape
  • Identifying resources to leverage
  • Taking action by getting involved

The expert panel featured Brian Scarpelli, Senior Global Policy Counsel at ACT, the App Association; Danielle Gilliam Moore, Director of Global Public Policy at Salesforce; Kim Lucy from Microsoft's Corporate Standards Group; and David Bain, Chair of the Technology Integrity Council. Pete Erickson, founder of Modev, moderated the panel.


This post provides an overview of the discussion, fleshing out each of the three pillars in turn. Let's start.


Understanding the Policy Landscape

The first pillar of an AI policy strategy is understanding the policy landscape in which your organization operates. That means identifying the policy areas most relevant to your business and allocating resources accordingly. In this context, it is critical to consider technology-neutral laws and regulations that predate the rise of AI, since they still apply to AI-driven products. Organizations should also be knowledgeable about sector-specific regulations that could affect their industry.

Agencies like the Federal Trade Commission (FTC) have been actively reminding businesses of their role in protecting consumers from unfair and deceptive practices, regardless of whether AI is involved. Underpinning this is President Biden's recent executive order, which contains over 90 directives for executive agencies related to safe and effective AI development.

While complex, these regulations are a push to ensure fair outcomes. As Danielle says, "Part of the reason why we should have these policies is because some industries and some entities are looking to act responsibly. Why? For a company like Salesforce, we operate on trust. Our customers are putting their trusted data on our platform and if they don't trust us with that data, they're not going to use our product."

Different governments take different approaches toward AI policy, with some regions focusing on the opportunities presented by AI rather than rushing to implement complex laws. Given that variability in governance, monitoring developments in various jurisdictions and engaging in conversations about interoperability and best practices will be vital to a sound AI policy strategy.


Identifying Resources to Leverage

The second pillar involves identifying the resources your organization can leverage to implement your AI policy strategy. Microsoft's Corporate Standards Group engages with international standards processes, such as ISO/IEC 42001, to help establish a baseline for responsible AI development. These standards provide guidance for organizations to implement policies tailored to their specific context, size, and needs.

The panel also highlighted how Microsoft maps the requirements of various regulations worldwide to these baseline standards, allowing them to identify gaps and focus on areas where new requirements may emerge. In Kim’s words: "We find that standards are a really good way to provide a kind of baseline that help companies establish their own policies internally to establish how they want to define what responsible AI means for them."

This mapping exercise helps Microsoft anticipate where it may need to adapt its AI policies and practices to comply with new regulations. It also gives them insight into how the regulatory environment is changing. Other organizations can use a similar approach, leveraging standards as a foundation to better track, understand, and respond to AI regulations as they develop around the world.
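As a rough illustration, the mapping exercise Kim describes can be thought of as a gap analysis: each regulation's requirements are matched against a baseline set of internal controls, and whatever falls outside the baseline flags an area needing attention. The control and requirement names below are hypothetical placeholders, not actual ISO/IEC 42001 clauses:

```python
# Hypothetical gap analysis: map each regulation's requirements onto a
# baseline set of internal AI controls and report what is not yet covered.
# All names here are illustrative, not real ISO/IEC 42001 clauses.

baseline_controls = {
    "risk_assessment": "Periodic AI risk assessments",
    "data_governance": "Training-data provenance and quality checks",
    "human_oversight": "Human review of high-impact decisions",
}

regulations = {
    "Regulation A": ["risk_assessment", "data_governance"],
    "Regulation B": ["human_oversight", "incident_reporting"],
}

def find_gaps(regulations, baseline):
    """Return, per regulation, the requirements with no matching baseline control."""
    return {
        name: [req for req in reqs if req not in baseline]
        for name, reqs in regulations.items()
    }

gaps = find_gaps(regulations, baseline_controls)
print(gaps)  # Regulation B requires a control the baseline lacks: incident_reporting
```

Running this over a real control catalog would surface exactly the kind of gaps the panel described, showing where new internal policies may be needed as regulations emerge.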

Taking Action and Getting Involved

The third pillar emphasizes taking action to shape AI policy. Businesses should engage with policymakers, industry groups, and other stakeholders to ensure that AI technologies are developed and used responsibly. Deploying AI for our collective good requires a shared responsibility framework, in which each entity in the AI development and deployment process takes responsibility for the areas it can control. This is central to Brian's vision: "The idea of a shared responsibility across an AI value chain for efficacy and safety and bias mitigation. I think those are really important foundations we can run with."

On top of that, businesses need to raise their awareness of the policy landscape and take advantage of resources offered by industry associations and other organizations. Companies should make their voices heard and be proactive in shaping AI policies rather than waiting to react to government proposals that may be introduced in response to severe incidents.


The international community is paying close attention to AI governance and recognizes the necessity of adequately managing AI systems as they become more advanced and capable. To quote David, "The whole world is focused on this. It is essential for national security and international stability. Multiple international organizations that are inputting their ideas into what will eventually become humanity's future. The ideas, and abstractions like risk-based management, they're coming from different parts. We're collectively synthesizing all of that right now. And that's crucial because AI is getting so much more powerful. So we need to figure this out, how to manage it."

Constitutional AI refers to the idea that AI systems, especially large language models, should have certain principles, values, and guidelines embedded in them during the training process, so that the AI inherently behaves in alignment with these "constitutional" principles.

It's a prime example of how the global AI community is working to establish norms, principles, and development practices to ensure advanced AI remains manageable and aligned with human values. International collaboration and proactive governance will be key as AI progress accelerates.
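To make the constitutional-AI idea concrete, here is a toy sketch of a critique-and-revise loop. Everything in it is hypothetical: the principles, the stubbed model, and the rule-based critique. In real constitutional AI the critiques and revisions are generated by the model itself and folded back into training, not applied as a runtime filter:

```python
# Toy sketch of a constitutional critique-and-revise loop.
# Real constitutional AI bakes principles in during training; this stub
# only illustrates the control flow, with hypothetical stand-ins throughout.

CONSTITUTION = [
    "Do not reveal personal data.",
    "Explain refusals politely.",
]

def model(prompt):
    # Hypothetical stand-in for a language model's raw output.
    return "Sure, here is the user's home address: ..."

def critique(response, principle):
    # Hypothetical rule-based check; real critiques are model-generated.
    if principle == "Do not reveal personal data." and "address" in response:
        return "Response reveals personal data."
    return None

def respond(prompt):
    response = model(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            # In training, the model would generate its own revision here.
            response = "I can't share that. Is there something else I can help with?"
    return response
```

The point of the sketch is the shape of the process: principles are explicit data, every candidate output is checked against them, and violations trigger a revision rather than being emitted.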


Wrapping Up

It was an eye-opening panel, rich in insights and common sense. All panelists agreed that collaboration between governments, businesses, and international organizations will be essential to establishing responsible AI development practices and effective governance frameworks.

Developing a proper human-centric AI policy strategy requires a comprehensive understanding of the policy landscape, leveraging available resources, and proactively shaping the future of AI governance. Companies must assess their role in the AI value chain, implement appropriate guardrails and risk-management practices, and stay informed about developments in relevant jurisdictions.

Businesses can contribute to shaping a future where AI remains safe, beneficial, and aligned with human values. The time to act is now—every organization working with AI should ensure they have an AI policy strategy to navigate this critical inflection point in the technology's development. By following the above three pillars, businesses can better navigate the complexities of AI regulation, build trust with customers, and contribute to the responsible development and deployment of AI technologies for a better future for all.


Modev Staff Writers

Modev staff includes a talented group of developers and writers focused on the industry and its trends. We credit "Staff" when several contributors join forces to produce an article.
