2023: A Milestone Year for Mainstream AI Adoption and Responsible Governance

In retrospect, 2023 will be remembered as the year when artificial intelligence (AI) truly went mainstream. What was once considered a novelty in the hands of tech experts has now become a nearly ubiquitous productivity tool. Across various industries and business functions, people are discovering innovative ways to deploy AI tools, enhancing customer service, driving efficiency, effecting positive change, and addressing a wide range of challenges.

The rapid growth of generative AI applications is well earned: these tools are among the fastest-growing applications in history and hold immense promise. However, as with any emerging technology, there are risks and a trust gap to be addressed. Ensuring the ethical use of AI in the future requires fostering responsible innovation and deployment today.

Recognizing the pressing need to instill trust in AI technology, efforts to establish policy frameworks governing AI and its applications have gained momentum as swiftly as the technology itself. Governments, civil society, and industry leaders worldwide are joining forces to construct policy frameworks that enable their economies to benefit from AI's promises while safeguarding their citizens from potential risks.

This movement took a significant leap forward when the United Nations Security Council convened in July to discuss the urgent need to ensure the safety and effectiveness of AI through policy frameworks centered on ethics and responsibility. Subsequently, the G7 nations committed to collaborating on AI governance frameworks, culminating in the announcement on October 30 of international guiding principles on AI and a voluntary Code of Conduct for AI developers.

In alignment with this global effort, President Biden hosted leaders from the European Union and the European Commission in October, emphasizing the importance of a coordinated approach to governing AI systems. This commitment found common ground at the U.K. AI Safety Summit at Bletchley Park, where experts from governments, industry, academia, and civil society gathered to exchange ideas on responsible and ethical AI practices. These collaborative endeavors bore fruit in the form of the G7 Principles, the White House Executive Order, and the U.K. Safety Summit's outcomes.

While setting up guardrails is crucial for building trust in AI, a one-size-fits-all regulatory approach could be as detrimental to society as having no rules at all. Achieving the full potential of AI requires a delicate balance between safety and innovation. Overregulation may stifle innovation, disrupt healthy competition, and impede the adoption of this nascent technology, which has only just begun to amplify productivity for consumers and businesses globally.

Both the United States and the European Union have embraced a risk-based approach that advances trustworthy and responsible AI. The EU's pioneering AI Act, on which negotiators reached a political agreement in December 2023, has established a global benchmark for risk-based, responsible AI development. By striking a balance between innovation and robust safeguards against misuse, the EU AI Act positions the EU as a trailblazer in ethical and trusted AI solutions.

As the Executive Vice President of Global Government Affairs and Public Policy at Salesforce, I firmly believe that trust is earned through continuous investment in responsible practices and transparency. In the ever-evolving landscape of AI, successfully navigating the path to responsible innovation necessitates a multi-step approach.

Here are some key steps that technology companies should consider to earn trust and develop an ethical AI framework:

1. Build trust: While regulatory conversations progress, organizations should act responsibly on their own, going beyond what the law requires. Prioritizing privacy, transparency, safety, and trust helps exceed customer expectations.

2. Protect privacy: Given that AI relies on data, it is crucial to ensure proper collection and protection of data through comprehensive privacy legislation. This establishes a foundation of trust and facilitates the development of other AI-related regulations.

3. Prioritize transparency: Users should be aware when they are interacting with AI systems, and they should have access to information about how AI-driven decisions are made. Transparent practices build trust and empower individuals.

4. Engage in policy discussions: Collaboration between the public and private sectors is essential for creating effective guardrails that safeguard both people and innovation. Strive to participate in these discussions and involve diverse global stakeholders to enrich the overall dialogue.

By embracing these principles, organizations can not only navigate the regulatory landscape but help shape it and stay ahead of it, positioning themselves as leaders in the development of responsible, safe, and trusted AI.

The rapid progress we witnessed in 2023 underscores the importance of governments taking transformative technologies seriously. Ensuring that AI systems operate ethically and securely is the foundation for a sustainable future. Making policy frameworks globally interoperable is an ambitious goal for 2024, one that would foster inclusive development and deployment of AI tools worldwide. Though ongoing iteration and learning will be needed, the commitment global leaders showed in 2023 is inspiring and reinforces the collective effort to get AI governance right.

Looking ahead to 2024 and beyond, the technology industry must continue to collaborate with governments, academics, and civil society from a diverse range of backgrounds, transcending geographical boundaries. Together, we can build a solid foundation for responsible AI progress that benefits society as a whole.

Note: This blog post was generated using OpenAI's GPT-3 language model. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policies or positions of Salesforce or any affiliated organizations.
