Since ChatGPT made its much-heralded arrival on the AI scene about a year ago, we have seen an acceleration of interest in policies governing AI, both at the governmental level and at the corporate level. It’s been fascinating and somewhat frustrating to watch these initiatives take shape. Several countries, including Canada and the UK, along with groups like the EU, have developed voluntary guidelines. Yesterday, the president of the United States issued an executive order establishing a US AI policy framework.
The Intricacies Of Executive Orders
There are a couple of things that are interesting about this. First of all, it’s an executive order, a very amorphous type of “legislation” in the US. Executive orders (EOs) used to be rare, employed when the legislature could not meet for whatever reason, or when some critically urgent issue required action. Since the Trump era, an astonishing number of executive orders have been issued. The challenge is that while EOs are not legislation, they carry the force of law; nothing is run through the House or the Senate, so whether this EO is democratically supported or would stand up in court remains very much to be determined. Biden may have used the EO as a strategic choice to put off any real legislation while making it seem like the ball is moving.
Secondly, as far as these things go, it’s a decent piece of legislation. It is comprehensive, it has some teeth (the government can force companies to take action), and it calls for standards, testing, and best practices in critical areas. It has a point of view, and it’s conservative: AI is a possible threat on multiple levels and should be treated as such.
Is the Ball Really Moving?
But it’s largely a call for more knowledge. And how it will work in a global context is unclear. The ball is moving, but … more up and down than forward. The substance of the EO assumes, like other national policies, that it exists in a global mesh-like framework where policies interconnect across borders. This is obviously a necessity, because the ability of the US (or any country) to govern AI within its own borders is affected by everything happening outside them, particularly the work being done in China, into which we have very little insight and very little knowledge.
A potentially critical piece of technology called Woodpecker just came out of China. It is one of the first really major global AI innovations to emerge from the country, and it’s a hallucination killer. So how does its use, for example, fall under federal AI policy in the US? A mesh type of coverage, which is evolving organically because no other global framework exists, is going to become unsustainable and non-functional very quickly. The ability, and the necessity, of controlling data borders will bog activity down and slow innovation to a crawl.
Let’s say that a global SaaS, like Salesforce, is using generative AI and autonomous agents in its customer management platform for customer service and customer support. We know the privacy requirements are going to be different in every local jurisdiction outside the areas governed by the GDPR, but as the AI evolves, how do we know which jurisdictional requirements it may be crossing? You can imagine a world of meshing policies, much like the way our foreign affairs and immigration systems work, allowing little bits of data or functionality through based on an understanding of what local policies are. That is unlikely to be functional, as the sketch below suggests.
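To make that concrete, here is a minimal sketch of what such a policy mesh might look like in code. Everything in it is hypothetical and invented purely for illustration, including the jurisdictions, the rule table, and the field names; the point is only that every new jurisdiction, data field, or capability multiplies the rules that must be written, maintained, and intersected.

```python
# Hypothetical policy mesh for a cross-border AI service.
# All jurisdictions, fields, and rules below are invented for illustration.
from dataclasses import dataclass

# One rule set per jurisdiction: which data fields an AI agent may touch,
# and whether it may act without a human in the loop.
POLICY_MESH = {
    "EU": {"allowed_fields": {"name", "ticket_text"}, "autonomous_actions": False},
    "US": {"allowed_fields": {"name", "email", "ticket_text"}, "autonomous_actions": True},
    "CA": {"allowed_fields": {"name", "ticket_text"}, "autonomous_actions": True},
    # ...one entry per jurisdiction, each drifting as local law evolves.
}

@dataclass
class Request:
    jurisdiction: str
    fields: set        # data the AI agent wants to read
    autonomous: bool   # does the agent want to act without a human?

def is_permitted(req: Request) -> bool:
    """Gate a single AI action against one jurisdiction's rule set."""
    policy = POLICY_MESH.get(req.jurisdiction)
    if policy is None:
        return False  # unknown jurisdiction: fail closed
    if not req.fields <= policy["allowed_fields"]:
        return False  # request touches a forbidden data field
    if req.autonomous and not policy["autonomous_actions"]:
        return False  # jurisdiction requires a human in the loop
    return True

def is_permitted_everywhere(jurisdictions, fields, autonomous):
    """A cross-border interaction must satisfy every jurisdiction it spans."""
    return all(is_permitted(Request(j, fields, autonomous)) for j in jurisdictions)

# A single support ticket spanning the US and the EU:
print(is_permitted_everywhere(["US", "EU"], {"name", "email"}, False))  # False: the EU entry forbids email
```

Even in this toy form, every interaction requires an intersection over every border it crosses, and each rule set changes on its own legislative schedule. Scale that to dozens of jurisdictions and rapidly evolving model capabilities, and the mesh bogs down exactly as described above.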
The Pitfalls of Piecemeal AI Policy
This piecemeal AI policy approach won’t work, for two reasons. One: no one has global oversight. Trust in globalism of any form is shaky at best, and because of the commercial potential and powerful capabilities of the tech, no one has demonstrated a willingness to enter into a standards discussion like those that brought about the Geneva Conventions. This is the primary problem. Closely related is number two: the bureaucrats writing the policies are expertise-deficient and, more crucially, vision-deficient. Nothing we’ve read yet indicates a deep understanding of how AI works or, more importantly, where it is going.
Key Areas Covered by the US EO
The US EO covers the following areas:
- Safety and Security Standards: This includes mandatory sharing of AI safety test results with the government, AI watermarking guidelines, and a cybersecurity program targeting AI tools for identifying flaws in critical software.
- Consumer Privacy Protection: Establishing guidelines for assessing AI privacy techniques to help safeguard individual privacy rights, a core concern as AI becomes more intertwined with personal data processing.
- Advancing Equity and Civil Rights: Providing guidance to federal contractors to prevent discriminatory AI practices, and setting best practices for AI use in the justice system.
- Overall Consumer Protection: Directing Health and Human Services to scrutinize AI in healthcare and developing resources for responsible AI use in education aims to safeguard the public against harmful AI practices in sensitive sectors.
- Supporting Workers: Researching AI’s impact on the labor market and exploring ways to assist workers affected by AI-driven market changes.
- Promoting Innovation and Competition: Expanding AI research grants and updating immigration policies for skilled workers.
- International Collaboration on AI Standards: Engaging with international partners to advance shared AI standards and frameworks.
- Guidance for Federal Agencies: Developing guidance for AI use and procurement by federal agencies, along with accelerating the hiring of AI-skilled workers.
The Need for a New Approach
While it’s hard to argue that any of these areas of development are unnecessary, the effectiveness of this approach must be questioned at this juncture. Setting standards for something that is still highly evolutionary and not well understood is often counterproductive in the long run. Innovation will not stop, and if these voluntary standards are ignored, the result could be complete chaos (see: internet policy) or a loss of control.
I argue a different approach is necessary. The playing field is too big. You can’t create a framework for something that has no shape. We don’t know what capabilities will develop tomorrow, or next week. The only way to actually wrap our heads around the potential is to create the potential. No other technological development in history has had this kind of power and flexibility. In previous major innovations, the purpose of the tech was clear, or it was in its infancy and regulatory frameworks developed as the tech did.
The (potential) scale and power of AI allow us to be more intentional, in new ways, about what is developed. We must grasp that intention and build on it. Intentionality may be the only way these efforts can be meaningfully and effectively regulated and managed. It’s time to ask the big questions: what, as a society, do we need AI to do for us at this stage of our evolution? Once that is defined by governments, AI policy development becomes much clearer and more linear, focusing on what we know and want to achieve rather than on a vast set of undefinable parameters.
This is intended to spur, not restrict, innovation within given parameters. In the short term, the approach could be to focus development on, and restrict it to, three priorities. Imagine if these priorities were:
- Healthcare: Design a safe, secure model for applying healthcare data, allowing practitioners around the world to better diagnose, treat, and support patients from initial contact to issue resolution, and develop a global approach to disease management and understanding.
- Information Validation: Design a system by which information retained in academic repositories is validated, in order to severely limit the spread of misinformation, propaganda, and disinformation; open and structure it for global access, for both formal education and casual research, tailoring the experience to the individual’s talents and interests.
- Basic Income: Develop a system and tax base by which individuals receive an affordable basic income from their local governments, or one integrated with housing, transportation, and food distribution, with the goal of reducing poverty, hunger, and homelessness by 99% in the next ten years.
There are many other possibilities when a new intelligence is introduced into an old problem. Climate. Food production. Education design. Making life better for billions in poverty. Imagine if all AI work were required to support one of these initiatives for three years. It would change the world.
Now that’s a regulatory framework that would serve the planet and the species: built from the ground up, with global working groups, and with AI policy development and issue management happening organically as new challenges arise.
The Urgency of Now
There are many questions that emerge from this model that we’ll tackle over the upcoming months. Should AI exist partly in the public domain? Can we move ahead with purposeful vision and not just random market-driven apps that create alternate album covers for beloved classics? I mean, cool, but … right now we need world-changing. Humanity needs this, right now, not a scattershot approach to a new tech arms race, or a focus on supporting billions of lives billions of years from now (effective altruism, which should really be called ineffective altruism). We have an opportunity NOW to change the quality of life for billions of people on this planet. But it requires that we stop paying AI lip service with pointless lists of policies that seem unlikely ever to be successfully implemented.
Driving AI policy development exclusively through global collaboration on public interest projects would require a significant shift in how nations, corporations, and research institutions view and invest in AI. Such a proposition, if it were to gain traction, would need a combination of strong international agreements, shifts in public and political opinion, and mechanisms for enforcing these commitments on a global scale.
We need a model of how to work together. Everything else about this incredible evolution in human capacity building and capability extension will evolve out of that.