On October 30, 2023, the Biden administration released its executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the most comprehensive set of rules and guidelines for AI the US has ever seen.
The executive order signals where US sentiment on AI is heading, but its approach is fragmented, raising the possibility of interdepartmental discord.
What does the executive order mean for AI regulation?
The order sets a precedent for steering the AI market away from self-regulation. Large companies such as Amazon, Google, and OpenAI will come under scrutiny from several government departments, ranging from the Department of Commerce to the Department of Defense.
The order targets many areas of concern surrounding AI, particularly civil rights, equity, and privacy. It places increased emphasis on further developing privacy-preserving techniques such as cryptographic tools. Establishing a framework to tackle data breaches is critical to protecting the privacy of US citizens, particularly given the risks associated with AI and the added incentive companies now have to collect data for training AI systems.
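To make this concrete, the sketch below shows one widely used privacy-preserving technique, differential privacy, which adds calibrated noise to a query result so that no individual record can be inferred from it. The dataset, query, and epsilon value here are hypothetical, chosen purely for illustration; the order itself does not prescribe any particular implementation.

```python
import numpy as np

def private_count(values, predicate, epsilon=1.0):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon yields an
    epsilon-differentially-private answer.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: count users over 40 without exposing any individual.
ages = [23, 45, 31, 52, 38, 61, 29, 47]
print(private_count(ages, lambda age: age > 40, epsilon=0.5))
```

Cryptographic approaches such as homomorphic encryption pursue the same goal by different means, allowing computation on data without ever exposing it.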
The executive order also seeks to tackle “algorithmic discrimination.” Racial and gender bias has been a problem in AI for many years: some attribute it to inherently biased training data, while others point to how practitioners formulate and frame the problem in the first place. A key point that stands out is the call to “ensure fairness throughout the criminal justice system.” Algorithmic tools are already used in areas such as sentencing and risk assessments, and the order signals that such uses will face closer scrutiny. It also tackles several workforce issues, calling for better practices that not only minimize the harm of AI-driven job displacement but also harness the benefits AI can bring, highlighting the importance of innovation.
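As for “algorithmic discrimination,” auditors often quantify it with simple group-level metrics. The sketch below computes a demographic parity gap, the difference in favorable-outcome rates between groups. It is a hypothetical illustration of what measuring bias can look like in practice, not a method the order mandates; the predictions and group labels are invented.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) for binary predictions.

    A gap of 0 means every group receives favorable outcomes at the same
    rate; larger gaps are one signal of disparate impact.
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = favorable decision) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, gap)  # {'A': 0.75, 'B': 0.25} with a gap of 0.5
```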
However, the differing viewpoints and agendas of the various US departments may make it difficult to implement a uniform approach to AI. There are looming concerns that these departments will be overwhelmed with AI directives, resulting in no material effect. The absence of a federal data privacy law compounds the problem: without one, a uniform approach to data protection is unlikely.
How does this compare to other regulatory frameworks?
The EU and China have taken contrasting approaches to AI regulation. China takes a more vertical approach, tackling singular issues such as “deep synthesis” (AI-generated and AI-altered media). The EU, on the other hand, takes a more horizontal approach, with an overarching framework that applies standards and requirements across a wide array of AI applications.
The US, by contrast, has taken a more decentralized approach, setting rules for specific use cases agency by agency. Despite their differing approaches, the US and China appear aligned in their efforts to advance AI innovation, undoubtedly driven by their rivalry and shared ambition to become the world leader in AI. The EU has instead stuck with a risk-based approach, classifying AI applications into tiers of risk and scaling obligations accordingly.
What does this mean for the rest of the world?
The advent of generative AI shifted the global technological landscape, creating a pre- and post-AI world. Regulators worldwide are designing and implementing AI legislation to keep up with technology’s pace of development. The US executive order sends a strong message globally that it is time to move towards AI regulation.
One country notably lacking AI regulation is the UK. Rishi Sunak has suggested he is in “no rush” to regulate AI, a stance that seems complacent given how powerful and disruptive the technology is. The US executive order may well influence the UK’s position and push it to join other major powers in establishing an AI regulatory framework.