AI’s development has not followed the same path as modernity’s other most significant technological advancements, suggesting that developers and policymakers are taking their ethical responsibility more seriously: to anticipate and proactively mitigate the harmful unintended consequences of digital inventions and innovations. In 2019 and earlier, well before the 2023 wave of proposed laws aimed at retroactively regulating AI models already released to the public, several intergovernmental bodies and individual countries began publishing ethical guidelines for AI’s development. The OECD, EU, Singapore, South Korea, and Brazil led the effort to publish early versions of these guidelines before accessible Large Language Models (LLMs) with conversational text generation, like ChatGPT, hit the market in 2022.
The United States, home to ChatGPT and some of the most dominant LLMs in the world, was notably missing from this list of early adopters, prompting the question: What was the U.S. government’s approach to setting development standards for some of the most influential AI models on the market? In October 2022, the White House’s Office of Science and Technology Policy published its “Blueprint for an AI Bill of Rights,” just one month before ChatGPT was released to the public. The timing likely wasn’t a coincidence, as the blueprint was designed with input “from impacted communities and industry stakeholders to technology developers and other experts across fields and sectors, as well as policymakers throughout the Federal government.”
Action at the Federal level also includes Executive Order 14110, which creates new standards for “Safe, Secure and Trustworthy Artificial Intelligence” and takes a risk- and equity-based approach developed by the National Institute of Standards and Technology (NIST). This approach mirrors global regulatory trends that focus on data privacy and security, nondiscrimination, and transparency in decision making. The three other most recent and significant U.S. Federal actions to regulate AI and its development came from the House of Representatives (the House) and the Federal Communications Commission (FCC).
On 10 Jan. 2024, Rep. Ted Lieu (D-CA-36) introduced the Federal Artificial Intelligence Risk Management Act (H.R. 6936), which would require Federal agencies to follow the AI risk management framework developed by NIST. The bill is currently under review by two committees, the House Committee on Oversight and Accountability and the House Committee on Science, Space, and Technology (SST), each for a period to be determined by the Speaker. Then, in February, the FCC ruled that it would apply the Telephone Consumer Protection Act to restrict the use of AI-generated human voices, setting a potential precedent for other agencies to leverage existing laws to regulate different aspects of the new technology. Finally, on 31 July 2024, Rep. Zoe Lofgren (D-CA-18) introduced the Workforce for AI Trust Act (H.R. 9215), primarily to “facilitate the growth of multidisciplinary and diverse teams that can advance the development and training of safe and trustworthy artificial intelligence systems …”; that bill is also under review by the House Committee on SST.
At the state level in the U.S., only three laws regulating AI have been enacted: two in Colorado and one in Alabama. Alabama’s law prohibits what it defines as “materially deceptive media” used to influence an upcoming election. If any of three requirements is met, including that the media was made with AI, the law explicitly classifies such election-influencing content as materially deceptive media and allows it to be censored. Colorado’s law on election-influencing media takes a narrower approach with its definition of such content, specifically prohibiting the use of deepfakes in campaign ads. The Rocky Mountain state’s second AI law focuses on consumer protections when interacting with AI, calling on developers to use reasonable care to avoid creating models that display algorithmic discrimination mirroring human biases.
Lina Khan, Chair of the U.S. Federal Trade Commission, and other high-ranking government officials from the U.K. and EU made a rare joint statement on 23 July 2024, laying out their intention to maintain “effective” competition in the AI industry and fair treatment of consumers and businesses, while preserving each region’s sovereignty. Shared antitrust concerns over the growing oligopoly of companies with top AI models sparked the effort, as AI companies anticipate the rollout of new regulations across the EU, such as the Digital Services Act package and the Artificial Intelligence Act. Adding to the oversight are norm-setting coalitions like the UN, OECD, and G20, which are working to ensure core humanitarian principles remain foundational to AI’s development and deployment.
Thinking back to EchoInsight’s previous blog in this limited series on GenAI’s role in cybersecurity, we concluded by looking at how soon experts think transformative AI, indistinguishable from human intelligence, might arrive. Given that it is a relatively near-term possibility, and that the norm has been to try to set ethical guidelines that keep pace with AI’s tremendous rate of development, let’s consider for a moment what a more literal version of the White House’s Blueprint for an AI Bill of Rights might look like for transformative AI. Two questions come to mind. Would it be nearly identical to the Universal Declaration of Human Rights? Or would it be more akin to present-day animal rights?