Executive Order on AI Seen as Model for AI Accountability
Order is a likely precursor to coming AI regulation, with requirements for secure and trustworthy AI systems, rigorous purchasing provisions and fairness in workforce training
By John P. Desmond, Editor, AI in Business
The impact of the Executive Order on AI signed by President Joe Biden on October 30 is rippling through business and government agencies, with implications for hiring, oversight and especially, accountability.
“In the absence of congressional action, the Biden-Harris administration made clear it intends to use every tool available to respond to the priority of AI policy,” stated Ashley Casovan and Cobun Zweifel-Keegan of the International Association of Privacy Professionals in a recently published account on the implications of the order. “The order serves as a watershed moment in its articulation of the types of rules under consideration and its setting of guideposts for the development of best practices,” state the authors, both managing directors of the IAPP.
While the order is binding on most federal agencies, it addresses all aspects of the AI supply chain and is thus relevant to private-sector companies doing business with federal agencies, as well as to those that are not, “because of the AI governance norms that will flow from new standards and evolving best practices,” the authors state.
(See full text of the Executive Order on AI from the White House.)
The executive order outlines eight policies and principles; they are:
1. Ensuring the safety and security of AI technology;
2. Promoting innovation and competition;
3. Supporting workers;
4. Advancing equity and civil rights;
5. Protecting consumers, patients, passengers and students;
6. Protecting privacy;
7. Advancing federal government use of AI; and
8. Strengthening American leadership abroad.
To prepare organizations to comply with the Executive Order, the authors offer suggestions including:
Appoint a chief AI officer or accountable individual in the organization;
Train individuals involved in development and use of AI across the organization;
Develop policies and procedures within the organization, including reporting, documenting and tracking the use of AI and data involved;
Introduce accountability and compliance mechanisms for AI, including laying the foundation for likely future reporting requirements;
Develop AI assurance practices, such as tests, evaluations, assessments and audits for your people, processes and tools; and
Familiarize yourself with the NIST AI RMF and other emerging AI standards for development processes.
To assess the impact of increased AI use on the workforce, the IAPP authors suggested:
Become more aware of the data being collected by the organization for use in AI systems;
Think about what jobs will change and how you are preparing for an AI-enabled workforce, including requirements for new AI roles in technical, operational or governance teams;
Stay on top of forthcoming AI rules and regulations, since “a lot of clues in this order” point to the types of requirements likely to be included in future regulations.
Lawyers, HR Professionals Tuning into AI Executive Order
Lawyers see opportunity and challenge in the President’s AI Executive Order.
The order “creates new opportunities and challenges for legal professionals as they will need to advise and represent clients on various legal issues related to AI such as compliance, liability, intellectual property, contracts, ethics and human rights,” stated authors of an overview from Thomson Reuters. “On the other hand, it also affects the way legal professionals operate and deliver legal services, as they will need to adopt and use AI systems in their own work, such as document review, research, analysis, drafting, and prediction.”
Human resource professionals are tuning into the AI Executive Order, which directs federal agencies to make it easier for highly skilled immigrants with expertise in critical areas to study and work in the US, according to an account from the Society for Human Resource Management.
"The administration is addressing key issues to mitigate the fear around AI," stated Tommy Jenkins, VP of recruiting at RocketPower of San Francisco. "They see there is going to be a significant impact around labor, and they understand that the executive order has to establish some guidelines around AI because it's more than just a US-based effort; it has to be a global initiative."
Jackie Watrous, senior director analyst in the HR practice at Gartner, advises organizations to address three areas in the executive order that will affect HR:
Consider how AI use within an organization may impact jobs and workforce responsibilities. The executive order directs the federal government to assess the impact of AI on the workforce, develop strategies to mitigate any negative impacts, and support programs that help workers develop the skills and knowledge they need to succeed in the AI economy;
Ensure that any use of AI tools has undergone the appropriate rigor to prevent discrimination, an area that many HR executives consider to be a priority. Watrous stated that the executive order "calls for the development of standards and guidelines for the responsible development and use of AI. These standards and guidelines should address the issue of bias and discrimination."
Promote innovation, which may include upskilling existing talent and bringing in AI-skilled talent from outside the US.
"The executive order calls for the federal agencies to promote innovation in AI, including by supporting research and development in the field of AI,” Watrous stated. “The order also emphasizes the importance of attracting AI talent from outside the US and enabling accelerated hiring pathways."
Companies With Pre-ChatGPT AI Experience Have a Leg Up
Companies in regulated industries with experience implementing AI systems pre-ChatGPT typically have technical safeguards in place, including encryption, firewalls, data masking and data erasure, in order to comply with global regulations, stated Zachary Chertok, research manager for employee experience at research firm IDC. Because generative AI was introduced into the public domain without first being subjected to rigorous testing, it is having some unpredictable effects.
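Data masking, one of the safeguards named above, replaces sensitive field values with obscured placeholders before records flow into AI pipelines. A minimal illustrative sketch follows; the field names and masking conventions are assumptions for illustration, not drawn from the source or any specific regulation:

```python
import re

def mask_email(value: str) -> str:
    """Keep the first character of the local part and the domain; obscure the rest."""
    local, _, domain = value.partition("@")
    return f"{local[0]}***@{domain}" if local and domain else "***"

def mask_record(record: dict) -> dict:
    """Return a copy of the record with commonly sensitive fields masked."""
    masked = dict(record)
    if "email" in masked:
        masked["email"] = mask_email(masked["email"])
    if "ssn" in masked:
        # Keep only the last four digits, a common masking convention.
        digits = re.sub(r"\D", "", masked["ssn"])
        masked["ssn"] = "***-**-" + digits[-4:]
    return masked

record = {"name": "Jane Doe", "email": "jane.doe@example.com", "ssn": "123-45-6789"}
print(mask_record(record))
# → {'name': 'Jane Doe', 'email': 'j***@example.com', 'ssn': '***-**-6789'}
```

Production systems typically pair masking like this with encryption at rest and in transit, and with data-erasure workflows, so that no single safeguard is the only line of defense.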
Now companies are in a position of having their AI tools and systems “retroactively trained for accuracy, trust and reliability in their use cases, output and outcomes," Chertok stated. "The Biden administration's executive order calls for the development of standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy, including their insights and output."
The president’s Executive Order on AI was followed two days later by a draft memorandum from the Office of Management and Budget, with additional guidance for how the federal government should handle accountability while advancing innovation in AI, as outlined by an account from the think tank Brookings.
“Taken together, these two government directives offer one of the most detailed pictures of how governments should establish rules and guidance around AI,” stated the Brookings report, authored by Sorelle Friedler, Janet Haven and Brian J. Chen. Friedler is an associate professor of computer science at Haverford College; Haven is the executive director and Chen is the director of policy at Data & Society.
Brookings made these observations about the two documents taken together:
Both documents mandate hard accountability;
The EO and draft OMB memo set the federal government up to be a model for accountable AI;
AI governance is iterative; more is coming;
Congress needs to act in order to enshrine rights and other protections in law.
The OMB memo “requires agencies to stop using an AI system” if certain safety practices are not in place, the Brookings report states.
The government’s “use of AI” covers both AI systems developed by the federal government and AI “that is procured by the government,” the report states, adding, “By using the power of the government’s purse, the guidance has the potential to influence the private sector as well.”
Guidance for AI contracts is expected to be coming from OMB, including rigorous provisions for government purchasing of AI that “will significantly shape how government AI vendors are building and testing their products,” the report stated.
Read the source articles and information from the International Association of Privacy Professionals, Thomson Reuters, the Society for Human Resource Management and from Brookings. AI governance resources are here: Blueprint for an AI Bill of Rights, and NIST’s January 2023 AI Risk Management Framework.