The United States stands out as surprisingly behind its competitors in the race to set global standards for AI development and regulation. Two reasons help explain this. First, an anti-regulatory spirit and a belief in entrepreneurial freedom are cornerstone features of U.S. thinking on public policy. Hitherto, the United States has been a leading site of technological development, especially in the digital space. There is a strong preference to maintain this top position and to remain a destination that attracts massive inflows of capital investment; policy-makers express concern that too much regulation would discourage such inflows and hamper innovation. Second, the division of power between the executive agencies and the legislative branch can lead to coordination problems. Thus far, most movement on AI regulation has come from directives issued by the executive branch to its constituent agencies. Yet this approach can only go so far: for more sweeping measures to be taken, the legislature will ultimately need to pass a bill. There are signs, however, that the United States is beginning to treat AI regulation as an increasingly important point of focus, as the subsequent sections will show.

Governance and political economy

Governing responsibilities, including the development of policy, are widely distributed across many different sites and bodies within the U.S. political system. To start, the United States is a federalist or ‘dual-power’ system, with governing authority shared between the federal and state jurisdictions. To be sure, this can introduce tensions when conflicts arise between federal and state lawmakers and authorities — tensions that are usually settled through the judicial system. The governing structures at the federal and state levels are mirrored: that is, both are organized into the three distinct branches of executive (President/Governor), legislature (a bicameral system with a lower and an upper chamber), and judiciary. The legislature (both chambers acting together) is tasked with the production of laws, the executive branch with ensuring the execution of those laws, and the judiciary with interpreting the legal limits and constitutionality of those laws.

Inside each of these branches are further sub-units of authority. Of considerable regulatory importance are the federal executive departments, which are “the principal units of the executive branch of the federal government of the United States.”1 The secretaries of these departments are appointed directly by the President and constitute his or her cabinet of closest advisors. There are currently fifteen such departments: State, Treasury, Defense, Justice, Interior, Agriculture, Commerce, Labor, Health and Human Services, Housing and Urban Development, Transportation, Energy, Education, Veterans Affairs, and Homeland Security. Within each of these departments are several ‘agencies’ or ‘offices’ dedicated to executing the legislated purpose or mission of the department. In addition, there are a number of independent agencies “that exist outside the federal executive departments (those headed by a Cabinet secretary) and the Executive Office of the President.”2

New policies on artificial intelligence are being explored, debated, developed, and passed across all of these different governing bodies — as will be explored below. The engagement of so many different regulatory powers, with no overarching roadmap to guide them, has resulted in U.S. policy frequently being denoted as a ‘patchwork approach’. On the one hand, this may speak to a lack of preparedness when facing such a momentous economic and technological advancement. On the other hand, the U.S. has historically utilized the patchwork approach as a way to test regulations on a smaller, more local scale before implementing them on a national basis. With this latter method, it is possible to study the effects of certain policies at a state or local level and generate conclusions about their efficacy and impact. 

Regulation and development

The federal level

Executive Orders
One avenue through which U.S. policy on artificial intelligence is beginning to take shape is the federal executive order issued by the President. The most recent and far-reaching example is the Executive Order on Maintaining American Leadership in Artificial Intelligence, released by the Trump Administration’s White House Office of Science and Technology Policy on January 7, 2019, and officially implemented that February. The EPA provides a short summary of the Executive Order’s purpose: “Its purpose is to establish federal principles and strategies to strengthen the nation’s capabilities in artificial intelligence (AI) to promote scientific discovery, economic competitiveness, and national security.”3 This is to be achieved through five overarching objectives: increased federal investment, the provisioning of federal resources, the construction of guidelines for regulation, the preparation of the workforce for the onset of AI, and the protection of American AI technology.4 According to Luo (2019), “The executive order appears to be a step in that direction, but many worry that it will not be enough. One concern is that the order did not provide new research funds. The order directs federal agencies to allocate money towards AI but does not indicate where that money is to come from.”

Following the issuance of this Executive Order 13859, the Trump Administration established the “National Artificial Intelligence Initiative Office in early January 2021, in accordance with the National AI Initiative Act of 2020.”5 According to the official archives, the purpose of this office is to “serv[e] as the central hub for Federal coordination and collaboration in AI research and policymaking across the government, as well as with private sector, academia, and other stakeholders.” 

New developments appear to have slowed under the current Biden Administration, although President Biden entered the White House only one year and seven months ago (at the time of writing). No new executive orders related to AI have been issued, nor roadmaps released. However, federal agencies under the direction of the Biden White House are launching new initiatives to further explore and, where possible, regulate the most visible harms of AI. Furthermore, there is much speculation about the hiring decisions of the Biden Administration and what they mean for the future of AI regulation.

Another important development at the federal level, though originating solely from the White House, is the AI Bill of Rights proposal released in October 2022. Described as a ‘blueprint’, it contains five core principles that should guide the responsible development and deployment of AI systems in the US:

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

This blueprint issued by the Biden Administration represents a major step forward in thinking about how future policy should be designed to promote the interests of citizens in an increasingly digitalized world, where AI is taking on an ever-greater number of functions. However, it is only ‘advisory’ and does not constitute any actual law or policy.

Federal Congressional Legislation 

Many researchers and analysts have commented on the dearth of federal legislation on AI, especially considering that other countries and economic blocs are racing to pass their own AI laws. Specifically, there are no bills or laws that deal directly with the issue of regulating AI. However, there are recent laws and pending legislation that will have some implications for the use and development of AI.

First, there is the National AI Initiative Act, which was passed in January 2021. This Act creates the National AI Initiative, which will provide “an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies…”6 According to Foley, “The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.”7

Second, in February 2022 the Algorithmic Accountability Act of 2022 was introduced by Representative Yvette Clarke, D-N.Y., in the U.S. House of Representatives, and by U.S. Senator Ron Wyden, D-Ore., with Senator Cory Booker, D-N.J., in the U.S. Senate. Senator Wyden’s office released a press statement describing the bill as “a landmark bill to bring new transparency and oversight of software, algorithms and other automated systems that are used to make critical decisions about nearly every aspect of Americans’ lives.”8 These lofty aims would be achieved by directing “the FTC to promulgate regulations requiring organizations using AI to perform impact assessments and meet other provisions regarding automated decision-making processes.”9

Federal Agency Documents

A third federal avenue through which AI policy is beginning to take shape in the U.S. is the decision-making power of executive agencies. To date, there are developments within several different agencies that portend regulatory crackdowns on some uses of AI technologies, albeit in very specific contexts. 

One agency that AI scholars expect to take meaningful steps is the U.S. Federal Trade Commission (FTC). In April 2021, the FTC published a blog post outlining the enforcement powers it currently holds that could be applied to AI10:

  • Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
  • Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
  • Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

These regulatory powers largely benefit the data subject as a consumer, rather than as a worker. It is therefore not clear how much workplace governance will be impacted by the FTC’s regulatory powers, at least as outlined in this note from April 2021.

Another executive agency development came in June 2021, when the National Science Foundation and the White House Office of Science and Technology Policy formed the ‘National AI Research Resource Task Force’. The purpose of the group is to “provide recommendations on AI research, including on issues of governance and requirements for security, privacy, civil rights and civil liberties. It is due to submit reports to Congress in May 2022 and November 2022.”11 The task force delivered an interim report in May 2022, with the topline conclusion being that “coordinated action is critical.”12 To this end, the “interim report of the NAIRR Task Force focuses on developing a concept that would meet this national need through a shared research cyberinfrastructure connecting researchers to the resources and tools that fuel AI R&D.”

A few months later, in October 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched an initiative to ensure that AI used in hiring and other employment decisions complies with federal civil rights laws. According to the commission, “the initiative will identify best practices and provide guidance on algorithmic fairness and use of AI in employment decisions.” The EEOC then issued its first official guidance on May 12, 2022, which “provides practical tips to employers on how to comply with the Americans with Disabilities Act (“ADA”), and to job applicants and employees who think that their rights may have been violated.”13

The guidance document focuses on the major ways that AI systems might potentially violate the ADA, and the duties of employers to ensure they do not. According to the EEOC, “The most common ways that an employer’s use of algorithmic decision-making tools could violate the ADA are:

  • The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. 
  • The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation…
  • The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.”14

State Legislation and Initiatives 

Across the United States, numerous state and local governments are taking up initiatives to regulate AI in various ways. In fact, there are too many instances to account for all of them here. The U.S. Chamber of Commerce has compiled an ‘AI Legislation Tracker’ to monitor state and local developments related to AI regulation. A review of this tracker reveals that, in most cases, states and local jurisdictions are taking steps to combat the obvious harms AI systems are already exacerbating around discriminatory automated decision-making, privacy, and data ownership. We highlight a few notable examples here that are currently in the works.

In December 2021, District of Columbia Attorney General Karl Racine proposed “rules banning ‘algorithmic discrimination,’ defined as the use of computer algorithms that make decisions in several areas, including education, employment, credit, health care, insurance and housing.”15 As reported by SHRM, “The rules would require companies to document how their algorithms are built, conduct annual audits on their algorithms and disclose to consumers how algorithms are used to make decisions.”

Also in December 2021, New York City passed a law, taking effect in January 2023, that will regulate the use of AI tools in the workplace. Again, SHRM explains that the purpose of the law is to “prevent employers in New York City from using automated employment decision tools to screen job candidates unless the technology has been audited for bias.” It took additional time for regulators to clarify which protected characteristics would have to inform these audits, owing to high levels of participation during public comment periods. The final rules “were adopted in April [2023] and the enforcement date has been postponed to 5 July 2023.”16 Additionally, the law imposes new duties on companies, which will be “required to notify employees or candidates if the tool was used to make hiring decisions.”

The state of California is also taking steps to curb the potential abuses of AI in workplaces. In March 2022, the California Fair Employment and Housing Council published “draft modifications to its employment anti-discrimination laws that would impose liability on companies or third-party agencies administering artificial intelligence tools that have a discriminatory impact.”17 According to the National Law Review, these modifications to existing laws would make it “unlawful for an employer or covered entity to ‘use … automated-decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee … on the basis’ of a protected characteristic, unless the ‘selection criteria’ used ‘are shown to be job-related for the position in question and are consistent with business necessity.’”

A number of other states are passing similar kinds of legislation. A notable example is the Artificial Intelligence Video Interview Act, passed by the state of Illinois, which took effect in January 2020. The central aim of the law is to protect prospective applicants’ privacy through three primary requirements18: (a) requiring consent from applicants prior to the use of AI software, (b) prohibiting the sharing of interview materials with third parties, and (c) requiring the deletion of materials submitted by applicants within a month of their request. Other pending regulatory efforts include Washington, D.C.’s proposed ‘Stop Discrimination by Algorithms Act’, Connecticut’s Senate Bill SB 1103 (‘An Act concerning artificial intelligence, automated decision-making and personal data privacy’), and New Jersey’s Assembly Bill 4909 (requiring bias audits for automated decision-making systems).

Implications for the world of work

The above sections demonstrate a consistent theme when it comes to U.S. regulation of AI and its impact on the world of work: there is simply no systematized approach to regulating how this kind of technology can and will be used in the workplace. For now, a patchwork approach dominates with federal, state, and local legislatures and agencies taking their own measures where they can. At the federal level, in both the U.S. Congress and the Executive branch, there appears to be serious uncertainty over how to proceed beyond spending money on research and development. Great caution abounds due to the fear of ‘killing the golden goose’ of private sector advancements, i.e., the global powerhouse that is Silicon Valley. 

Moreover, the active legislation being pursued, predominantly at the state level, is focused on very specific objectives, mostly confined to preventing the more frequently discussed harms of discriminatory automated decision-making, threats to individual privacy, and concerns over data ownership. However, the proliferation of these limited and targeted bills speaks to the growing concern around the impact of AI on data subjects of all kinds, including workers. Furthermore, the continual creation of task forces and advisory councils at all governing levels may result in more aggressive interventions on behalf of workers managed by artificially intelligent systems and software.

  1. United States Federal Executive Departments, Wikipedia, last modified 30 July 2023, https://en.wikipedia.org/wiki/United_States_federal_executive_departments.
  2. Independent Agencies of the United States Government, Wikipedia, last modified on 27 August 2023, https://en.wikipedia.org/wiki/Independent_agencies_of_the_United_States_government.
  3. Environmental Protection Agency, “Summary of Executive Order 13859 – Maintaining American Leadership in Artificial Intelligence,” last modified July 21, 2023, https://www.epa.gov/laws-regulations/summary-executive-order-13859-maintaining-american-leadership-artificial.
  4. Winston Luo, “President Trump Issues Executive Order to Maintain American Leadership in Artificial Intelligence,” JOLT Digest, March 6, 2019, https://jolt.law.harvard.edu/digest/president-trump-issues-executive-order-to-maintain-american-leadership-in-artificial-intelligence.
  5. TrumpWhiteHouse.gov, “Artificial Intelligence for the American People,” https://trumpwhitehouse.archives.gov/ai/.
  6. Chanley T. Howell, “AI Regulation: Where do China, the EU, and the U.S. Stand Today?,” Foley, August 3, 2022, https://www.foley.com/en/insights/publications/2022/08/ai-regulation-where-china-eu-us-stand-today.
  7. Ibid.
  8. Ron Wyden, United States Senator, “Wyden, Booker and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency And Accountability For Automated Decision Systems,” February 3, 2022, https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-algorithmic-accountability-act-of-2022-to-require-new-transparency-and-accountability-for-automated-decision-systems.
  9. Tam Harbert, “Regulations Ahead on AI,” SHRM, April 2, 2022, https://www.shrm.org/hr-today/news/all-things-work/pages/regulations-ahead-on-artificial-intelligence.aspx.
  10. Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,” Federal Trade Commission, April 19, 2021, https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
  11. Tam Harbert, “Regulations Ahead on AI,” SHRM, April 2, 2022, https://www.shrm.org/hr-today/news/all-things-work/pages/regulations-ahead-on-artificial-intelligence.aspx.
  12. NAIRR Task Force, “Envisioning a National Artificial Intelligence Research Resource” (Washington, D.C.: National Artificial Intelligence Research Resource, May 2022), https://www.ai.gov/wp-content/uploads/2022/05/NAIRR-TF-Interim-Report-2022.pdf.
  13. Smith Gambrell Russell, “EEOC Issues Artificial Intelligence Guidance,” JD Supra, June 2, 2022, https://www.jdsupra.com/legalnews/eeoc-issues-artificial-intelligence-2457882/.
  14. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” U.S. Equal Employment Opportunity Commission, May 12, 2022, https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.
  15. Tam Harbert, “Regulations Ahead on AI,” SHRM, April 2, 2022, https://www.shrm.org/hr-today/news/all-things-work/pages/regulations-ahead-on-artificial-intelligence.aspx.
  16. Airlie Hilliard and Lindsay Carignan, “NYC Bias Audits: Protected Characteristics,” Holistic AI, 2023, https://www.holisticai.com/blog/nyc-bias-audit-protected-characteristics.
  17. Danielle Ochs, “California’s Draft Regulations Spotlight Artificial Intelligence Tools’ Potential to Lead to Discrimination Claims,” The National Law Review, May 13, 2022, https://www.natlawreview.com/article/california-s-draft-regulations-spotlight-artificial-intelligence-tools-potential-to.
  18. Rebecca Heilweil, “Illinois says you should know if AI is grading your online job interviews,” Vox, January 1, 2020, https://www.vox.com/recode/2020/1/1/21043000/artificial-intelligence-job-applications-illinios-video-interivew-act.