
On July 21, 2023, the White House announced that it had secured commitments from the leading artificial intelligence companies to manage the risks posed by AI. As stressed in the press release and in news articles since, these commitments are just the beginning of a longer process to ensure the “safe, secure, and transparent” development of AI. The press release (and articles) also emphasized the voluntary nature of the commitments, noting that the Administration is currently developing an executive order and will pursue bipartisan legislation, presumably to expand on the commitments and make them compulsory. Advocacy groups and some members of Congress, in turn, heralded the announcement as a “good first step” but stressed the need for guardrails that would actually be enforceable.

Not enforceable? Actually, the FTC can enforce these pledges.

True, the commitments provide wiggle room, using words like “developing” and “prioritizing” and, in some cases, reflecting practices that are already common among these companies. And true, the tech companies only agreed to commitments they wanted to agree to; other issues may have been left on the cutting room floor. For example, there don’t appear to be commitments regarding the data inputs that “teach” the algorithm how to “think.” (See this critique in the New York Times.)
