ARTIFICIAL INTELLIGENCE: OUR RIGHTS TO MONITOR, WARN, & DECIDE

*** This information is provided subject to Litteral LLP’s Terms & Notices and is presented solely for informational purposes.  Because this information is general in nature, it should not be relied upon or treated as legal advice or a substitute for legal advice.  This information is presented in accordance with Litteral LLP’s aim of enhancing access to the law.  Litteral LLP expresses no opinion as to the merits of a particular case or a particular set of facts.***


The use of AI is expanding across the economy at a rapid clip, outpacing the regulation needed to moderate its negative effects.  Beyond the speed of AI’s integration, information asymmetries at all stages of the development and deployment of AI create fundamental challenges to effective regulation, according to Litteral LLP Partners.  In Recoding A(I) to A(We): Addressing Information Asymmetries for Shared Prosperity, Sean L. Litteral and Elvia M. Lopez outline policy measures targeted toward developers, employees, and consumers to “enhance access to information and facilitate a more harmonious future with AI.”

AI developers largely hold “an exclusive vantage point” concerning critical information about AI’s capabilities and risks, yet even so “developers’ vantage point is limited” given the nature of AI.  As a result, developers withhold information to gain competitive advantages or limit public anxiety, inhibiting oversight and enhancing “risks that create disadvantages for the individual and society-at-large.”  Among various challenges, one information-based problem is that “current incentives reward stagnation by siloing information, punishing its disclosure, and inhibiting freedom of choice.” 

More specifically, companies retain crucial information about AI’s capabilities and threats, employees with non-public information lack avenues to effectively raise concerns, and consumers cannot adequately exercise human-centric preferences.  The proposed policy measures aim to create oversight through a disclosure-based framework, to afford meaningful protections to employees raising concerns about AI, and to adopt labels that distinguish human and AI creations to promote consumer choice.  In doing so, these policy measures would strengthen three corresponding rights:  the Right to Monitor, the Right to Warn, and the Right to Decide.

In an effort to promote public discourse, Litteral LLP will publish a series focusing on the policy proposals in Recoding A(I) to A(We).  Any individual considering legal recourse resulting from the development, deployment, or use of AI should consult a qualified attorney who can evaluate the applicable laws, relevant legal developments, and specific facts of a given case.     


*** This information is provided subject to the disclaimer above and Litteral LLP’s Terms & Notices.***
