Artificial Intelligence and the Law: Five Observations

  1. AI is not just a type of application. AI is an application-enabling infrastructure.
  2. Having a universally accepted AI taxonomy is a first step in establishing an AI-relevant legal framework. (I first described this taxonomy in a 2012 presentation at SLS.)
  3. Synchronizing AI law with standard-setting organizations and thought leaders (e.g., ISO, IEC, IEEE, NIST, IARPA, DARPA) will be an important ongoing effort that ensures (among other things) continued relevance.
  4. Early implementation of information-sharing best practices will be invaluable for mitigating AI-related risks; doing so will become legally required, and the failure to do so will, and should, be seen as unreasonable.
  5. Maintaining AI-related supply chain transparency and accountability will become more challenging and complex, but blockchain-enabled smart contracts can help manage the risk (a minimal sketch follows this list).
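
To make item 5 concrete, here is a minimal Python sketch, assuming hypothetical component names and attestations, of the hash-chained provenance record that a blockchain-enabled smart contract would maintain automatically: each supply-chain entry commits to the one before it, so later tampering is detectable.

```python
import hashlib
import json
import time


def record_entry(prev_hash: str, component: str, supplier: str, attestation: str) -> dict:
    """Append-only provenance record: each entry commits to the previous one."""
    entry = {
        "component": component,
        "supplier": supplier,
        "attestation": attestation,   # e.g., "model weights v1.3, audit passed" (hypothetical)
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry


def verify_chain(chain: list[dict]) -> bool:
    """Recompute each hash and confirm the links between entries are intact."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        if i > 0 and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


if __name__ == "__main__":
    chain = [record_entry("GENESIS", "vision-model", "Vendor A", "training-data audit passed")]
    chain.append(record_entry(chain[-1]["hash"], "planning-module", "Vendor B", "ISO/IEC conformity declared"))
    print("chain intact:", verify_chain(chain))
```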

***Postscript***

April 7, 2020: When it comes to using AI in medical devices, specifically AI with autonomous learning capabilities, the FDA’s Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) contains relevant guidance that syncs with some of the ideas I presented here. One example is the idea of “locking” algorithms and unlocking them (for updates) only with FDA review. This ties in with the July 19, 2019 update below, specifically my thoughts on hard-coding and algorithmically isolating the AI’s acceptable-behavior schema. This locking function advances the FDA’s goal of ensuring that the AI-based SaMD’s performance, safety and effectiveness are properly maintained throughout its lifecycle.
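
To illustrate the locking idea, here is a minimal sketch, assuming a hypothetical ReviewApproval record standing in for the regulator’s review step: the deployed algorithm’s parameters stay locked, and an update that arrives without a documented approval is simply refused.

```python
from dataclasses import dataclass
from typing import Optional


class LockedAlgorithmError(RuntimeError):
    """Raised when a parameter update arrives without a documented review approval."""


@dataclass(frozen=True)
class ReviewApproval:
    """Hypothetical stand-in for the regulator's (e.g., FDA) review record."""
    reviewer: str
    reference_id: str


class LockedModel:
    """A deployed SaMD algorithm whose parameters stay 'locked' between reviews."""

    def __init__(self, parameters: dict):
        self._parameters = dict(parameters)

    def predict(self, features: dict) -> float:
        # Placeholder inference; a real device would run the approved model here.
        return sum(self._parameters.get(name, 0.0) * value for name, value in features.items())

    def apply_update(self, new_parameters: dict, approval: Optional[ReviewApproval]) -> None:
        if approval is None:
            raise LockedAlgorithmError("update rejected: no review approval on file")
        self._parameters = dict(new_parameters)


if __name__ == "__main__":
    model = LockedModel({"heart_rate": 0.2})
    try:
        model.apply_update({"heart_rate": 0.5}, approval=None)
    except LockedAlgorithmError as err:
        print(err)                                   # update rejected: no review approval on file
    model.apply_update({"heart_rate": 0.5}, ReviewApproval("FDA review", "REF-0001"))
    print(model.predict({"heart_rate": 80.0}))       # 40.0
```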

October 14, 2019: In terms of item 3 above: ‘Blind execution’ is a mission-centric characterization; the AI’s objective is its sole focus, nothing else. In rules-based expert systems, blind execution is not problematic. It is actually a desirable feature, and one that does not carry unnecessary liability for the programmer and end user. In more complex systems, such as autonomous vehicles, however, the desirability of strict objective adherence can be significantly diluted. Instead, an algorithm capable of efficient risk analysis, one employing game-theoretic features (per Prof. Stuart Russell), is desirable; think of it as an ‘intelligent deviation’ capability. As part of the effort to develop AI standards, it is important to delineate the types of applications that need to be capable of intelligent deviation. Doing so effectively will aid in (among other things) building relevant liability metrics. The IEEE P7000 series of standards projects, for example, includes at least one effort that touches on this: P7008, the “Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems.” How well it synchronizes with other similar efforts will either promote or undermine the effectiveness of the AI legal-liability framework.
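
As a rough illustration of intelligent deviation (the candidate actions, payoffs and risk estimates below are hypothetical, not a real vehicle policy), the agent scores each option by expected mission value minus weighted expected harm, and deviates to a safe fallback when no option clears the threshold:

```python
RISK_WEIGHT = 5.0          # how heavily estimated harm is penalized (hypothetical)
DEVIATION_THRESHOLD = 0.0  # below this score, the agent abandons the mission action


def score(action: dict) -> float:
    """Expected mission payoff, discounted by weighted expected harm."""
    return action["objective_value"] - RISK_WEIGHT * action["estimated_harm"]


def choose_action(candidates: list[dict]) -> dict:
    """Pick the best-scoring action; deviate to a safe stop if none is acceptable."""
    best = max(candidates, key=score)
    if score(best) < DEVIATION_THRESHOLD:
        return {"name": "safe_stop", "objective_value": 0.0, "estimated_harm": 0.0}
    return best


if __name__ == "__main__":
    candidates = [
        {"name": "proceed_through_intersection", "objective_value": 1.0, "estimated_harm": 0.4},
        {"name": "yield_and_reroute", "objective_value": 0.7, "estimated_harm": 0.01},
    ]
    print(choose_action(candidates)["name"])  # -> yield_and_reroute
```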

September 23, 2019: Deepfakes are becoming an increasingly complex and potentially dangerous problem. AI’s ability to spot them can be augmented through effective information-sharing practices that build a deepfake ontology of all known…how should I put it…”fakers.” AI deepfake detectors can then use this ontology to measure the likelihood that a given video has been altered before it is viewed. Whether the deepfake video is flagged or deleted becomes a user-based setting that can also become part of the AI Risk Ratio (discussed here).
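
Here is a minimal sketch of how such a shared ontology might feed a detector’s risk score; the ontology entries, metadata fields and flag/delete thresholds are hypothetical placeholders.

```python
KNOWN_FAKER_ONTOLOGY = {
    # source fingerprint -> prior likelihood that content from it is synthetic (hypothetical)
    "generator:face-swap-toolkit-x": 0.9,
    "account:repeat-offender-123": 0.8,
}

USER_SETTINGS = {"flag_above": 0.5, "delete_above": 0.95}  # user-based setting


def alteration_likelihood(video_metadata: dict, detector_score: float) -> float:
    """Combine the detector's own score with the shared ontology's prior."""
    prior = max(
        (KNOWN_FAKER_ONTOLOGY.get(src, 0.0) for src in video_metadata.get("sources", [])),
        default=0.0,
    )
    # Simple combination rule for illustration: take the stronger of the two signals.
    return max(prior, detector_score)


def disposition(likelihood: float) -> str:
    """Apply the user's flag/delete preferences to the combined likelihood."""
    if likelihood >= USER_SETTINGS["delete_above"]:
        return "delete"
    if likelihood >= USER_SETTINGS["flag_above"]:
        return "flag"
    return "allow"


if __name__ == "__main__":
    meta = {"sources": ["account:repeat-offender-123"]}
    likelihood = alteration_likelihood(meta, detector_score=0.4)
    print(likelihood, disposition(likelihood))  # 0.8 flag
```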

September 13, 2019: Brain-machine interface (BMI) applications, be they noninvasive (positioned on the body) or invasive (inserted into the body), significantly amplify the liability concerns we are already familiar with from, for example, implantable medical devices. The liability-amplifying variable here is capability: the BMI’s potential to cause wide-ranging harm is far greater than that of a legacy medical device. For instance, injecting a virus carrying nanobots to fight a disease or to carry out another mission is vastly different from implanting a pacemaker and carries a far greater intrinsic operational risk. Iterative liability, XAI, and the regulation of AI discussed in this post coalesce into a normative and legal safety net that can help mitigate the risks associated with BMIs.

July 19, 2019: Regulating AI behavior is necessary to mitigate harm. One approach is imposing a legal requirement that, prior to deployment, the AI be certified as having passed training on what constitutes acceptable behavior. (Another way to think of this certification is that once the AI passes this training, it is in effect licensed to operate.) The AI’s acceptable-behavior framework, the learning set, is constructed from a variety of universally accepted criteria, including, for example, applicable international standards, which helps yield uniform application and operational performance. The AI’s acceptable-behavior model is then algorithmically isolated in the application (be it cyber or cybernetic) and hard-coded, meaning it is made operationally independent of the AI’s other capabilities and thus immune to iterative code changes. This acceptable-behavior approach dynamically disciplines the AI, enabling real-time deterrence and, in effect, regulation of its behavior.
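
A minimal sketch of what this algorithmic isolation might look like in practice, assuming hypothetical rule names: the acceptable-behavior schema lives in a frozen structure the learned policy cannot modify, and every proposed action is checked against it before execution.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AcceptableBehaviorSchema:
    """Hard-coded behavior rules, kept operationally independent of the learned model."""
    prohibited_actions: frozenset = frozenset({"exceed_speed_limit", "disable_safety_monitor"})
    max_force_newtons: float = 50.0


SCHEMA = AcceptableBehaviorSchema()  # certified once, prior to deployment


def guard(proposed_action: str, force_newtons: float = 0.0) -> bool:
    """Return True only if the learned policy's proposal conforms to the schema."""
    if proposed_action in SCHEMA.prohibited_actions:
        return False
    return force_newtons <= SCHEMA.max_force_newtons


if __name__ == "__main__":
    print(guard("grip_object", force_newtons=20.0))   # True
    print(guard("disable_safety_monitor"))            # False
```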

July 12, 2019: In relation to item 5 above: An AI application’s quality profile that evaluates the six NIST metrics (reliable, robust, trustworthy, secure, portable and interoperable) can be delivered using principles similar to those employed in a SOC 2 Type 2 report. Contractually requiring that such a report be delivered annually (for example) can help monitor vendor performance and identify operational variances that trigger timely remedies. And the more complex the AI application, the more desirable and beneficial such a report becomes.
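
A minimal sketch, assuming hypothetical scores and a contractual threshold, of how such an annual quality-profile report could be assembled around the six metrics named above; this is in the spirit of, not an implementation of, a SOC 2 Type 2-style attestation.

```python
METRICS = ("reliable", "robust", "trustworthy", "secure", "portable", "interoperable")


def build_report(period: str, scores: dict, threshold: float = 0.8) -> dict:
    """Compare observed metric scores to a contractual threshold and list variances."""
    variances = [m for m in METRICS if scores.get(m, 0.0) < threshold]
    return {
        "period": period,
        "scores": {m: scores.get(m, 0.0) for m in METRICS},
        "variances": variances,        # each variance would trigger a contractual remedy
        "overall_pass": not variances,
    }


if __name__ == "__main__":
    report = build_report("2019", {"reliable": 0.92, "robust": 0.75, "trustworthy": 0.9,
                                   "secure": 0.88, "portable": 0.81, "interoperable": 0.86})
    print(report["overall_pass"], report["variances"])  # False ['robust']
```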