The boom of Artificial Intelligence (AI) has brought benefits and challenges alike. One particular concern about the application of AI is the imposition of liability. So far, establishing responsibility where an AI system causes harm has proven difficult for a variety of reasons. Primarily, AI systems lack transparency, which makes imposing liability difficult. AI systems also operate with a degree of autonomy, which makes it difficult to impose liability on the programmer, software developer or user, even though most liability regimes are human-oriented. This creates a lacuna in which the perpetrator of the crime, tort or harm, often the AI system itself, goes unpunished. This article examines current liability regimes and highlights their shortcomings in determining culpability. It also proposes various liability regimes aimed not only at making amends for wrongs committed but also at deterring future harm.
PART I: Introduction
Artificial Intelligence (AI) has been dubbed a marker of the Fourth Industrial Revolution (4IR) owing to its role in helping countries and cultures around the world transition technologically.[1] AI contributes to the 4IR by automating and substituting labour across multiple economies.[2] Remarkably, there is no single definition of AI. This is attributable to a number of factors, chief among them the distinctive nature and operation of AI, which appears to evolve over time.[3] The European Union, however, has defined AI as ‘a system, whether software or hardware embedded, that exhibits intelligent behavior by gathering, processing, evaluating, and understanding its surroundings and by taking autonomous actions to achieve predefined goals.’[4] This description aptly captures two key characteristics of AI: adaptability, the capacity to continuously improve performance by learning from experience, and autonomy, the capacity to accomplish tasks in an uncontrolled environment.[5]
AI has been critical in actualising a number of sustainable development goals (SDGs).[6] These SDGs include, among others, eradicating poverty and hunger (SDGs 1 and 2, respectively), ensuring good health (SDG 3), raising the standard of education (SDG 4), and promoting economic growth (SDG 8).[7] In Kenya, artificial intelligence is being used in the healthcare industry to speed up disease identification and treatment.[8] AI also significantly facilitates digital trading, or e-commerce, through the creation and adaptation of smart contracts and smart loans.[9] AI systems are also employed in the agricultural industry to identify diseases in crops and livestock as well as to evaluate the best options for farmers.[10] In the judicial sector, AI has expedited the delivery of justice through processes such as Online Dispute Resolution and the use of teleconferencing and e-filing systems.[11] Consequently, AI has demonstrated enormous promise for driving change in Kenya and throughout Africa.
It is fascinating to note that, according to the 2022 Government AI Readiness Index, Sub-Saharan Africa has one of the lowest levels of AI preparedness internationally, despite the pressing need for AI systems.[12] The fundamental purpose of this index is to examine the actions taken by governments to apply AI by looking into three significant pillars: government, the technology sector, and data and infrastructure.[13] According to the index, Kenya, which currently holds the 90th spot, is one of the few African nations in the global top 100.[14] Additionally, according to UNCTAD, Least Developed Countries (LDCs) such as Kenya are unprepared to adopt and adapt to the technology revolution.[15] This is because LDCs have fewer resources, less advanced technology, and less productive industries, which could effectively impede the achievement of the SDGs.[16]
Kenya has arguably taken steps to reap the greatest benefits from AI. In 2018, the government created a Blockchain and Artificial Intelligence Taskforce to provide recommendations on how best to exploit AI.[17] The taskforce recommended, among other things, that the government create laws that support AI while safeguarding human rights.[18] Unfortunately, this has yet to be accomplished, as the main AI-related issues addressed in legislation are data privacy and cybercrime, under the Data Protection Act of 2019 and the Computer Misuse and Cybercrimes Act of 2018 respectively.[19] Additional recommendations include creating an AI-friendly ecosystem, assessing the hazards of AI, and putting measures in place to mitigate them.[20] Notwithstanding Kenya's absence of national legislation to regulate AI, it is important to remember that Kenya is a signatory to international agreements such as the African Union Convention on Cyber Security and Personal Data Protection, whose scope includes the handling of private information by AI systems.[21]
Despite all of its advantages, AI still confronts significant difficulties. These include bias that worsens gender inequity and problems with data protection.[22] Further discouraging the use of AI is the perception that its incorporation into many industries may result in job losses.[23] The issue of liability is one particular concern linked with AI and is the focus of this article. The million-dollar question is: in the event that an AI system malfunctions and causes harm to a third party, who is to be held accountable? A good example is the radiotherapy machine designed by Atomic Energy of Canada Limited, which delivered lethal doses of radiation to cancer patients due to a system malfunction.[24] The question of liability proved difficult to settle, as it was asserted that certain hospitals had upgraded the system, further complicating the issue of culpability.[25] Regrettably, given the continued rise of telemedicine and other AI-driven industries, such a scenario is likely to occur in Kenya. It is therefore imperative that Kenya and other African countries enact legislation imposing liability on the various actors in the event of injury caused by an AI system. These policies should be consistent with the Organisation for Economic Co-operation and Development's (OECD) five AI principles.[26] The aforementioned principles include:[27]
● That the development of AI systems should promote democracy, the rule of law, human rights, and diversity;
● That AI should be a catalyst for inclusive growth and sustainable development;
● That AI systems should be transparent so that people may comprehend them better;
● That organizations and people responsible for creating or using AI systems be held accountable when those systems cause harm; and
● That potential risks associated with AI be continuously assessed and managed.
Yet, imposing culpability is easier said than done. This is due to a variety of factors, including the diversity of liability regimes and the opaque nature of AI systems.[28] The purpose of this study is to draw attention to the ambiguity surrounding liability for AI, in the hope of offering greater clarity on the issue. To that end, this paper will be divided into the following sections: Part II will explore the obstacles to the imposition of liability. Part III will concentrate on the types of liability in different jurisdictions and highlight their applicability. Part IV will highlight various models for determining culpability and discuss their drawbacks. The liability of AI in the Kenyan setting will be covered in Part V. Finally, Parts VI and VII will be set aside for the recommendations and conclusion, respectively.
PART II: Challenges that hinder the imposition of liability
AI systems create a one-of-a-kind environment in which decisions may be far removed from human decision-making, unpredictable, and opaque.[29] This poses a quandary in determining culpability. Another issue that complicates assigning responsibility is whether AI qualifies as a legal person. This section explores the many problems surrounding the imposition of responsibility and seeks to create a deeper understanding of the nature of AI.
Legal personhood is the capacity to exercise rights and perform obligations.[30] Its scope includes the subject of legal responsibility.[31] Legal personhood extends not only to natural persons but also to non-human entities such as corporations.[32] The EU Parliament, in its Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, proposed awarding robots electronic personhood, which would be critical in holding them liable in case of harm.[33] The motion tabled suggested that robots be allowed to own property, pay taxes and pay damages whenever harm is occasioned to another.[34] Governments such as Saudi Arabia, where artificial intelligence has grown rapidly, have granted AI legal personhood.[35] To be more exact, an artificial humanoid named Sophia has been awarded legal personality and is now required to perform the corresponding obligations.[36] This implies that if Sophia takes a harmful action, she may be held liable.[37] The victims of Sophia's harm are also entitled to compensation under civil liability.[38] Legal personhood for AI nonetheless remains an illusion in many jurisdictions where AI has thrived and continues to thrive, including Kenya.[39] This considerably adds to the difficulty in apportioning blame when fully autonomous AI systems inflict harm.
Autonomy is a predominant feature of AI, as was previously mentioned. AI systems use machine learning algorithms to process data and produce outcomes.[40] This means that as long as they are provided with enough data, they require little to no supervision.[41] AI systems' capacity for independent learning can lead them to reach conclusions that are unanticipated and difficult to understand.[42] Typically, such decisions should fall within the range of anticipated outcomes, although this is not always the case.[43] Because the AI system's decisions are so remote from human oversight, it can be challenging to determine liability when a mishap happens at this stage.[44] In South Africa, for example, restrictions are in place to ensure that doctors rely on their own knowledge to best treat patients, regardless of recommendations from AI systems.[45] This reflects a recognition that AI occasionally has the potential to produce absurd and unforeseen outcomes. Yet, this approach might only be effective in situations where humans still have some influence over AI systems.[46] It would be rendered ineffectual where such control is relaxed, as in the case of self-driving cars and fully autonomous AI systems.[47]
Connectivity to other systems is a key component of how AI functions.[48] In order to learn and operate, AI is heavily dependent on data.[49] For instance, in order to operate effectively, self-driving automobiles must communicate with other vehicles, traffic signs, and other traffic signals.[50] This interconnectedness presents a stumbling block in determining culpability since it increases the number of actors who could potentially be held accountable.[51] The problem is further complicated by the fact that the participants have no control over what the other parties do with their data and, as a result, will be reluctant to accept responsibility for acts carried out indirectly through them.[52] Also, because there are so many passive participants and interacting AI systems, it can be challenging to identify the source of a defect and who was responsible for it.[53]
AI systems are also opaque in the sense that they lack transparency in their operation and performance.[54] Choices made by AI systems are often inexplicable. In addition to using pre-processed data, machine learning algorithms also employ their own methods of trial and error to arrive at outcomes.[55] Because of this, even "reliable" AI systems might not be as transparent as desired.[56] This is known as the "black box" nature of AI, and it affects liability in a variety of ways.[57] Fundamentally, AI learning systems are complex and difficult to comprehend, requiring considerable effort and technical expertise.[58] As a result, legal practitioners and legislators must understand the fundamentals of AI in order to assign blame to the party at fault.[59] Furthermore, in the case of a liability claim, the parties in possession of the data and algorithms that explain how the injury occurred have no incentive to share the information, because doing so would expose them to liability.[60] The lack of transparency also makes determining causation difficult, since the claimant must discharge the burden of proving both injury and cause.[61] This burden is particularly onerous, as it will be very challenging for victims to establish causation where the system is not transparent.[62]
It is also difficult to hold AI systems accountable where AI is created and used on a global basis.[63] This is because different countries have different legal frameworks, ethical dilemmas, and cultural perspectives on AI.[64] So, where an AI system is created across multiple jurisdictions with differing liability regimes, or without any liability regimes at all, a mistake made by the AI system may be difficult to remedy.
PART III: Types of liability
The diversity of liability regimes, as discussed in the previous section, is one impediment to establishing liability for AI defects. It is fascinating to note that most liability regimes rely on the human element to impose liability in the absence of express regulations.[65] This means that the AI system cannot be held liable; rather, the system's programmer, developer, or user will be held liable for any harm caused by the system. The purpose of this section is to discuss liability regimes and to highlight the relevance of each.
a. Civil liability
The term "civil liability" describes the legal obligation to make up for damage or loss brought on to another person or piece of property.[66] Civil liability in the context of AI refers to the responsibility of individuals, businesses, or governments for any harm or loss caused by AI systems.[67] As AI systems become more autonomous, the question of who is ultimately responsible for their actions becomes more complicated. Civil liability includes several liability regimes, including contractual liability, tortious liability, products liability, and strict liability.
b. Contractual Liability
This liability regime is appropriate when the AI system is the subject of a sales contract. Under contract law, a supplier-recipient relationship may include an exclusion clause excluding liability in the event of defective AI.[68] Depending on how the court interprets the exclusion clause, the supplier may or may not be held liable for the faulty AI. Furthermore, claims may be brought before the court for harm suffered by a claimant who relies on the implied term regarding the fitness of the product, i.e., the AI system.[69] The implied term must therefore be interpreted by the court in relation to the AI system. In dealing with cases where an AI system that is the subject of a contract has caused harm, it is a well-established principle that the loss suffered must not be too remote to be recoverable under contractual liability.[70]
Nonetheless, there are weaknesses in the contractual liability regime. In many jurisdictions, including Kenya, AI is not considered a good under the Sale of Goods Act, which governs contractual transactions involving the sale of goods.[71] In the United Kingdom, for example, in St Albans City and District Council v International Computers and Computer Associates UK Ltd v Software Incubator Ltd, the courts determined that computer software does not qualify as a good under the Sale of Goods Act of 1979.[72] This would make it particularly challenging to establish contractual liability under the Act, since AI would be similarly classified.[73]
c. Tortious Liability
This liability regime is frequently used when a claimant has difficulty proving contractual liability.[74] Under this regime, the claimant may bring a negligence claim in order to impose liability on a party who is not subject to contractual liability.[75] Duty of care, causation and foreseeability must be proven in order to establish negligence.[76] Arguably, the foreseeability component of negligence may be particularly challenging to demonstrate in the context of AI due to the "black box" nature of AI.[77] Nonetheless, if the claimant can establish a causal link between the supplier's or developer's conduct and the defect in the AI system that caused harm, the latter may be held liable under the tort of negligence.[78]
d. Product Liability
Product liability is a hybrid of contractual and tortious liability.[79] It addresses remedies for injuries caused by product defects as well as product misrepresentation.[80] This regime covers negligence, design flaws, manufacturing flaws, failure to warn, and breach of warranty.[81] To avoid liability in a negligence claim, for example, manufacturers must create products that are safe when used in a reasonably foreseeable manner.[82] Thus, if a claimant used the product in a reasonably foreseeable manner and suffered harm as a result, he may argue that the manufacturer was negligent in failing to foresee that specific outcome. Another critical aspect of product liability is strict liability, which is discussed below.
e. Strict Liability
Case law demonstrates a global shift away from relying on negligence to define liability and toward the imposition of strict liability regimes.[83] This is supported by the belief that consumers have a right to safe products.[84] This type of liability makes the supplier responsible for any product flaws, regardless of negligence or intent. Manufacturers who fail to disclose potential risks associated with their products are primarily subject to strict liability.[85] The test of strict liability was established in the locus classicus case of Rylands v Fletcher, which states: “The person who for his own purposes brings on his lands and collects and keeps there anything likely to do mischief if it escapes, must keep it in at his peril, and, if he does not do so, is prima facie answerable for all the damage which is the natural consequence of its escape.”[86] Thus, the ultimate goal of this regime is to ensure that a user who suffers harm as a result of a defective AI system is entitled to a remedy, as he does not have to bear the burden of determining precisely where the defect in the AI system lies.
f. Criminal liability
Two requirements must be met in order to establish criminal liability. The person must first be proven to have committed an act or omission (actus reus).[87] Furthermore, mens rea, the mental element, is also required to establish liability.[88] Establishing the actus reus in the context of AI is relatively easy. However, proving the mens rea element of an AI system itself creates a formidable challenge. It is critical to understand that in many jurisdictions, AI is not recognised as a legal person.[89] As a result, AI, like animals, generally lacks the necessary mens rea for criminal liability.[90] It is worth noting, however, that even humans have occasionally displayed behaviour that makes determining the mental component difficult.[91] In these situations, the court has imposed liability based on fault.[92]
g. Fault-based Liability
Under these common law principles, the claimant must demonstrate that, in addition to the product being defective, the supplier acted negligently by failing to act in a way that would have prevented the harm.[93] A clear illustration of this is where a doctor is held liable for failing to scrutinise the treatment recommendations made by AI software before administering treatment to patients. In this instance, the medical professional remains responsible for mistakes and omissions in treatment that were reasonably foreseeable.[94] Strikingly, this type of liability excludes harm caused by unknown or unforeseeable flaws.[95] The implication is that while it would be unjust to hold the practitioner liable for unforeseeable defects, the patient would at the same time be left without recourse.
Part IV: Models of determining liability
This section aims to explore three essential models for determining culpability: the perpetration-by-another liability model, the natural-probable-consequence model, and the direct liability model.[96] It is crucial to highlight that these models may be used in concert to determine criminal responsibility in particular.[97]
a) The perpetration-by-another liability model
This model promotes the idea that AI can be utilized as a conduit for criminal activity.[98] In this instance, the offender will be held accountable, and AI will be deemed an innocent agent, just as a minor or a person of unsound mind would be under the same circumstances.[99] The model is based on the justification that a person or thing cannot be held accountable if it lacks the freedom to make its own decisions.[100] Because AI systems rely so significantly on the data that is provided to them, they are thought to be incapable of making decisions on their own.[101] This concept integrates the strict liability system, according to which the programmer is held accountable for crimes committed by the system. In addition, the harm caused by the defect will be the responsibility of the user or programmer who should have been able to predict it.[102]
Under this model, the perpetrator could be one of two people: the programmer or the end user.[103] When a programmer creates a robot, for instance, embeds it with software, and the robot then commits a crime, the programmer will be held accountable for the crime.[104] On the other hand, the user is responsible if the AI system violates the law while being used by the user for personal gain.[105] This master-servant relationship between the AI and the user justifies the imposition of culpability. Although the AI system committed the crime in both instances, satisfying the actus reus of criminal culpability, the mental component of the AI system is difficult to determine.[106] Hence, courts do not emphasise the mens rea of the system or of the perpetrators.
This approach is ideal when a user or programmer uses an AI system to commit a crime without making use of any of its further capabilities.[107] It might also be applied to outdated AI systems that have not been updated to more recent, sophisticated ones.[108] In each of these situations, AI is used as a tool to commit the crime.[109] Yet, the model would not work where a fully autonomous AI system committed a crime on its own.[110]
b) The natural-probable-consequence model
The fundamental premise of this model is that the AI system is under the control of its programmer, who had no intention of using the system to commit any crime.[111] Nonetheless, the AI system breaks the law while performing its daily tasks. The users and programmers were neither involved in perpetrating the crime nor aware that it had been committed.[112] To determine culpability, this approach relies on the programmer's or user's ability to foresee the offence.[113] According to this argument, a person is legally responsible when the crime committed was a natural and probable consequence of the AI's behaviour.[114]
Under this theory, the programmer or user must have acted negligently.[115] It is not necessary for them to have known that the crime would be committed; it is sufficient that its commission was a natural and probable consequence of the AI's routine actions.[116] This theory attempts to address culpability in situations where the programmer or user could have foreseen the commission of the offence, but it cannot address the question of whether the AI itself should be held accountable for the offence.[117]
c) The direct liability model
This model aims to hold the AI system itself accountable.[118] The justification for assigning blame is based on the notion that an AI system should be held accountable if both the actus reus and the mens rea requirements for criminal responsibility can be demonstrated.[119] Since the actus reus in criminal proceedings involving AI entails an action or omission by the system, proving it is quite simple.[120] The actual stumbling block is demonstrating the internal element. Liability is subject to the mental elements of knowledge, intent, and negligence.[121] The mens rea criterion is deemed to have been met when an AI system exhibits awareness of the external element or was developed with a specific aim or purpose, such as to commit a crime.[122] In light of this, there is no justification for not imposing culpability when an AI system satisfies both the mens rea and actus reus of the offence.[123] In such a scenario, the AI system's criminal accountability is imposed in addition to that of the programmer or user.[124] Hence, the criminal culpability of AI is not reliant on that of the programmer or user; rather, the programmer or user and the AI system may be found jointly accountable as accomplices and abettors.[125]
Part V: Liability of AI- The Kenyan Experience
AI has played an important role in the development of various industries across the country. Digital credit, for example, has been a game changer in loan issuance.[126] A study by the University of Nairobi's Institute for Development Studies, University College London's Institute for Advanced Studies, and Lawyers Hub estimates that at least six million Kenyans have taken out at least one digital loan.[127] In such cases, artificial intelligence is used to automate lending decisions and risk assessments.[128] What happens, then, if consumer data held by a mobile lending app is compromised, or if the AI system's improper creditworthiness assessment leads to over-indebtedness or defaults? Who will be held accountable for the harm caused?
Kenyan law is often technology-neutral and applies the same legal standards to AI as it does to other technologies. As such, it lacks AI-specific liability regimes.[129] The implication is that establishing culpability where an AI system causes harm is challenging. Currently, to impose liability, Kenya relies on several legal frameworks, including intellectual property law, contract law, tort law, and data protection legislation. Primarily, the Constitution of Kenya guarantees the right to a fair hearing and access to justice.[130] Thus, where AI causes harm or damage, those affected can seek redress in court.
Under intellectual property law, copyright law may be used to safeguard software code and works produced by AI.[131] Under the Copyright Act, the first owner of the copyright is the author.[132] Generally, an author is identified as a person or legal entity that creates the literary work, photograph or computer programme.[133] Consequently, an AI system is not recognised as an author and cannot have any claim under copyright law.[134] The implication is that, as it cannot claim any rights, it is also not subject to any liabilities. Kenya, like India and Hong Kong, has attributed any rights and liabilities relating to AI-generated works to the person who made the necessary arrangements for the creation of the work in question.[135] This may indicate that a user or computer programmer may be held liable where an AI system is in breach of copyright law.
In situations where AI is employed in a contractual relationship, such as when an AI system is used to provide automated customer support, contract law suffices. Liability in such circumstances would most likely be determined by the terms of the parties' contract.[136] As discussed in the previous section, the Sale of Goods Act is silent on whether AI is considered a good.[137] Thus, applying the Act to supplier-consumer transactions may present a legal problem. Tort law may apply when AI causes harm or damage, such as when an AI system erroneously prescribes medication for a patient.[138] Depending on the specifics of the situation, liability may be placed on the AI system's owner, software developer or user. In such a case, the claimant ought to establish that the harm occasioned by the system was owing to the negligence of the system owner, manufacturer or developer.[139]
In accordance with the Data Protection Act (DPA), it is the responsibility of data controllers and processors to ensure that the processing of personal data complies with the law.[140] However, it is unclear who is responsible when an AI system is independently involved in the processing of personal data. One potential problem is that AI systems occasionally make decisions that are hard to justify.[141] If something goes wrong, this lack of transparency may make it difficult to assign fault. For instance, it might be challenging to determine who is responsible when an AI system makes a bad choice that harms a person: the data controller or processor, the AI system, or both.
The DPA mandates that data controllers and processors put in place the necessary organizational and technical safeguards to ensure the security of personal data in order to address this problem.[142] This includes putting in place safeguards to prevent unauthorized or unlawful processing, accidental loss or destruction of personal data, and data damage.[143] It is also worth noting that AI systems are not exempt from the DPA's requirement to obtain data subjects' consent before processing their personal data.[144] Individuals must be informed about the use of AI in the processing of their personal data by data controllers and processors, and their explicit consent must be obtained.[145]
The Consumer Protection Act (CPA) provides for the liability of suppliers and manufacturers of goods and services to consumers.[146] A supplier is defined by the Act as someone who provides goods or services in the course of their business, whereas a manufacturer is someone who makes, assembles, or produces goods.[147] A supplier or manufacturer is liable under the CPA for any harm caused to a consumer by the goods or services provided.[148] This includes damage caused by defective goods or services, a failure to provide adequate instructions or warnings, and a breach of an express or implied warranty.
In the case of AI, liability may arise when a consumer is harmed as a result of an AI system's decision.[149] For example, if an AI-powered recommendation system recommends a product that harms a consumer, the system's supplier or manufacturer may be held liable under the CPA.[150] However, liability may not arise if the harm caused was not reasonably foreseeable or if the consumer was aware of the risks associated with the product or service's use.[151] Furthermore, liability may be reduced if the supplier or manufacturer can demonstrate that they took reasonable precautions to avoid the harm.[152]
Ultimately, establishing responsibility for AI in Kenya requires a case-by-case analysis that takes into account the specifics of each instance as well as the pertinent legal frameworks. Kenyan legislators are advised to draft specific laws and rules to address the liability issues raised by the use of AI as the technology becomes more widespread.
Part VI: Recommendations
Kenya must adopt an AI liability regime that is consistent with the ever-changing nature of AI. To address these issues, a greater emphasis on transparency and accountability in the development and deployment of AI systems is required.[153] This would help mitigate the black-box nature of AI. Consequently, developers must work to make AI systems more transparent so that people can understand how they make decisions. Legislators should develop a liability framework for AI that takes a human-centric approach, in line with the European Commission's recommendations.[154] This legal framework should be based on transparency, accountability, and the protection of human rights.[155] Additionally, there needs to be a greater emphasis on international coordination, so that there is a unified approach to the development and deployment of AI.[156]
Kenya's legal system must distinguish between accountability for AI as a technology and accountability for AI as an individual. This paper argues that the legal personhood of AI may be the key to establishing civil liability.[157] This is because granting legal personhood imposes obligations on the AI. When an AI system violates its obligations and causes harm, the judicial system may award restitution to the victim by holding the AI system liable.[158] Damages can only be awarded against an AI if it is able to own property, which is only possible if AI attains legal personhood.[159] Furthermore, granting AI systems legal personhood may help to circumvent the difficulty of imposing liability on fully autonomous AI systems. Thus, there is a need to sufficiently address the legal status of AI before assigning liability.
It is critical to note that any discussion of the liability of AI must also consider the enforcement of court orders and what happens when an AI system fails to comply with them. Ideally, courts may hold AI systems in contempt and may order the system to pay fines or impose other sanctions. In doing so, the courts may consider factors such as the intent of the AI and the extent to which the AI's behaviour was foreseeable. Thus, holding AI in contempt requires a nuanced approach that balances its autonomous and adaptable nature.
This paper recognizes that no single type of liability can govern the use of AI as AI is dynamic and has infiltrated many aspects of human life. As a result, it proposes integrating various liability regimes to govern the imposition of AI liability. In particular, when drafting regulations to impose criminal liability, consideration should be given to the three models mentioned above: natural consequence, direct liability, and perpetration by another liability.
Legislators should also take into account the concept of joint liability.[160] Under such a regime, the developer and user, the developer and AI system, or the user and AI system can be held jointly liable for any harm caused.[161] Furthermore, legislators should enact an adapted duty of care, such as additional obligations on AI system suppliers to monitor and maintain those systems in order to control for unexpected outcomes arising from machine learning.[162] The implication of this revised duty of care is that AI systems will cause less harm because they are actively maintained. However, if the AI system causes harm as a result of system flaws, the system operator or supplier will be held liable.
Part VII: Conclusion
The purpose of this paper was to explore the various liability regimes for AI while outlining the difficulties in imposing responsibility. As stated, the opaque, adaptable, and autonomous nature of AI systems creates a particularly unique situation that makes imposing liability difficult. In addition to the models for imposing liability, this paper has highlighted various forms of liability that are essential in ensuring that harms caused by AI are remedied. Since AI is so flexible, these liability regimes cannot function in isolation. Therefore, it is urgent for legislators to pass integrated liability regimes to guarantee that harm caused by AI systems is not left unattended. Legislators should work to make sure that these liability regimes serve to both remedy the harm already done and prevent further harm. After all, an ounce of prevention is worth a pound of cure.
* The author is a third-year student at the University of Nairobi, an intern at the National Council for Law Reporting and a News Editor at Jurist. She is also the Editor-in-Chief of the University of Nairobi Law Journal. Among many other interests, the author is passionate about Artificial Intelligence and, in particular, the nexus between AI and the Law.
[1] International Labour Organisation, The Fourth Industrial Revolution, Artificial Intelligence, and the Future of Work in Egypt, (2021), 8.
[2] David Mhlanga, ‘Artificial Intelligence in the Industry 4.0, and Its Impact on Poverty, Innovation, Infrastructure Development, and the Sustainable Development Goals: Lessons from Emerging Economies?’ (2021) 13 Sustainability MPDI, 6.
[3] University of Helsinki, ‘Elements of AI’ <https://course.elementsofai.com/1/1> accessed 12 February 2023.
[4] European Parliament Resolution 20 October 2020 with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies [2020] A9-0186, Art 4.
[5] University of Helsinki, (n 3).
[6] Arthur Gwagwa, Patti Katchidza, Kathleen Siminyu, Matthew Smith, ‘Responsible Artificial Intelligence in Sub Saharan Africa: Landscape and General State of Play’ (2021) 5 AI4D <https://ircai.org/wp-content/uploads/2021/03/AI4D_Report_Responsible_AI_in_SSA.pdf> accessed 12 February 2023.
[7] Transforming our world: the 2030 Agenda for Sustainable Development, (adopted on 21 October 2015), UNGA A/RES/70/1, <https://www.refworld.org/docid/57b6e3e44.html> accessed 10 February 2023.
[8] Jackline Akello, Artificial Intelligence in Kenya, Padigrim Initiative (2022), 4.
[9] Ibid.
[10] Ibid.
[11] Megha Shawani, ‘Alternate Dispute Resolution and Artificial Intelligence; Boom or Bane?’ (2020) 2(1) LexForti Legal Journal, 2. AI can be used in ODR systems as a neutral to examine documents or as a neutral in itself to provide optimum solutions to parties.
[12] Annys Rogerson, Emma Hankins et al, Government AI Readiness Index 2022, (Oxford Insights, 2022) <https://www.unido.org/sites/default/files/files/2023-01/Government_AI_Readiness_2022_FV.pdf> accessed 12 February 2023.
[13] Ibid.
[14] Ibid.
[15] United Nations Conference on Trade and Development, Technology and Innovation Report, (2021) 31.
[16] United Nations Conference on Trade and Development, The Least Developed Countries, (2020).
[17] Muthoki Mumo, ‘Tech Dream Team to Produce Kenya’s Blockchain Roadmap’ (Business Daily, 28 February 2018) <https://www.businessdailyafrica.com/corporate/tech/Ndemo-taskforce-Kenya-blockchain-roadmap-ICT/4258474-4323074-gjwgqnz/index.html> accessed 13 February 2021.
[18] Distributed Ledgers and Artificial Intelligence Taskforce, Emerging Digital Technologies for Kenya: Exploration and Analysis, (2019) <https://www.ict.go.ke/blockchain.pdf> accessed 13 February 2023.
[19] Data Protection Act, 2019 and Computer Misuse and Cybercrimes Act, 2018
[20] Distributed Ledgers and Artificial Intelligence Taskforce, (n 19).
[21] African Union, African Union Convention on Cyber Security and Personal Data Protection (Adopted on 27 June 2014) AU <https://au.int/sites/default/files/treaties/29560-treaty-0048_-_african_union_convention_on_cyber_security_and_personal_data_protection_e.pdf> accessed 13 February 2023.
[22] Sheridan Wall and Hilke Schellmann, ‘LinkedIn’s job matching AI was biased. The company’s solution? More AI’ (MIT Technology Review, 23 June 2021) < https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/> accessed 10 February 2023.
[23] Ibrahim Godofa, ‘Artificial Intelligence and Its Future in Arbitration’ (2020) 4(1) JCMSD, 10.
[24] Lee Gluyas, Stefanie Day, ‘Who is liable when AI fails to perform?’ (CMS, 2018) <https://cms.law/en/gbr/publication/artificial-intelligence-who-is-liable-when-ai-fails-to-perform> accessed 10 February 2023.
[25] Ibid.
[26] OECD, ‘Recommendation of the Council on Artificial Intelligence’ (Adopted on 22 May 2019) OECD/LEGAL/0449 <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449> accessed on 13 February 2023.
[27] Ibid.
[28] Nieves Briz and Allison Bender, ‘Key challenges of artificial intelligence: Liability for AI decisions’ (Dentons, 2021) <https://www.businessgoing.digital/key-challenges-of-artificial-intelligence-liability-for-ai-decisions/> accessed 13 February 2023.
[29] Ibid.
[30] Bryan Garner and Henry Campbell Black, Black’s Law Dictionary (7th edn, St. Paul, Minn: West Group 1999), defining a person at law as any being whom the law regards as capable of rights and duties.
[31] Ibid.
[32] Visa AJ Kurki, A Theory of Legal Personhood, (Oxford University, 2019), 3.
[33] Committee on Legal Affairs, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (European Parliament, A8-0005/2017), 18.
[34] Ibid; Raed Alnimer and Eman Naboush, ‘The extent of the civil liability of technologies for the infection and the spread of Covid-19’ (2022) 25(2) Journal of Legal, Ethical and Regulatory Issues, 1-16.
[35] Alistair Walsh, ‘Saudi Arabia grants robot citizenship’ (DW, 28 October 2017) <https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856> accessed 10 February 2023.
[36] Ibid.
[37] Ibid.
[38] Ibid.
[39] Kenya Copyright Board, ‘Copyright in the Age of Artificial Intelligence’ (Copyright News) <https://copyright.go.ke/sites/default/files/newsletters/issue-38.pdf> accessed on 10 February 2023.
[40] Giangiacomo Olivi and Brendan Graves, ‘Dentons Artificial Intelligence Guide 2022: The AI journey—opening our eyes to opportunity and risk’ (Dentons, 2022) <https://www.businessgoing.digital/dentons-artificial-intelligence-guide-2022-the-ai-journey-opening-our-eyes-to-opportunity-and-risk/> accessed on 11 February 2023.
[41] Michael Da Silva, ‘Autonomous Artificial Intelligence and Liability: A Comment on List’ (2022) 35 (44) Philosophy & Technology.
[42] Ibid.
[43] Ibid.
[44] Ibid.
[45] Dustee Lee Donnelly, ‘First Do No Harm: Legal Principles Regulating the Future of Artificial Intelligence in HealthCare in South Africa’ (2022) 25 PER, 2.
[46] Michael Da Silva (n 46).
[47] Michael Da Silva (n 46).
[48] Michael Da Silva (n 46).
[49] Christiane Wendehorst, ‘Strict Liability for AI and other Emerging Technologies’, (2020) 11(2) JETL 160.
[50] Michael Da Silva (n 46).
[51] Christiane Wendehorst, (n 54).
[52] Report from the Expert Group on Liability and New Technologies, Liability for Artificial Intelligence and other emerging digital technologies (European Commission, 2019).
[53] Philip Boucher, ‘Artificial intelligence: How does it work, why does it matter, and what can we do about it?’ (2020) European Parliamentary Research Service.
[54] Matthew Fenech, Nika Strukelj and Olly Buston, Ethical, social and political challenges of artificial intelligence in health, (Future Advocacy report for the Wellcome Trust, 2018).
[55] Anirudh V K, ‘How Does Artificial Intelligence Learn Through Machine Learning Algorithms?’ (Spiceworks, 10 February 2022) <https://www.spiceworks.com/tech/artificial-intelligence/articles/how-does-ai-learn-through-ml-algorithms/> accessed 13 February 2023.
[56] Ibid; Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31(2) Harvard Journal on Law & Tech, 889.
[57] Philip Boucher, ‘Artificial intelligence: How does it work, why does it matter, and what can we do about it?’ (2020) European Parliamentary Research Service, 19; Yaniv Benhamou & Justine Ferland, ‘Artificial Intelligence & Damages: Assessing Liability and Calculating the Damages’ in Giuseppina D’Agostino, Aviv Gaon & Carole Piovesan, (eds), Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law (Toronto: Thomson Reuters Canada, 2021) at 8.
[58] Expert Group on Liability and New Technologies, New Technologies Formation, Liability for Artificial Intelligence and Other Emerging Digital Technologies (Luxembourg: Publications Office of the European Union, 2019), 52-59.
[59] Ibid.
[60] Ibid; Lee Akazaki, ‘Failing to Predict the Past: Will Legal Causation Kill Tort Law in Cyberspace?’ [2017] Annual Review of Civil Litigation 27
[61] Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31(2) Harvard Journal of Law & Technology, 922.
[62] Ibid.
[63] Michael Da Silva (n 46).
[64] Ibid.
[65] Leon Wein, ‘The Responsibility of Intelligent Artefacts: Towards an Automation Jurisprudence’ (1992) 6 Harvard Journal of Law & Technology.
[66] Jean-Sebastien Borghetti, ‘Civil Liability for Artificial Intelligence: What Should its Basis Be?’ (2019) SSRN, 1.
[67] Ibid.
[68] Ibid.
[69] Ibid.
[70] Phillip Kelly, Marcus Walsh, Sofia Wyzykiewicz and Simone Young-Alls, ‘Man vs Machine: Legal liability in Artificial Intelligence contracts and the challenges that can arise’ (DLA piper, 6 October 2021) <https://www.dlapiper.com/en/insights/publications/2021/10/man-vs-machine-legal-liability-artificial-intelligence-contracts> accessed 14 February 2023.
[71] The Sale of Goods Act 1979 (UK) and the Sale of Goods Act, Cap 31 (Kenya) do not recognise AI as a good.
[72] St Albans City and District Council v International Computers [1996] 4 All ER 481; Computer Associates UK Ltd v Software Incubator Ltd [2018] EWCA Civ 518.
[73] Phillip Kelly, Marcus Walsh, Sofia Wyzykiewicz and Simone Young-Alls, ‘Man vs Machine: Legal liability in Artificial Intelligence contracts and the challenges that can arise’ (DLA piper, 6 October 2021) <https://www.dlapiper.com/en/insights/publications/2021/10/man-vs-machine-legal-liability-artificial-intelligence-contracts> accessed 14 February 2023.
[74] Phillip Kelly, Marcus Walsh, Sofia Wyzykiewicz and Simone Young-Alls, ‘Man vs Machine: Legal liability in Artificial Intelligence contracts and the challenges that can arise’ (DLA piper, 6 October 2021) <https://www.dlapiper.com/en/insights/publications/2021/10/man-vs-machine-legal-liability-artificial-intelligence-contracts> accessed 14 February 2023; Winnipeg Condominium Corporation No. 36 v. Bird Construction Co., [1995] 1 S.C.R. 85 the court noted that establishing the duty of care under the tort of negligence is crucial where there is no contractual relationship between the party that could allow for recovery of damages.
[75] Thomasen Kristen, ‘AI and Tort Law’ in Florian Martin-Bariteau & Teresa Scassa (eds), Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021).
[76] Donoghue v. Stevenson, [1932] A.C. 562 (H.L.) at 580–581 on the duty of care; Mustapha v. Culligan of Canada Ltd on causation.
[77] Brian Sheppard, ‘Warming up to Inscrutability: How Technology Could Challenge Our Concept of Law’ (2018) 68:1 UTLJ 36.
[78] Thomasen, Kristen, (n 76).
[79] John Villasenor, ‘Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation’ (2014) Washington DC Brookings Institution, 7.
[80] John Villasenor, Products liability law as a way to address AI harm, (AI Governance Series, 2019).
[81] Ibid.
[82] Ibid.
[83] Christiane Wendehorst, ‘Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks’ in Silja Voeneky, Phillip Kellmeyer, et al (eds), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (Cambridge Law Handbooks, 2022).
[84] John Villasenor, (n 81).
[85] Ibid.
[86] (1868) LR 3 HL 330.
[87] Aryashree Kunhambu and Akshita Rohatgi, ‘Artificial intelligence and the shift in liability’ (ipleaders, 9 September 2021) <https://blog.ipleaders.in/artificial-intelligence-shift-liability/> accessed on 14 February 2023.
[88] Aryashree Kunhambu and Akshita Rohatgi, ‘Artificial intelligence and the shift in liability’ (ipleaders, 9 September 2021) <https://blog.ipleaders.in/artificial-intelligence-shift-liability/> accessed on 14 February 2023.
[89]Aryashree Kunhambu and Akshita Rohatgi, ‘Artificial intelligence and the shift in liability’ (ipleaders, 9 September 2021) <https://blog.ipleaders.in/artificial-intelligence-shift-liability/> accessed on 14 February 2023.
[90] Aryashree Kunhambu and Akshita Rohatgi, ‘Artificial intelligence and the shift in liability’ (ipleaders, 9 September 2021) <https://blog.ipleaders.in/artificial-intelligence-shift-liability/> accessed on 14 February 2023.
[91] Ibid.
[92] Ibid.
[93] Dustee Lee Donnelly, (n 46), 20.
[94] Dustee Lee Donnelly, (n 46), 20.
[95] Dustee Lee Donnelly, (n 46), 20.
[96] Gabriel Hallevy, ‘The Basic Models of Criminal Liability of AI Systems and Outer Circles’ (2019) SSRN,1.
[97] Ibid.
[98] Lawrence Solum, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 NCL REV, 1231.
[99] Gabriel Hallevy, (n 97).
[100] Gabriel Hallevy, ‘The Criminal Liability of Artificial Intelligence Entities - from Science Fiction to Legal Social Control, (2010) 4(2)(1) Akron Intellectual Property Journal, 179.
[101] Ibid.
[102] Gabriel Hallevy, (n 97), 3.
[103] Gabriel Hallevy, (n 101), 179.
[104] Ibid.
[105] Gabriel Hallevy, (n 97), 3.
[106] Lawrence Solum (n 99), 69; George Cross and Cary Debessonet, ‘An Artificial Intelligence Application in the Law: CCLIPS, A Computer Program that Processes Legal Information’ (1986).1 HIGH TECH. L.J. 329.
[107] Compare with; Andrew Wu, ‘From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs’ (1997) 25 AIPLA Q.J, 131.
[108] Ibid.
[109] Ibid.
[110] Gabriel Hallevy, (n 101), 181.
[111] Ibid.
[112] Gabriel Hallevy, (n 97).
[113] Ibid.
[114] Ibid.
[115] Robert Fine and Gary Cohen, ‘Is Criminal Negligence a Defensible Basis for Criminal Liability?’ (1966) 16 BUFF L REV. 749; Herbert L.A. Hart, ‘Negligence, Mens Rea and Criminal Responsibility’ (1961) 29 Oxford Essays in Jurisprudence.
[116] Gabriel Hallevy, (n 97), 6.
[117] Ibid.
[118] Ibid.
[119] Ibid.
[120] Joshua Dressler and Stephen Garvey, Criminal law: cases and materials (7th Ed West Academic Publishing 2016).
[121] Gabriel Hallevy, (n 101), 188.
[122] Ibid.
[123] Joshua Dressler and Stephen Garvey, Criminal law: cases and materials (7th Ed West Academic Publishing 2016).
[124] Ibid.
[125] Ibid.
[126] Mark Gaffley, Rachel Adams and Ololade Shyllon, Artificial Intelligence. African Insight A Research Summary of the Ethical and Human Rights Implications of AI in Africa (HSRC & Meta AI and Ethics Human Rights Research Project for Africa, 2022), 5.
[127] Ibid.
[128] Ibid.
[129] Jackline Akello, (n 8), 5.
[130] Constitution of Kenya 2010, Art 48 and 50.
[131] Copyright Act, Cap 130.
[132] Ibid, S2.
[133] Ibid.
[134] Kenya Copyright Board, ‘Copyright in the Age of Artificial Intelligence’ (Copyright News) <https://copyright.go.ke/sites/default/files/newsletters/issue-38.pdf> accessed on 10 February 2023.
[135] Kenya Copyright Board, ‘Copyright in the Age of Artificial Intelligence’ (Copyright News) <https://copyright.go.ke/sites/default/files/newsletters/issue-38.pdf> accessed on 10 February 2023.
[136] Supra 72.
[137] Phillip Kelly, Marcus Walsh, Sofia Wyzykiewicz and Simone Young-Alls (n 62).
[138] Thomasen, Kristen, (n 76).
[139] Ibid.
[140] Data Protection Act, 2019.
[141] Giangiacomo Olivi and Brendan Graves, ‘Dentons Artificial Intelligence Guide 2022: The AI journey—opening our eyes to opportunity and risk’ (Dentons, 2022) <https://www.businessgoing.digital/dentons-artificial-intelligence-guide-2022-the-ai-journey-opening-our-eyes-to-opportunity-and-risk/> accessed on 11 February 2023.
[142] Data Protection Act, 2019.
[143] Ibid.
[144] Ibid.
[145] Ibid.
[146] Consumer Protection Act, 2012.
[147] Ibid.
[148] Ibid.
[149] John Villasenor, (n 81).
[150] Ibid.
[151] Ibid.
[152] Ibid.
[153] Richard Roovers, ‘Transparency and Responsibility in Artificial Intelligence A call for explainable AI’ (Deloitte, 2019).
[154] Communication from The Commission to The European Parliament, The Council, The European Economic and Social Committee and The Committee of the Regions, Building Trust in Human-Centric Artificial Intelligence, (Brussels COM 168 Final, 2019).
[155] Tambiama Madiega, Artificial intelligence liability directive (European Parliamentary Research Service, 2023).
[156] Ibid.
[157] Committee on Legal Affairs, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (European Parliament, A8-0005/2017), 18.
[158] Ibid.
[159] Ibid.
[160] Amanda Leiu, ‘Artificial Intelligence ('AI'): Legal Liability Implications’ (Burges and Salmon, 30 January 2020) < https://www.burges-salmon.com/news-and-insight/legal-updates/commercial/artificial-intelligence-legal-liability-implications> accessed 11 February 2023.
[161] Ibid.
[162] Tambiama Madiega, Artificial intelligence liability directive (European Parliamentary Research Service, 2023).