Monday, 05 October 2020

Does artificial intelligence challenge corporate governance and the South African legal framework?

Written by
Russel Mulamula, Cert. Dir. and Master's student, Wits University, and Dr. Simo Lushaba, CD(SA) and Chairman of Signium Africa (www.signium.co.za)

Artificial intelligence (AI) is becoming more popular and powerful, and its application is gaining preference in everyday processes and operations. AI uses algorithms that require less human monitoring and supervision. It no longer only performs routine tasks independently; it now makes difficult life-and-death decisions on its own, which raises the question of responsibility and accountability for such decisions.

Are corporate governance and South African law ready for AI? Prof. Tshilidzi Marwala of the University of Johannesburg recently posed a fundamental ethical and accountability question to his Twitter followers: "A self-driving car carrying a passenger encounters a pedestrian. It either has to avoid the pedestrian and kill the passenger or save the passenger and kill the pedestrian. What should it do? Kill the pedestrian, or kill the passenger, or depends on their age, or do as the owner programmed?" Interestingly, 63.8% of respondents said the car should do as the "owner programmed". Who is the owner in this scenario: Tesla Inc (USA) or the owner of the vehicle? Who should be held accountable for culpable homicide? Is it the 'driver' of the driverless car? Is it the manufacturer of the technology, or of the car? What about the directors of the companies involved? Do they take responsibility for how the technology or the car is programmed? How far does their duty of care extend to protect the company from such potential legal suits? Is it in the best interests of the company to produce technologies and machines that may have to make such crucial decisions about people's lives?

Similarly, when a machine is tasked with performing a critical surgery and a patient dies of negligence in the process, who is the owner: Zimmer Biomet (US), Essilor International (France), or the doctor on call at Bara Hospital? Who should be held accountable for the negligence? Where does South African law stand on the questions of culpable homicide and medical negligence?

This article questions whether criminal liability should apply; to whom it should apply; and under which law, noting that AI does not currently enjoy a separate legal status in South Africa. It also considers the corporate governance implications for directors of the companies behind the machines and technologies that will be charged with making such "life-caring" decisions. The article aims to surface the challenges and stimulate debate on these issues; it does not seek to provide answers at this stage on such a complex and intricate matter.

South African law defines culpable homicide simply as "the unlawful, negligent killing of a human being", while medical negligence refers to the breach of a duty of care by a medical practitioner which results in damage or, in this case, death. Negligence occurs when a person's conduct falls below the standards of behaviour established by law for the protection of others against unreasonable risk of harm. A person has acted negligently if he or she has departed from the conduct expected of a reasonably prudent person acting under similar circumstances.

In the case of S v Maarohanye, hip-hop artist Molemo 'Jub Jub' Maarohanye and Themba Tshabalala were both convicted of culpable homicide and sentenced to 10 years' imprisonment. In that case, the law was clear on who should be held accountable, but what would happen if it had been a self-driving car that killed four children? Would we charge a driver for simply switching the car on, or the programmer (manufacturer) who programmed the car to take such a decision? Does the duty of care for medical practitioners extend to the directors of companies that design and manufacture such technologies and machines? Can "thinking" machines be expected to exhibit emotive reasoning such as compassion? Where compassion is required to show care, who carries the responsibility for the actions of the machine? What are the requirements of effective ethical leadership for boards that direct companies creating machines which decide on the livelihoods, safety and health of people?

Currently, our laws would subject the AI machine only to product design legislation or consumer protection laws, under which the offence of negligence applies. It is worth noting that the punishment is not the same. In the case of AI, the matter could only be pursued as a civil suit, in which a fine can be imposed, or result in a simple product recall, while a human being would be languishing in jail for a similar offence. Is this fair and equitable treatment in the eyes of the law?

This is not to argue against the advancement of AI and machine learning. A case has been made that AI improves efficiencies and augments human capabilities. In some cases it saves lives, and many companies are investing in and researching ways in which AI can improve healthcare systems: for example, machines programmed to work out personalised drug protocols, better diagnostic tools, or robots that assist in surgeries. But can these technologies really remove the culpability of the human beings who operate at the coalface? Does culpability transfer from the coalface operators and technicians to the designers and directors of the companies that designed and manufactured the machines and technologies? Are the current legal framework in South Africa and its corporate governance principles sufficient for such technological advancements?

Perhaps AI is good for efficiency, productivity and quality, but it cannot be used to abdicate the responsibility and accountability of the driver or operator. Some argue that this approach of retaining responsibility and accountability with the person defeats the very purpose and key value-add of AI. Whilst we work towards finding the right balance of accountability and responsibility, it seems plausible at this stage that corporate governance and the legal framework in South Africa are challenged when it comes to AI responsibility and the accountability of its creators and operators. What does seem clear at this point is that it is highly unlikely that machines will carry responsibility for their actions, in spite of the intelligence that they possess. The ever-changing nature of this technological landscape would require endless resources to regulate unless the operator and/or creator remains responsible and accountable. Thus, we need policymakers and lawmakers who are in sync with the current realities of AI and its implications.

There is a pressing need to review existing laws and to factor in the accountability of key players in the unfolding machine- and technology-driven environment facilitated by AI. The current legal framework may not adequately cater for the responsibility and accountability issues that arise from AI. Corporate governance needs to develop best practices and principles to guide boards in governing the ethical and other risks that arise from the evolving new world of AI.

-- ENDS --

Co-authored by Russel Mulamula, Cert. Dir. and Master's student, Wits University, and Dr. Simo Lushaba, CD(SA) and Chairman of Signium Africa
www.signium.co.za
Tel: +27 11 771 4800