Responsible Artificial Intelligence in Government: Development of a Legal Framework for South Africa

Authors

D. Brand
DOI:

https://doi.org/10.29379/jedem.v14i1.678

Keywords:

artificial intelligence, responsible AI, legal framework, automated decision-making, algorithm

Abstract

Various international guideline documents suggest a human-centric approach to the development and use of artificial intelligence (AI) in society, to ensure that AI products are developed and used with due respect for ethical principles and human rights. Key principles contained in these international documents are transparency (explainability), accountability, fairness and privacy. Some governments are using AI in the delivery of public services, but there is a lack of appropriate policy and legal frameworks to ensure responsible AI in government. This paper reviews recent international developments and concludes that an appropriate policy and legal framework must be based on the key principles, contextualised to the world of AI. A national legal framework alone is not sufficient and should be accompanied by a practical instrument, such as an algorithm impact assessment, aimed at reducing risk or harm. Recommendations for a possible South African legal framework for responsible AI in government are proposed.

Published

19.07.2022

How to Cite

Brand, D. (2022). Responsible Artificial Intelligence in Government: Development of a Legal Framework for South Africa. JeDEM - EJournal of EDemocracy and Open Government, 14(1), 130–150. https://doi.org/10.29379/jedem.v14i1.678

Issue

Vol. 14 No. 1 (2022)
Section

Research Papers