Responsible Artificial Intelligence in Government: Development of a Legal Framework for South Africa
Keywords: artificial intelligence, responsible AI, legal framework, automated decision-making, algorithm
Various international guideline documents suggest a human-centric approach to the development and use of artificial intelligence (AI) in society, to ensure that AI products are developed and used with due respect for ethical principles and human rights. Key principles contained in these international documents are transparency (explainability), accountability, fairness and privacy. Some governments are using AI in the delivery of public services, but there is a lack of appropriate policy and legal frameworks to ensure responsible AI in government. This paper reviews recent international developments and concludes that an appropriate policy and legal framework must be based on the key principles, contextualised to the world of AI. A national legal framework alone is not sufficient and should be accompanied by a practical instrument, such as an algorithm impact assessment, aimed at reducing risk or harm. Recommendations for a possible South African legal framework for responsible AI in government are proposed.
Copyright (c) 2022 Dirk Brand
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
JeDEM is a peer-reviewed, open-access journal (ISSN: 2075-9517). All journal content, except where otherwise noted, is licensed under the Creative Commons Attribution 3.0 Austria (CC BY 3.0) License.