TDT31 - The regulation of artificial intelligence
Teacher: Anna Leida Mölder
With the increased use of artificial intelligence, the role of the software engineer and computer science researcher has shifted from implementer to decision maker. The legal framework covering AI is insufficient to regulate its use, leaving the developer responsible for coding with ethical use in mind. Three aspects are of particular importance, and are in part covered by existing laws: inclusiveness, privacy and transparency. Inclusiveness implies non-discrimination and fair use. Privacy affects aspects such as traceability and the proper identification of data and metadata. Both have raised concerns in AI development because of the effect of training-data selection. Regarding transparency, traditional artificial intelligence using multilayered networks has been described as a black box, due to the sheer volume of information involved. Complete explainability may not be possible, or even useful. However, as a successful technology shift depends on trust, some degree of transparency is required. This course will cover which aspects of developing AI systems are affected by regulations in these fields and which are not, and will discuss existing and new solutions in system design. It will cover the different aspects of data identification, use of metadata, machine learning, and coding using both centralized and distributed systems.
Knowledge: The students will gain an in-depth understanding of the considerations needed when developing an AI system for use in large-scale decision making. They should be able to assess the effect of their design choices on intended and unintended stakeholders, such as specific minority groups, applications, and users with specific demands for data security and data privacy preservation. The course will focus in particular on how to design AI systems for non-discriminating, privacy-preserving and interpretable solutions. The student will understand the different aspects of AI modeling in a cloud solution and in edge computing, using e.g. federated learning and updating patterns, and be able to assess a proper trade-off between interpretability and completeness of explanation within a FAT (fairness, accountability and transparency) framework.
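The federated learning and updating patterns mentioned above can be illustrated with one aggregation round of federated averaging. This is a minimal sketch, not course material: the function name `fed_avg` and the plain-list model representation are illustrative assumptions.

```python
# Minimal sketch of one federated averaging round (illustrative only).
# Each client contributes a locally trained weight vector and its sample
# count; only these aggregates leave the clients, while the raw training
# data stays local, which is the privacy-preserving property at stake.

def fed_avg(client_weights, client_sizes):
    """Aggregate client models as a sample-size-weighted average."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Two clients with unequal data volumes: the larger client dominates
# the updated global model.
updated = fed_avg([[1.0, 0.0], [0.0, 1.0]], [3, 1])
print(updated)  # [0.75, 0.25]
```

In a full system this round would repeat: the server sends the aggregated weights back to the clients, which train locally again before the next aggregation.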
Competence: The students should be able to understand the legal framework surrounding artificial intelligence, from both the developer's and society's points of view. They should be able to assess their own role as both developers and decision makers, and evaluate their own AI projects for traceability, interpretability, transparency and ethical use.
This course will be seminar-based. The syllabus will be a selected set of scientific articles and handout material. The students will be required to read the material in advance and take turns preparing discussion material and questions.
Participation in seminars. The date of the first seminar will be announced at the beginning of September.
John Weaver. Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. ISBN-13: 978-1440829451
Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. Federated Machine Learning: Concept and Applications. ACM Transactions on Intelligent Systems and Technology, January 2019, Article No. 12. https://doi.org/10.1145/3298981
Hugo Jair Escalante, Sergio Escalera, et al. Explainable and Interpretable Models in Computer Vision and Machine Learning (2018). ISBN-13: 978-3319981307
Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, Ankur Taly. Explainable AI in Industry: Practical Challenges and Lessons Learned: Implications Tutorial. FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, p. 699. https://doi.org/10.1145/3351095.3375664