| BECM36AIS | AI and Society | | |
| Roles: | P | Extent of teaching: | 1P+1C |
| Department: | 13136 | Language of teaching: | EN |
| Guarantors: | Střítecký V. | Completion: | ZK |
| Lecturers: | Střítecký V., Vostal F. | Credits: | 6 |
| Tutors: | Střítecký V., Vostal F. | Semester: | Z |
Annotation:
The course introduces students to topics that combine technical understanding of ML/AI safety and security with social
and philosophical dimensions of ML/AI. The focus is on explaining limitations of ML/AI in high-risk scenarios and on
helping students understand how to design robust, fair, and accountable ML/AI lifecycles that address societal concerns
over technology. The course will also show students how to navigate the complex regulatory environment emerging in
response to rising concerns over impacts of ML/AI on society.
Course outlines:
| 1. | | Open vs. closed development in ML/AI and its security implications |
| 2. | | Learning from observations in the causal world: What does it mean for robustness? |
| 3. | | Alignment of ML models and lessons learned: social choice theory |
| 4. | | Fairness, bias, and other normative issues impacting social acceptability of ML/AI |
| 5. | | Foundations for safety and security of ML: how to reason about the open world? |
| 6. | | Sociotechnical vulnerabilities of ML/AI |
| 7. | | ML/AI policy and regulatory approaches |
| 8. | | Epistemology of inductive inference and ML/AI: a tale of two traditions |
| 9. | | Ethics of ML/AI development practices |
| 10. | | The real-world misusability of generative models |
| 11. | | Social accountability of corporate ML/AI development |
| 12. | | Philosophical origins of the existential risks AI debate |
Exercises outline:
Literature:
* Hendrycks, D., Carlini, N., Schulman, J., Steinhardt, J. (2022). Unsolved Problems in ML Safety. https://arxiv.org/abs/2109.13916.
* Ashmore, R., Calinescu, R., Paterson, C. (2021). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. ACM Computing Surveys 54(5).
* Casper, S. et al. (2023). Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. https://arxiv.org/pdf/2307.15217.
* Conitzer, V. et al. (2024). Social Choice for AI Alignment: Dealing with Diverse Human Feedback. https://arxiv.org/abs/2404.10271.
* Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., Crawford, K. (2021). Datasheets for datasets. Communications of the ACM 64(12).
* Barocas, S., Hardt, M., Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. Cambridge, MA: The MIT Press.
* Mökander, J., Axente, M., Casolari, F. et al. (2021). Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds & Machines. https://doi.org/10.1007/s11023-021-09577-4.
* Weidinger, L. et al. (2023). Sociotechnical Safety Evaluation of Generative AI Systems. https://arxiv.org/abs/2310.11986.
* Moynihan, T. (2020). Existential risk and human extinction: An intellectual history. Futures 116(102495).
Requirements:
Subject is included in these academic programs:
| Page updated 17.12.2025 09:52:19, semester: L/2026-7, L/2025-6, L/2024-5, Z/2025-6, Z/2026-7 |
Proposal and Realization: I. Halaška (K336), J. Novák (K336) |