Course description - BECM36AIS
| BECM36AIS | AI and Society | | |
|---|---|---|---|
| Role: | P | Extent of teaching: | 1P+1C |
| Department: | 13136 | Language of instruction: | EN |
| Guarantors: | Střítecký V. | Assessment: | ZK |
| Lecturers: | Střítecký V., Vostal F. | Credits: | 6 |
| Tutors: | Střítecký V., Vostal F. | Semester: | Z |
Annotation:
The course introduces students to topics that combine a technical understanding of ML/AI safety and security with the social and philosophical dimensions of ML/AI. The focus is on explaining the limitations of ML/AI in high-risk scenarios and on helping students understand how to design robust, fair, and accountable ML/AI lifecycles that address societal concerns over technology. The course will also show students how to navigate the complex regulatory environment emerging in response to rising concerns over the impacts of ML/AI on society.
Lecture syllabus:
1. Open vs. closed development in ML/AI and its security implications
2. Learning from observations in the causal world: What does it mean for robustness?
3. Alignment of ML models and lessons learned: social choice theory
4. Fairness, bias, and other normative issues impacting the social acceptability of ML/AI
5. Foundations for safety and security of ML: how to reason about the open world?
6. Sociotechnical vulnerabilities of ML/AI
7. ML/AI policy and regulatory approaches
8. Epistemology of inductive inference and ML/AI: a tale of two traditions
9. Ethics of ML/AI development practices
10. The real-world misusability of generative models
11. Social accountability of corporate ML/AI development
12. Philosophical origins of the AI existential-risk debate
Tutorial syllabus:
Literature:
* Hendrycks, D., Carlini, N., Schulman, J., Steinhardt, J. (2022). Unsolved Problems in ML Safety. https://arxiv.org/abs/2109.13916
* Ashmore, R., Calinescu, R., Paterson, C. (2021). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. ACM Computing Surveys 54(5).
* Casper, S. et al. (2023). Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. https://arxiv.org/pdf/2307.15217
* Conitzer, V. et al. (2024). Social Choice for AI Alignment: Dealing with Diverse Human Feedback. https://arxiv.org/abs/2404.10271
* Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., Crawford, K. (2021). Datasheets for Datasets. Communications of the ACM 64(12).
* Barocas, S., Hardt, M., Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. Cambridge, MA: The MIT Press.
* Mökander, J., Axente, M., Casolari, F. et al. (2021). Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds & Machines. https://doi.org/10.1007/s11023-021-09577-4
* Weidinger, L. et al. (2023). Sociotechnical Safety Evaluation of Generative AI Systems. https://arxiv.org/abs/2310.11986
* Moynihan, T. (2020). Existential Risk and Human Extinction: An Intellectual History. Futures 116(102495).
Requirements:
The course is included in the following study plans:
| Plan | Branch | Role | Recommended semester |
|---|---|---|---|
| MPPRGAI_2025 | Before assignment to a branch | P | 1 |