Subject description - BE4M36SMU
BE4M36SMU | Symbolic Machine Learning | ||
---|---|---|---|
Roles: | PV, PO | Extent of teaching: | 2P+2C |
Department: | 13136 | Language of teaching: | EN |
Guarantors: | Kuželka O. | Completion: | Z,ZK |
Lecturers: | Kuželka O., Šír G., Železný F. | Credits: | 6 |
Tutors: | Too many persons | Semester: | L |
Web page:
https://cw.fel.cvut.cz/wiki/courses/smu/start
Annotation:
This course consists of four parts. The first part explains methods through which an intelligent agent can learn by interacting with its environment, known as reinforcement learning; this includes deep reinforcement learning. The second part focuses on Bayesian networks, specifically methods for inference. The third part covers fundamental topics in natural language processing, starting from the basics and ending with state-of-the-art architectures such as the transformer. Finally, the last part provides an introduction to selected topics from computational learning theory, including the online and batch learning settings.
Course outlines:
1. | Reinforcement Learning - Markov decision processes | |
2. | Reinforcement Learning - Model-free policy evaluation | |
3. | Reinforcement Learning - Model-free control | |
4. | Reinforcement Learning - Deep reinforcement learning | |
5. | Bayesian Networks - Intro | |
6. | Bayesian Networks - Variable elimination, importance sampling | |
7. | Natural Language Processing 1 | |
8. | Natural Language Processing 2 | |
9. | Natural Language Processing 3 | |
10. | Natural Language Processing 4 | |
11. | Computational Learning Theory 1 | |
12. | Computational Learning Theory 2 | |
13. | Computational Learning Theory 3 | |
14. | Course Wrap Up |
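As a rough illustration of the first lecture topic (Markov decision processes), here is a minimal value-iteration sketch on a made-up two-state MDP; the transition model and rewards are invented for this example and do not come from the course materials:

```python
# Value iteration on a toy finite MDP (illustration only, not course material).
# P[s][a] = list of (probability, next_state, reward) transitions.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor

def value_iteration(P, gamma, tol=1e-8):
    """Iterate the Bellman optimality update until values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Best expected return over actions, one-step lookahead.
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

V = value_iteration(P, GAMMA)
```

For this toy MDP the fixed point can be checked by hand: state 1 loops on itself with reward 2, giving V(1) = 2 / (1 - 0.9) = 20, and state 0's best action moves to state 1, giving V(0) = 1 + 0.9 * 20 = 19.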
Exercises outline:
1. | Reinforcement Learning - Markov decision processes | |
2. | Reinforcement Learning - Model-free policy evaluation | |
3. | Reinforcement Learning - Model-free control | |
4. | Reinforcement Learning - Deep reinforcement learning | |
5. | Bayesian Networks - Intro | |
6. | Bayesian Networks - Variable elimination, importance sampling | |
7. | Natural Language Processing 1 | |
8. | Natural Language Processing 2 | |
9. | Natural Language Processing 3 | |
10. | Natural Language Processing 4 | |
11. | Computational Learning Theory 1 | |
12. | Computational Learning Theory 2 | |
13. | Computational Learning Theory 3 | |
14. | Course Wrap Up |
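Exercise 6 mentions importance sampling for Bayesian networks. A minimal likelihood-weighting sketch on a made-up two-node network (the probabilities below are invented for illustration, not taken from the course):

```python
import random

random.seed(0)

# Toy network: Rain -> Wet.  P(Rain) = 0.2,
# P(Wet=True | Rain) = 0.9, P(Wet=True | not Rain) = 0.1.
P_RAIN = 0.2
P_WET = {True: 0.9, False: 0.1}

def likelihood_weighting(n):
    """Estimate P(Rain | Wet=True): sample the non-evidence variable
    from its prior and weight each sample by P(evidence | parents)."""
    num = den = 0.0
    for _ in range(n):
        rain = random.random() < P_RAIN  # sample Rain from its prior
        w = P_WET[rain]                  # likelihood of the evidence Wet=True
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting(200_000)
```

The exact posterior here is P(Rain | Wet) = 0.2 * 0.9 / (0.2 * 0.9 + 0.8 * 0.1) = 0.18 / 0.26, roughly 0.692, which the estimate approaches as the sample count grows.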
Literature:
R. S. Sutton, A. G. Barto: Reinforcement Learning: An Introduction. MIT Press, 2018.
D. Jurafsky, J. H. Martin: Speech and Language Processing, 3rd edition draft.
M. J. Kearns, U. Vazirani: An Introduction to Computational Learning Theory. MIT Press, 1994.
Requirements:
Students can get a maximum of 100 points, which is the sum of the project score and the exam score. A minimum of 25 (out of 50) exam points is required to pass the exam, and a minimum of 25 (out of 50) project points is required to obtain the assessment.
Subject is included into these academic programs:
Program | Branch | Role | Recommended semester |
MEBIO1_2018 | Bioinformatics | PV | 2 |
MEBIO4_2018 | Signal Processing | PV | 2 |
MEBIO2_2018 | Medical Instrumentation | PV | 2 |
MEBIO3_2018 | Image Processing | PV | 2 |
MEOI7_2018 | Artificial Intelligence | PO | 2 |
MEOI9_2018 | Data Science | PO | 2 |
MEOI8_2018 | Bioinformatics | PO | 2 |
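The grading rules above (50 project points plus 50 exam points, with a minimum of 25 in each part) can be sketched as a small check; the function name is invented for illustration:

```python
def grade_total(project_pts: float, exam_pts: float) -> tuple[float, bool]:
    """Return (total points, passed) per the rules stated above:
    max 50 points per part, at least 25 in each part required."""
    assert 0 <= project_pts <= 50 and 0 <= exam_pts <= 50
    passed = project_pts >= 25 and exam_pts >= 25
    return project_pts + exam_pts, passed
```

For example, 40 project points with only 20 exam points totals 60 but does not pass, because the exam minimum is not met.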
Page updated 21.11.2024 17:51:36, semester: L/2023-4, L/2024-5, Z/2024-5, Z/2025-6.
Proposal and Realization: I. Halaška (K336), J. Novák (K336)