Subject description - B4M36SMU
B4M36SMU | Symbolic Machine Learning | | |
---|---|---|---|
Roles: | PV, PO | Extent of teaching: | 2P+2C |
Department: | 13136 | Language of teaching: | CS |
Guarantors: | Kuželka O. | Completion: | Z,ZK |
Lecturers: | Kuželka O., Šír G., Železný F. | Credits: | 6 |
Tutors: | Too many persons | Semester: | L |
Web page: https://cw.fel.cvut.cz/wiki/courses/smu/start
Annotation:
This course consists of four parts. The first part explains methods through which an intelligent agent can learn by interacting with its environment, known as reinforcement learning, including deep reinforcement learning. The second part focuses on Bayesian networks, specifically on methods for inference. The third part covers fundamental topics from natural language processing, starting from the basics and ending with state-of-the-art architectures such as the transformer. The last part provides an introduction to several topics from computational learning theory, including the online and batch learning settings.
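To give a concrete flavour of the first part, below is a minimal value-iteration sketch for a toy Markov decision process. The two-state MDP, its transition probabilities, and its rewards are invented purely for illustration and are not taken from the course materials.

```python
# Value iteration on a tiny, hypothetical MDP (illustrative only, not course material).
# Transition model: P[state][action] -> list of (probability, next_state, reward).
P = {
    "s0": {
        "stay": [(1.0, "s0", 0.0)],
        "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    },
    "s1": {
        "stay": [(1.0, "s1", 0.5)],
        "go":   [(1.0, "s0", 0.0)],
    },
}
GAMMA = 0.9  # discount factor


def value_iteration(P, gamma, tol=1e-8):
    """Repeat the Bellman optimality update until the value function stops changing."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Q-value of each action: expected immediate reward plus discounted future value.
            q_values = [
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values()
            ]
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V


print(value_iteration(P, GAMMA))
```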
Course outlines:
1. Reinforcement Learning - Markov decision processes
2. Reinforcement Learning - Model-free policy evaluation
3. Reinforcement Learning - Model-free control
4. Reinforcement Learning - Deep reinforcement learning
5. Bayesian Networks - Intro
6. Bayesian Networks - Variable elimination, importance sampling
7. Natural Language Processing 1
8. Natural Language Processing 2
9. Natural Language Processing 3
10. Natural Language Processing 4
11. Computational Learning Theory 1
12. Computational Learning Theory 2
13. Computational Learning Theory 3
14. Course Wrap Up
Exercises outline:
1. Reinforcement Learning - Markov decision processes
2. Reinforcement Learning - Model-free policy evaluation
3. Reinforcement Learning - Model-free control
4. Reinforcement Learning - Deep reinforcement learning
5. Bayesian Networks - Intro
6. Bayesian Networks - Variable elimination, importance sampling
7. Natural Language Processing 1
8. Natural Language Processing 2
9. Natural Language Processing 3
10. Natural Language Processing 4
11. Computational Learning Theory 1
12. Computational Learning Theory 2
13. Computational Learning Theory 3
14. Course Wrap Up
Literature:
R. S. Sutton, A. G. Barto: Reinforcement Learning: An Introduction. MIT Press, 2018.
D. Jurafsky, J. H. Martin: Speech and Language Processing (3rd edition draft).
M. J. Kearns, U. Vazirani: An Introduction to Computational Learning Theory. MIT Press, 1994.
Requirements:
Students can obtain a maximum of 100 points, which is the sum of the project score and the exam score. A minimum of 25 (out of 50) exam points is required to pass the exam. A minimum of 25 (out of 50) project points is required to obtain the assessment.
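A small sketch of the grading rule above: the total is the sum of project points (max 50) and exam points (max 50), with at least 25 of each needed for the assessment and the exam, respectively. The function name and return format are illustrative, not official.

```python
def course_result(project_points: float, exam_points: float) -> dict:
    """Apply the stated grading thresholds; thresholds taken from the requirements above."""
    assessment = project_points >= 25      # out of 50 project points
    exam_passed = exam_points >= 25        # out of 50 exam points
    total = project_points + exam_points   # out of 100 points
    return {
        "total_points": total,
        "assessment": assessment,
        "exam_passed": exam_passed,
        "completed": assessment and exam_passed,
    }


print(course_result(project_points=38, exam_points=27))
```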
Keywords:
Reinforcement learning, Bayesian networks, natural language processing, computational learning theory
The subject is included in these academic programs:
Program | Branch | Role | Recommended semester |
---|---|---|---|
MPBIO1_2018 | Bioinformatics | PV | 2 |
MPOI7_2018 | Artificial Intelligence | PO | 2 |
MPBIO2_2018 | Medical Instrumentation | PV | 2 |
MPBIO3_2018 | Image processing | PV | 2 |
MPOI9_2018 | Data Science | PO | 2 |
MPOI8_2018 | Bioinformatics | PO | 2 |
MPBIO4_2018 | Signal processing | PV | 2 |