Course description - BE0M33BDT

BE0M33BDT Big Data Technologies
Role:
Extent of teaching: 2P+1C
Department: 13136
Language of instruction: EN
Guarantors: Hučín J.
Completion: Z,ZK (assessment + exam)
Lecturers: Hučín J., Paščenko P., Sušický M.
Credits: 4
Tutors: many persons
Semester: Z (winter)

Web page:

https://cw.fel.cvut.cz/wiki/courses/BE0M33BDT

Annotation:

The objective of this elective course is to familiarize students with new trends and technologies for the storage, management, and processing of Big Data. The course focuses on methods for data extraction and analysis, as well as on the selection of hardware infrastructure for managing persistent and streamed data, such as data from social networks. As part of the course we will show how to apply traditional methods of artificial intelligence and machine learning to Big Data analysis.

Study objectives:

The goal of the course is to demonstrate the basic methods for processing Big Data on practical examples. The examples will focus on statistical data processing.

Lecture syllabus:

1. Introduction, Big Data processing motivation, requirements
2. Hadoop overview - all components and how they work together
i) Hadoop Common: The common utilities that support the other Hadoop modules.
ii) Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
iii) Hadoop YARN: A framework for job scheduling and cluster resource management.
iv) Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
3. Introduction to MapReduce, how to use pre-installed data. Basic skeleton for computing a word histogram (word count) in Java; see the sketch after this list.
4. HDFS, NoSQL databases, HBase, Cassandra, SQL access, Hive
5. Apache Mahout: what it is and which basic algorithms it provides
6. Streamed data - real-time processing
7. Twitter data processing, simple sentiment algorithm
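
As an illustration of lecture topic 3, the following is a minimal sketch of the word-histogram (word count) MapReduce job in Java, assuming the Hadoop 2.x+ org.apache.hadoop.mapreduce API; the class name WordHistogram and the command-line input/output paths are illustrative only.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordHistogram {

      // Mapper: emits (word, 1) for every token of the input line
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts collected for each word
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word histogram");
        job.setJarByClass(WordHistogram.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // combiner is optional
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }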

Exercise syllabus:

1. Cloud computing: the OpenStack cluster, basic commands, virtualization.
2. Installing Hadoop: hardware and software requirements, administration (creating access), introduction to the basic setup of our cluster, monitoring. Run the word histogram in a single thread.
3. The bag-of-words model, TF-IDF; running SVD and LDA.
4. Data manipulation, how to scale HDFS up and down, how to run and monitor computation progress, how to organize the computation; see the HDFS sketch after this list.
5. Run a random forest classification task using the Mahout algorithms and show how much faster the MapReduce implementation is compared to a single thread on one machine.
6. Presentation of semester projects and awarding of credit.
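
To illustrate exercise 4, the sketch below shows basic data manipulation in HDFS from Java through the org.apache.hadoop.fs.FileSystem API, assuming the cluster configuration (core-site.xml / hdfs-site.xml) is available on the classpath; the paths /user/student/input and data.txt are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);       // connects to the configured fs.defaultFS

        Path dir = new Path("/user/student/input"); // hypothetical HDFS directory
        fs.mkdirs(dir);                             // create it if missing

        // upload a local file into HDFS
        fs.copyFromLocalFile(new Path("data.txt"), new Path(dir, "data.txt"));

        // list directory contents with file sizes
        for (FileStatus status : fs.listStatus(dir)) {
          System.out.println(status.getPath() + "  " + status.getLen() + " B");
        }

        fs.close();
      }
    }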

Literature:

White, Tom: Hadoop: The Definitive Guide, 4th Edition. O'Reilly Media, 2015.

Requirements:

Seminars will be run in the standard way. We assume that students will bring their own computers for editing scripts. Calculations will be executed on a computer cluster with remote access. For practical exercises, students will use a pre-loaded text database. The seminars will focus on the practical application of the technology to specific examples. Two short tests on the subject matter are scheduled during the semester.

Keywords:

Big Data, Hadoop, Machine learning

The course is included in the following study plans:

Plan    Specialization    Role    Recommended semester

