Big Data Technologies


Big Data Technologies - B0M33BDT

Credits: 4
Semesters: Winter
Completion: Assessment + Examination
Language of teaching: Czech
Extent of teaching: 2P+1C (2 hours of lectures + 1 hour of exercises per week)
Annotation
The objective of this elective course is to familiarize students with new trends and technologies for storing, managing, and processing Big Data. The course focuses on methods for data extraction and analysis, as well as on the selection of hardware infrastructure for managing persistent and streamed data, such as data from social networks. As part of the course, we show how to apply traditional artificial intelligence and machine learning methods to Big Data analysis.
Study targets
The goal of the course is to demonstrate the basic methods for processing Big Data on practical examples. The examples focus on statistical data processing.
Course outlines
1. Introduction, Big Data processing motivation, requirements
2. Hadoop overview - all components and how they work together
i) Hadoop Common: The common utilities that support the other Hadoop modules.
ii) Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
iii) Hadoop YARN: A framework for job scheduling and cluster resource management.
iv) Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
3. Introduction to MapReduce, how to use the pre-installed data. Basic skeleton for a word histogram (word count) job in Java; see the sketch after this outline.
4. HDFS, NoSQL databases, HBase, Cassandra, SQL access, Hive
5. What Mahout is and which basic algorithms it provides
6. Streamed data - real-time processing
7. Twitter data processing, a simple sentiment algorithm (see the sketch below)
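
Below is a minimal sketch of the word-histogram (word count) skeleton from lecture 3, written against the standard Hadoop MapReduce API. The class name and the input/output paths are illustrative, not the course's reference solution.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordHistogram {

    // Mapper: emits (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word histogram");
        job.setJarByClass(WordHistogram.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // combiner pre-aggregates map output locally
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS directory with text files
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The job is packaged into a jar and submitted with the hadoop launcher, e.g. hadoop jar wordhistogram.jar WordHistogram /user/student/input /user/student/output (paths again illustrative). The single-thread variant used in exercise 2 amounts to counting tokens in an ordinary HashMap.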
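
For the Twitter topic in lecture 7, one very simple sentiment algorithm is a lexicon-based score: count positive and negative words in a tweet and take the difference. The sketch below illustrates the idea in plain Java; the tiny word lists are made up for the example and are not the lexicon used in the course.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SimpleSentiment {

    // Tiny illustrative lexicons; a real exercise would use much larger lists.
    private static final Set<String> POSITIVE =
            new HashSet<>(Arrays.asList("good", "great", "happy", "love", "excellent"));
    private static final Set<String> NEGATIVE =
            new HashSet<>(Arrays.asList("bad", "terrible", "sad", "hate", "awful"));

    // Returns a positive number for positive tweets and a negative one for negative tweets.
    public static int score(String tweet) {
        int score = 0;
        for (String token : tweet.toLowerCase().split("\\W+")) {
            if (POSITIVE.contains(token)) score++;
            if (NEGATIVE.contains(token)) score--;
        }
        return score;
    }

    public static void main(String[] args) {
        System.out.println(score("I love this great course"));   // 2
        System.out.println(score("terrible weather, I am sad")); // -2
    }
}

Such a scorer could be placed inside the map phase of a MapReduce job so that a stream of tweets is scored in parallel.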
Exercises outlines
1. Cloud computing: the OpenStack cluster, basic commands, virtualization.
2. Installing Hadoop: hardware and software requirements, administration (creating access), introduction to the basic setup of our cluster, monitoring. Run the word histogram in a single thread.
3. The bag-of-words model, TF-IDF, running SVD and LDA (a small TF-IDF sketch follows this list).
4. Data manipulation, how to scale HDFS up and down, how to run and monitor computation progress, how to organize the computation (see the HDFS sketch after this list).
5. Run a random forest classification task using the Mahout algorithms and show how much faster the MapReduce implementation is compared to a single thread on one machine.
6. Semester work presentation and credit award (zápočet)
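
For exercise 3, a toy TF-IDF computation can be sketched in plain Java as below. It uses one common variant of the weighting, tf-idf(t, d) = tf(t, d) * ln(N / df(t)); the three-document "corpus" in main is made up purely for illustration.

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

public class TfIdfToy {

    // tf-idf(t, d) = tf(t, d) * ln(N / df(t)), one common variant of the weighting.
    public static Map<String, Double> tfIdf(List<String[]> docs, int docIndex) {
        int n = docs.size();

        // Document frequency: in how many documents does each term appear?
        Map<String, Integer> df = new HashMap<>();
        for (String[] doc : docs) {
            for (String term : new HashSet<>(List.of(doc))) {
                df.merge(term, 1, Integer::sum);
            }
        }

        // Raw term frequency in the selected document.
        Map<String, Integer> tf = new HashMap<>();
        for (String term : docs.get(docIndex)) {
            tf.merge(term, 1, Integer::sum);
        }

        // Combine term frequency with the inverse document frequency.
        Map<String, Double> weights = new HashMap<>();
        for (Map.Entry<String, Integer> e : tf.entrySet()) {
            double idf = Math.log((double) n / df.get(e.getKey()));
            weights.put(e.getKey(), e.getValue() * idf);
        }
        return weights;
    }

    public static void main(String[] args) {
        List<String[]> corpus = List.of(
                new String[]{"big", "data", "hadoop"},
                new String[]{"big", "data", "mahout"},
                new String[]{"streamed", "data"});
        // "hadoop" appears only in document 0, so it gets a non-zero weight;
        // "data" appears in every document, so its idf (and weight) is 0.
        System.out.println(tfIdf(corpus, 0));
    }
}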
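
For the data-manipulation part of exercise 4, files are moved between the local disk and HDFS. A minimal sketch using the standard Hadoop FileSystem Java API follows; the paths are illustrative and the code assumes the cluster configuration (core-site.xml) is on the classpath. The same operations are available from the shell via hdfs dfs -put, -ls and -get.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBasics {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS and related settings from core-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Upload a local text file into HDFS (illustrative paths).
        fs.copyFromLocalFile(new Path("data/words.txt"), new Path("/user/student/input/words.txt"));

        // List what is stored under the input directory.
        for (FileStatus status : fs.listStatus(new Path("/user/student/input"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        // Download a result file produced by a MapReduce job back to the local disk.
        fs.copyToLocalFile(new Path("/user/student/output/part-r-00000"), new Path("result.txt"));

        fs.close();
    }
}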

Literature
Tom White: Hadoop: The Definitive Guide, 4th Edition, O'Reilly Media, 2015
Requirements
Seminars will be run in the standard way. We assume that students will bring their own computers for editing scripts; calculations will be executed on a computer cluster with remote access. For practical exercises, students will use a pre-loaded text database. The seminars will focus on the practical application of the technology to specific examples. Two short tests on the subject matter are scheduled during the semester.