Big Data processing using Apache Spark

During the training, you will learn how to use the Apache Spark framework to quickly process large amounts of data.

Purpose of training

This course provides an introduction to the Apache Spark architecture and can be conducted in Scala or Python. It covers the full process of building a Spark application: integrating with a data source, processing the data, optimizing the job, and saving the results to a database in a cloud environment.
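As a rough illustration of that workflow, a minimal Spark application in Scala could read from a source, transform the data, and write the result to a cloud-hosted database. This is only a sketch: the application name, storage path, table name, and JDBC connection details below are hypothetical placeholders, not part of the course material.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sales-report") // hypothetical application name
      .getOrCreate()

    // Integration with the source: read raw events from cloud storage (the path is a placeholder)
    val events = spark.read
      .option("header", "true")
      .csv("s3a://example-bucket/raw/events/")

    // Data processing: aggregate revenue per country
    val revenuePerCountry = events
      .groupBy(col("country"))
      .agg(sum(col("amount").cast("double")).as("total_revenue"))

    // Saving to a database in the cloud: the JDBC URL, table, and credentials are placeholders
    revenuePerCountry.write
      .format("jdbc")
      .option("url", "jdbc:postgresql://example-host:5432/reports")
      .option("dbtable", "revenue_per_country")
      .option("user", "report_user")
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```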

You will become familiar with the Spark API and learn how to write Spark jobs that solve both common and domain-specific problems. We will discuss optimizations, the most frequent challenges, and ways to overcome them. The training focuses mainly on practical skills.
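One example of the kind of optimization discussed is broadcasting a small lookup table to avoid a shuffle during a join. The sketch below uses tiny, made-up DataFrames purely for illustration; the names and sample rows are assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object BroadcastJoinExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("join-optimization").getOrCreate()
    import spark.implicits._

    // Hypothetical data: a "large" fact table and a small lookup table
    val orders = Seq((1, "PL", 100.0), (2, "DE", 250.0)).toDF("order_id", "country_code", "amount")
    val countries = Seq(("PL", "Poland"), ("DE", "Germany")).toDF("code", "name")

    // Broadcasting the small table lets Spark skip shuffling the larger `orders` DataFrame
    val enriched = orders.join(broadcast(countries), orders("country_code") === countries("code"))

    enriched.show()
    spark.stop()
  }
}
```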

40% theory, 60% practical workshops

The training can be conducted in the Client’s office or online.

Duration:

2-3 days.

Who this training is for

Developers and business analysts who want to learn Apache Spark. Basic knowledge of Python or Scala is recommended.


Contact

Ask about this training

contact@datumo.pl
+48 789 566 177
Dziekońskiego 1 Street
00-728 Warsaw