This is the code repository for Apache Spark with Scala - Learn Spark from a Big Data Guru [Video], published by Packt. It contains all the supporting project files necessary to work through the video course from start to finish.
This course covers the fundamentals of Apache Spark and teaches you everything you need to know about developing Spark applications with Scala. By the end of this course, you will have in-depth knowledge of Apache Spark and the general big data analysis and manipulation skills to help your company adopt Apache Spark for building a big data processing pipeline and data analytics applications. The course works through 10+ hands-on big data examples and teaches you how to frame data analysis problems as Spark problems. Together we will aggregate NASA Apache web logs from different sources; explore price trends in California real estate data; write Spark applications to find the median salary of developers in different countries from the Stack Overflow survey data; develop a system to analyze how makerspaces are distributed across different regions in the United Kingdom; and much more.

This course is taught in Scala, a next-generation functional programming language that is growing in popularity and is one of the most widely used languages in the industry for writing Spark programs. Let's learn how to write Spark programs with Scala to model big data problems today!
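To give a flavor of framing a data analysis problem as a Spark problem, here is a minimal sketch in the spirit of the web-log aggregation example. The file paths and the local master setting are illustrative assumptions, not the course's actual solution:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object UnionLogsSketch {
  def main(args: Array[String]): Unit = {
    // Local mode for experimentation; the course also covers cluster deployment.
    val conf = new SparkConf().setAppName("UnionLogsSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical inputs: one access-log file per source, one request per line.
    val julyLogs = sc.textFile("in/access_log_july.txt")
    val augustLogs = sc.textFile("in/access_log_august.txt")

    // "How many requests across both sources?" becomes a union plus an action.
    val allLogs = julyLogs.union(augustLogs)
    println(s"Total log lines: ${allLogs.count()}")

    sc.stop()
  }
}
```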
- An overview of the architecture of Apache Spark.
- Work with Apache Spark's primary abstraction, resilient distributed datasets (RDDs), to process and analyze large data sets.
- Develop Apache Spark 2.0 applications using RDD transformations, actions, and Spark SQL (see the first sketch after this list).
- Scale up Spark applications on a Hadoop YARN cluster through Amazon's Elastic MapReduce service.
- Analyze structured and semi-structured data using Datasets and DataFrames, and develop a thorough understanding of Spark SQL (second sketch below).
- Share information across different nodes on an Apache Spark cluster using broadcast variables and accumulators (third sketch below).
- Advanced techniques to optimize and tune Apache Spark jobs by partitioning, caching, and persisting RDDs (fourth sketch below).
- Best practices for working with Apache Spark in the field.
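The four sketches below are minimal, self-contained illustrations of these topics, not the course's own solutions; input paths and sample data are assumptions. First, RDD transformations and actions, using the classic word count:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("WordCountSketch").setMaster("local[*]"))

    // Transformations (flatMap, map, reduceByKey) are lazy: they only build
    // up a lineage of work to be done later.
    val counts = sc.textFile("in/sample.txt")  // hypothetical input file
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // collect() is an action: it triggers the actual computation.
    counts.collect().foreach { case (word, n) => println(s"$word: $n") }

    sc.stop()
  }
}
```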
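Second, the Dataset/DataFrame API entered through SparkSession. For simplicity this computes an average rather than the median used in the course, and the CSV file and column names are assumptions:

```scala
import org.apache.spark.sql.SparkSession

object SalarySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SalarySketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical CSV with a header row and "country" and "salary" columns.
    val responses = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("in/survey.csv")

    // Structured queries compile down to the same optimized execution plan
    // whether written as DataFrame operations or as SQL.
    responses.groupBy("country").avg("salary").show()

    spark.stop()
  }
}
```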
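Third, the two kinds of shared variables. A broadcast variable ships a read-only value to every executor once; an accumulator aggregates counts from all tasks back to the driver:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SharedVariablesSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("SharedVariablesSketch").setMaster("local[*]"))

    // Hypothetical lookup table, broadcast once rather than captured per task.
    val regionByPrefix = sc.broadcast(Map("AB" -> "Scotland", "B" -> "West Midlands"))
    // Accumulator: count lookup misses safely across tasks.
    val misses = sc.longAccumulator("unmatched prefixes")

    val prefixes = sc.parallelize(Seq("AB", "B", "ZZ"))
    val regions = prefixes.map { p =>
      regionByPrefix.value.getOrElse(p, { misses.add(1); "Unknown" })
    }

    regions.collect().foreach(println)
    println(s"Unmatched: ${misses.value}")
    sc.stop()
  }
}
```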
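Finally, caching and persisting. Persisting an RDD that several actions reuse avoids recomputing its lineage each time:

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.{SparkConf, SparkContext}

object CachingSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("CachingSketch").setMaster("local[*]"))

    val evens = sc.parallelize(1 to 1000000).filter(_ % 2 == 0)

    // MEMORY_ONLY is the level cache() uses; persist() lets you choose others
    // (e.g. MEMORY_AND_DISK) when the data does not fit in memory.
    evens.persist(StorageLevel.MEMORY_ONLY)

    println(s"count = ${evens.count()}")  // first action: computes and caches
    println(s"max   = ${evens.max()}")    // second action: served from the cache

    sc.stop()
  }
}
```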
To fully benefit from the coverage included in this course, you will need:
- N/A

This course has the following software requirements:
- N/A