Krishna Sankar
Learn how to use Spark to process big data at speed and scale for sharper analytics. Put the principles into practice for faster, slicker big data projects.
When people want a way to process big data at speed, Spark is invariably the solution. With its ease of development (compared to the relative complexity of Hadoop), it's unsurprising that it's becoming popular with data analysts and engineers everywhere.
Beginning with the fundamentals, we'll show you how to get set up with Spark with minimum fuss. You'll then get to grips with some simple APIs before investigating machine learning and graph processing. Throughout, we'll make sure you know exactly how to apply your knowledge.
You will also learn how to use the Spark shell and load data, then find out how to build and run your own Spark applications. Discover how to manipulate your RDDs and get stuck into a range of DataFrame APIs. As if that's not enough, you'll also learn some useful machine learning algorithms with the help of Spark MLlib and how to integrate Spark with R. We'll also make sure you're confident and prepared for graph processing as you learn more about the GraphX API.
Table of Contents
1: INSTALLING SPARK AND SETTING UP YOUR CLUSTER
2: USING THE SPARK SHELL
3: BUILDING AND RUNNING A SPARK APPLICATION
4: CREATING A SPARKSESSION OBJECT
5: LOADING AND SAVING DATA IN SPARK
6: MANIPULATING YOUR RDD
7: SPARK 2.0 CONCEPTS
8: SPARK SQL
9: FOUNDATIONS OF DATASETS/DATAFRAMES – THE PROVERBIAL WORKHORSE FOR DATA SCIENTISTS
10: SPARK WITH BIG DATA
11: MACHINE LEARNING WITH SPARK ML PIPELINES
12: GRAPHX