High Performance Spark
Best Practices for Scaling and Optimizing Apache Spark
By Holden Karau, Rachel Warren
Publisher: O'Reilly Media
Release Date: June 2017
Pages: 175
Apache Spark is amazing when everything clicks. But if you haven't seen the performance improvements you expected, or still don't feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations that help your Spark queries run faster and handle larger data sizes while using fewer resources.

Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you'll also learn how to make it sing.

With this book, you'll explore:
- How Spark SQL's new interfaces improve performance over Spark's RDD data structure
- How to choose between data joins in Core Spark and Spark SQL
- Techniques for getting the most out of standard RDD transformations
- How to work around performance issues in Spark’s key/value pair paradigm
- Writing high-performance Spark code without Scala or the JVM
- How to test for functionality and performance when applying suggested improvements
- Using Spark MLlib and Spark ML machine learning libraries
- Spark’s Streaming components and external community packages
Table of Contents
- Chapter 1 Introduction to High Performance Spark
- Chapter 2 How Spark Works
- Chapter 3 DataFrames, Datasets & Spark SQL
- Chapter 4 Joins (SQL & Core)
- Chapter 5 Effective Transformations
- Chapter 6 Working with Key/Value Data
- Chapter 7 Going Beyond Scala
- Chapter 8 Testing & Validation
- Chapter 9 Spark MLlib and ML
- Chapter 10 Spark Components and Packages
- Appendix Spark Tuning and Cluster Sizing
http://shop.oreilly.com/product/0636920046967.do