
[Free Data Analysis Ebook Download] Fast Data Processing with Spark pdf_download_mobi


Overview
Implement Spark's interactive shell to prototype distributed applications
Deploy Spark jobs to various clusters such as Mesos, EC2, Chef, YARN, EMR, and so on
Use Shark's SQL query-like syntax with Spark
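Deploying to the clusters listed above mostly comes down to pointing the driver at a different cluster manager. The following is a minimal sketch against the Spark 1.x-era Scala API that this book targets; the host names, ports, and application name are placeholders, not taken from the book.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative only: the same driver code can target different cluster managers
// simply by changing the master URL. Host names below are placeholders.
val conf = new SparkConf()
  .setAppName("MyApp")
  .setMaster("local[4]")              // prototype locally on 4 threads
  // .setMaster("spark://host:7077")  // stand-alone cluster (e.g. one launched on EC2)
  // .setMaster("mesos://host:5050")  // Mesos cluster
  // .setMaster("yarn-client")        // YARN, Spark 1.x client mode

val sc = new SparkContext(conf)
```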
In Detail
Spark is a framework for writing fast, distributed programs. It solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.
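To make that functional-style API concrete, here is a minimal word-count sketch in Scala against the classic SparkContext API; the input and output paths are hypothetical and the program is a generic illustration, not an example from the book.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // pair-RDD functions such as reduceByKey (needed on pre-1.3 Spark)

object WordCount {
  def main(args: Array[String]): Unit = {
    // "local[2]" runs Spark on two local threads; swap in a cluster master URL to scale out
    val sc = new SparkContext("local[2]", "WordCount")

    // The whole pipeline is a chain of functional transformations over an in-memory RDD
    val counts = sc.textFile("input.txt")          // hypothetical input path
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("counts")                // hypothetical output directory
    sc.stop()
  }
}
```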
Fast Data Processing with Spark covers how to write distributed, MapReduce-style programs with Spark. The book will guide you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.
Fast Data Processing with Spark covers everything from setting up your Spark cluster in a variety of situations (stand-alone, EC2, and so on), to how to use the interactive shell to write distributed code interactively. From there, we move on to cover how to write and deploy distributed jobs in Java, Scala, and Python.
We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Hive with Spark through Shark's SQL-like query syntax, as well as how to manipulate resilient distributed datasets (RDDs).
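As a flavour of that workflow, the snippet below sketches the kind of RDD manipulation you can type straight into spark-shell, where a SparkContext is already bound to the variable sc; it is a generic illustration, not an excerpt from the book.

```scala
// In spark-shell a SparkContext is already available as `sc`.
val nums    = sc.parallelize(1 to 1000)        // distribute a local collection as an RDD
val squares = nums.map(n => n * n)             // lazy transformation, nothing runs yet
val evens   = squares.filter(_ % 2 == 0)       // transformations can be chained freely

println(evens.count())                         // an action finally triggers the computation
println(evens.take(5).mkString(", "))          // pull a small sample back to the driver
```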
What you will learn from this book
Prototype distributed applications with Spark's interactive shell
Learn different ways to interact with Spark's distributed representation of data (RDDs)
Load data from the various data sources
Query Spark with a SQL-like query syntax
Integrate Shark queries with Spark programs
Effectively test your distributed software
Tune a Spark installation
Install and set up Spark on your cluster
Work effectively with large data sets
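On the testing point in the list above, a common pattern is to run the very same RDD logic against a local, in-process master inside an ordinary unit test. The sketch below assumes ScalaTest is on the classpath; the suite name and data are purely illustrative, not from the book.

```scala
import org.scalatest.FunSuite
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // pair-RDD functions on older Spark versions

// Illustrative test: the same RDD logic you deploy to a cluster
// can be exercised against an in-process "local" master.
class WordCountSuite extends FunSuite {
  test("reduceByKey sums word occurrences") {
    val sc = new SparkContext("local[2]", "WordCountSuite")
    try {
      val counts = sc.parallelize(Seq("a", "b", "a"))
        .map(word => (word, 1))
        .reduceByKey(_ + _)
        .collectAsMap()
      assert(counts("a") === 2)
      assert(counts("b") === 1)
    } finally {
      sc.stop()  // always release the local Spark resources, even if an assertion fails
    }
  }
}
```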
Approach
This book will be a basic, step-by-step tutorial, which will help readers take advantage of all that Spark has to offer.
Who this book is written for
Fast Data Processing with Spark is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have faced problems too large to handle on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of Java, Scala, or Python.


Fast_Data_Processing_with_Spark_2nd_Edition.zip (8.82 MB) This attachment includes:
  • Fast_Data_Processing_with_Spark_2nd_Edition.pdf


Keywords: Spark, data analysis, free ebook download, Fast_Data_Processing_with_Spark_pdf, Fast_Data_Processing_with_Spark_download, Fast_Data_Processing_with_Spark_mobi
