Original poster: Lisrelchen

【Reading Notes】Learning Spark: Lightning-Fast Big Data Analysis



Hidden content of this post:

https://github.com/gaoxuesong/learning-spark-lightning-fast-big-data-analysis





2
Lisrelchen posted on 2017-2-15 11:08:05
  1. Learning Spark: Lightning-Fast Big Data Analysis reading notes

  2. These reading notes for the book Learning Spark: Lightning-Fast Big Data Analysis are intended solely for Spark developers' educational purposes. The notes are on GitHub: https://github.com/gaoxuesong/learning-spark-lightning-fast-big-data-analysis

  3. These Chinese reading notes for Learning Spark: Lightning-Fast Big Data Analysis grew purely out of a personal interest in Spark and are for study only. They are shared on GitHub: https://github.com/gaoxuesong/learning-spark-lightning-fast-big-data-analysis

  4. About the Authors

  5. Holden Karau is a software development engineer at Databricks and is active in open source. She is the author of an earlier Spark book. Prior to Databricks she worked on a variety of search and classification problems at Google, Foursquare, and Amazon. She graduated from the University of Waterloo with a Bachelor of Mathematics in Computer Science. Outside of software she enjoys playing with fire, welding, and hula hooping.

  6. Most recently, Andy Konwinski co-founded Databricks. Before that he was a PhD student and then a postdoc in the AMPLab at UC Berkeley, focused on large-scale distributed computing and cluster scheduling. He co-created and is a committer on the Apache Mesos project. He also worked with systems engineers and researchers at Google on the design of Omega, their next-generation cluster scheduling system. More recently, he developed and led the AMP Camp Big Data Bootcamps and the first Spark Summit, and has been contributing to the Spark project.

  7. Patrick Wendell is an engineer at Databricks as well as a Spark committer and PMC member. In the Spark project, Patrick has acted as release manager for several Spark releases, including Spark 1.0. Patrick also maintains several subsystems of Spark's core engine. Before helping start Databricks, Patrick obtained an M.S. in Computer Science at UC Berkeley. His research focused on low-latency scheduling for large-scale analytics workloads. He holds a B.S.E. in Computer Science from Princeton University.

  8. Matei Zaharia is the creator of Apache Spark and CTO at Databricks. He holds a PhD from UC Berkeley, where he started Spark as a research project. He now serves as the project's Vice President at Apache. Apart from Spark, he has made research and open source contributions to other projects in the cluster computing area, including Apache Hadoop (where he is a committer) and Apache Mesos (which he also helped start at Berkeley).


3
Lisrelchen posted on 2017-2-15 11:08:45
"""
>>> from pyspark.context import SparkContext
>>> sc = SparkContext('local', 'test')
>>> b = sc.parallelize([1, 2, 3, 4])
>>> basicAvg(b)
2.5
"""

import sys

from pyspark import SparkContext


def partitionCtr(nums):
    """Compute a (sum, count) pair for one partition."""
    sumCount = [0, 0]
    for num in nums:
        sumCount[0] += num
        sumCount[1] += 1
    return [sumCount]


def combineCtrs(c1, c2):
    """Merge two (sum, count) pairs."""
    return (c1[0] + c2[0], c1[1] + c2[1])


def basicAvg(nums):
    """Compute the average: build per-partition (sum, count) pairs with
    mapPartitions, then merge them with reduce."""
    sumCount = nums.mapPartitions(partitionCtr).reduce(combineCtrs)
    return sumCount[0] / float(sumCount[1])


if __name__ == "__main__":
    master = "local"
    if len(sys.argv) == 2:
        master = sys.argv[1]
    sc = SparkContext(master, "Sum")
    nums = sc.parallelize([1, 2, 3, 4])
    avg = basicAvg(nums)
    print(avg)
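A side note (my own sketch, not from the book's repository): the same (sum, count) pair can be built with RDD.aggregate(), which takes a zero value, a per-element function, and a cross-partition combiner, so there is no need to materialize a per-partition list:

# assumes an existing SparkContext named sc, as in the snippet above
nums = sc.parallelize([1, 2, 3, 4])
sumCount = nums.aggregate(
    (0, 0),                                    # zero value: (sum, count)
    lambda acc, x: (acc[0] + x, acc[1] + 1),   # fold one element into the accumulator
    lambda a, b: (a[0] + b[0], a[1] + b[1]))   # merge accumulators from different partitions
print(sumCount[0] / float(sumCount[1]))        # 2.5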


4
Lisrelchen posted on 2017-2-15 11:09:17
"""
>>> from pyspark.context import SparkContext
>>> sc = SparkContext('local', 'test')
>>> b = sc.parallelize([1, 2, 3, 4])
>>> basicAvg(b)
2.5
"""

import sys

from pyspark import SparkContext


def basicAvg(nums):
    """Compute the average: map each element to (value, 1) and fold the
    pairs into a single (sum, count) pair."""
    sumCount = nums.map(lambda x: (x, 1)).fold(
        (0, 0), (lambda x, y: (x[0] + y[0], x[1] + y[1])))
    return sumCount[0] / float(sumCount[1])


if __name__ == "__main__":
    master = "local"
    if len(sys.argv) == 2:
        master = sys.argv[1]
    sc = SparkContext(master, "Sum")
    nums = sc.parallelize([1, 2, 3, 4])
    avg = basicAvg(nums)
    print(avg)
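Note that fold, unlike reduce, needs a zero value that acts as an identity for the combining function, which is why (0, 0) is used here. As a quick sanity check (my own sketch, not from the repository), PySpark's built-in mean() action gives the same answer:

# assumes an existing SparkContext named sc
nums = sc.parallelize([1, 2, 3, 4])
print(nums.mean())  # 2.5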


5
Lisrelchen posted on 2017-2-15 11:10:39
"""
>>> from pyspark.context import SparkContext
>>> sc = SparkContext('local', 'test')
>>> b = sc.parallelize([1, 2, 3, 4])
>>> sorted(basicSquareNoOnes(b).collect())
[4, 9, 16]
"""

import sys

from pyspark import SparkContext


def basicSquareNoOnes(nums):
    """Square the numbers, then filter out any result equal to 1."""
    return nums.map(lambda x: x * x).filter(lambda x: x != 1)


if __name__ == "__main__":
    master = "local"
    if len(sys.argv) == 2:
        master = sys.argv[1]
    sc = SparkContext(master, "BasicFilterMap")
    nums = sc.parallelize([1, 2, 3, 4])
    output = sorted(basicSquareNoOnes(nums).collect())
    for num in output:
        print("%i " % num)
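Design note (my own sketch, not from the repository): because filter comes after map, the x != 1 test runs on the squared values. For this input the same result is obtained by filtering first, which avoids squaring the element that is about to be dropped:

# assumes an existing SparkContext named sc
nums = sc.parallelize([1, 2, 3, 4])
print(sorted(nums.filter(lambda x: x != 1).map(lambda x: x * x).collect()))  # [4, 9, 16]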


6
Lisrelchen posted on 2017-2-15 11:12:09
"""
>>> from pyspark.context import SparkContext
>>> sc = SparkContext('local', 'test')
>>> b = sc.parallelize([1, 2, 3, 4])
>>> sorted(basicSquare(b).collect())
[1, 4, 9, 16]
"""

import sys

from pyspark import SparkContext


def basicSquare(nums):
    """Square the numbers."""
    return nums.map(lambda x: x * x)


if __name__ == "__main__":
    master = "local"
    if len(sys.argv) == 2:
        master = sys.argv[1]
    sc = SparkContext(master, "BasicMap")
    nums = sc.parallelize([1, 2, 3, 4])
    output = sorted(basicSquare(nums).collect())
    for num in output:
        print("%i " % num)
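One caveat worth adding (my own note, not from the repository): map is a lazy transformation, so nothing executes until an action such as collect() runs, and collect() pulls the whole result back to the driver. For a large RDD, a bounded action like take() is a safer way to spot-check results:

# assumes an existing SparkContext named sc
squares = sc.parallelize(range(1, 1000001)).map(lambda x: x * x)
print(squares.take(5))  # [1, 4, 9, 16, 25] -- only five elements come back to the driver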


7
franky_sas posted on 2017-2-15 14:24:19


8
钱学森64 posted on 2017-2-15 16:54:22
Thanks for sharing!


9
zfeiyafei posted on 2017-2-25 22:40:16
Got it, thanks.


10
gangyaocn posted on 2017-2-25 23:36:41
How is this book?

