Original poster: ReneeBK

【Use Case】Approximate quantiles in Apache Spark


Introduction

Apache Spark is fast, but applications such as preliminary data exploration need to be even faster and are willing to sacrifice some accuracy for a faster result. Since version 1.6, Spark implements approximate algorithms for some common tasks: counting the number of distinct elements in a set, testing whether an element belongs to a set, and computing basic statistics for a large set of numbers. Eugene Zhulenev, from Collective, has already blogged in these pages about the use of approximate counting in the advertising business.

The following algorithms have been implemented against DataFrames and Datasets and committed into Apache Spark’s branch-2.0, so they will be available in Apache Spark 2.0 for Python, R, and Scala:

  • approxCountDistinct: returns an estimate of the number of distinct elements
  • approxQuantile: returns approximate percentiles of numerical data

Researchers have studied such algorithms for a long time. Spark strives to implement approximate algorithms that are deterministic (they do not depend on random numbers to work) and that have proven theoretical error bounds: for each algorithm, the user can specify a target error bound, and the result is guaranteed to be within this bound, either exactly (deterministic error bounds) or with very high confidence (probabilistic error bounds). It is also important that these algorithms work well for the wealth of use cases seen in the Spark community.

In this blog, we present details on the implementation of the approxCountDistinct and approxQuantile algorithms and showcase their implementation in a Databricks notebook. A minimal usage sketch of both APIs follows below.
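As a quick orientation, here is a minimal sketch of how both APIs are called from PySpark 2.0. The DataFrame df and its column name "x" are placeholders (not from the original post), and the error parameters are illustrative only:

from pyspark.sql.functions import approxCountDistinct

# Estimate of the number of distinct values in column "x";
# rsd is the maximum allowed relative standard deviation of the estimate.
df.agg(approxCountDistinct("x", rsd=0.05)).show()

# Approximate median and 99th percentile of column "x";
# the last argument is the target relative error (0.0 would force an exact computation).
median, p99 = df.approxQuantile("x", [0.5, 0.99], 0.01)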

Hidden content of this post:

Approximate Algorithms in Apache Spark.pdf (247.22 KB)



Keywords: Apache Spark, Approximate quantiles, quantile, Approx

Reply #1: ReneeBK, posted 2017-5-28 03:10:07
# Load a sample of the Amazon dataset and register it as a SQL table
dataset = sqlContext.read.load("/databricks-datasets/amazon/data20K")
sqlContext.registerDataFrameAsTable(dataset, "amazon_partitioned")

# A few imports
import numpy as np
import time
import pandas as pd
from pyspark.sql.functions import expr
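Building on the table loaded above, a hedged sketch of the approximate-quantile call this thread is about. The column name "rating" is only an assumption about the data20K schema, and the timing pattern reuses the time module imported above:

# Approximate median and 90th percentile of the (assumed) numeric column "rating",
# tolerating a relative error of up to 1%.
start = time.time()
quantiles = dataset.approxQuantile("rating", [0.5, 0.9], 0.01)
print(quantiles, "computed in", time.time() - start, "seconds")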


Reply #2: ReneeBK, posted 2017-5-28 03:12:22
Approximate distinct counts

import numpy as np
from pyspark.sql.functions import approxCountDistinct
import time

# Load the list of users and register it as a SQL table
user_dataset = sqlContext.read.format("csv").load("/databricks-datasets/amazon/users").toDF("user")
sqlContext.registerDataFrameAsTable(user_dataset, "transaction_users")

# Cache the user column and count the rows (exact)
users = sqlContext.sql("select user from transaction_users").cache()
users.count()
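As a follow-up to the exact count above, a hedged sketch comparing an exact distinct count with approxCountDistinct at a 1% target relative standard deviation. The column name "user" comes from the toDF call above; the timing pattern is illustrative, not from the original post:

import time
from pyspark.sql.functions import approxCountDistinct, countDistinct

# Exact number of distinct users (baseline, typically slow on large data)
start = time.time()
exact = users.agg(countDistinct("user")).collect()[0][0]
print("exact:", exact, "in", time.time() - start, "s")

# Approximate distinct count with a 1% target relative standard deviation
start = time.time()
approx = users.agg(approxCountDistinct("user", rsd=0.01)).collect()[0][0]
print("approx:", approx, "in", time.time() - start, "s")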
