Original poster: ReneeBK

Using sparklyr in Databricks


In September 2016, RStudio announced sparklyr, a new R interface to Apache Spark. sparklyr’s interface to Spark follows the popular dplyr syntax. At Databricks, we provide the best place to run Apache Spark and all applications and packages powered by it, from all the languages that Spark supports. sparklyr’s addition to the Spark ecosystem not only complements SparkR but also extends Spark’s reach to new users and communities.

Today, we are happy to announce that sparklyr can be seamlessly used in Databricks clusters running Apache Spark 2.2 or higher. In this blog post, we show how you can install and configure sparklyr in Databricks. We also introduce some of the latest improvements in Databricks R Notebooks.

Hidden content of this post (attachment):

Using sparklyr in Databricks.pdf (201.34 KB)




Reply #1: ReneeBK, posted 2017-5-28 02:04:02
Installing sparklyr

sparklyr is under active development, and new versions with API additions and bug fixes are released regularly. We do not pre-install sparklyr, so that our users can install and enjoy the latest version of the package. You can install the latest development version from GitHub:

    devtools::install_github("rstudio/sparklyr")

Once sparklyr 0.6 is released to CRAN, installation will be even simpler:

    install.packages("sparklyr")
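
The replies below assume a sparklyr connection object sc already exists. As a minimal sketch of what that setup looks like in a Databricks notebook, assuming a sparklyr version (0.6+) whose spark_connect() accepts method = "databricks":

    library(sparklyr)
    # Attach to the cluster's pre-configured Spark session;
    # method = "databricks" tells sparklyr to reuse it instead of
    # launching a new local Spark instance
    sc <- spark_connect(method = "databricks")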


Reply #2: ReneeBK, posted 2017-5-28 02:04:33
Using sparklyr API

After setting up the sparklyr connection, you can use all sparklyr APIs. You can import and combine sparklyr with dplyr or MLlib (a short MLlib sketch follows the example below). You can also use sparklyr extensions; note that if an extension package includes third-party JARs, you may need to install those JARs as libraries in your workspace.

    library(dplyr)
    iris_tbl <- copy_to(sc, iris)

    # ROUND is not an R function: sparklyr leaves it untranslated and
    # passes it through to Spark SQL
    iris_summary <- iris_tbl %>%
      mutate(Sepal_Width = ROUND(Sepal_Width * 2) / 2) %>%
      group_by(Species, Sepal_Width) %>%
      summarize(count = n(),
                Sepal_Length = mean(Sepal_Length),
                stdev = sd(Sepal_Length)) %>%
      collect()

    library(ggplot2)
    ggplot(iris_summary,
           aes(Sepal_Width, Sepal_Length, color = Species)) +
      geom_line(size = 1.2) +
      geom_errorbar(aes(ymin = Sepal_Length - stdev,
                        ymax = Sepal_Length + stdev),
                    width = 0.05) +
      geom_text(aes(label = count),
                vjust = -0.2, hjust = 1.2, color = "black") +
      theme(legend.position = "top")
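
As a companion to the dplyr example, here is a minimal MLlib sketch through sparklyr's ml_* functions. The model choice and formula are our own illustration (using the formula interface of recent sparklyr versions), not part of the original post:

    # Fit a Spark MLlib linear regression on the same Spark DataFrame;
    # sparklyr translates the R formula into MLlib feature columns
    fit <- iris_tbl %>%
      ml_linear_regression(Sepal_Length ~ Petal_Length + Petal_Width)
    summary(fit)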


Reply #3: ReneeBK, posted 2017-5-28 02:05:00
Using SparkR and sparklyr Together

We find SparkR and sparklyr complementary. You can use the packages next to each other in a single notebook or job: simply import SparkR along with sparklyr in a Databricks notebook. The SparkR connection is pre-configured in the notebook, and after importing the package you can start using the SparkR API. Keep in mind, though, that some SparkR functions mask a number of dplyr functions (a sketch of working around the masking follows the output below):

    library(SparkR)

    The following objects are masked from ‘package:dplyr’:

        arrange, between, coalesce, collect, contains, count, cume_dist,
        dense_rank, desc, distinct, explain, filter, first, group_by,
        intersect, lag, last, lead, mutate, n, n_distinct, ntile,
        percent_rank, rename, row_number, sample_frac, select, sql,
        summarize, union
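
Because of the masking above, here is a minimal sketch of using both packages side by side by qualifying the masked dplyr verbs with ::. The iris_tbl carried over from the previous reply is our assumption:

    library(SparkR)
    library(sparklyr)
    library(dplyr)

    # SparkR side: use the notebook's pre-configured SparkR session
    df <- SparkR::createDataFrame(faithful)
    SparkR::count(df)   # row count of the Spark DataFrame

    # dplyr side: qualify the masked verbs so they dispatch on the
    # sparklyr table instead of hitting SparkR's generics
    iris_tbl %>%
      dplyr::filter(Species == "virginica") %>%
      dplyr::summarize(count = n())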
