OP: Lisrelchen

【Apache Spark】Redshift Data Source for Apache Spark



Redshift Data Source for Apache Spark

Hidden content in this post:

spark-redshift-master.zip (277.11 KB)


A library to load data into Spark SQL DataFrames from Amazon Redshift, and write them back to Redshift tables. Amazon S3 is used to efficiently transfer data in and out of Redshift, and JDBC is used to automatically trigger the appropriate COPY and UNLOAD commands on Redshift.

This library is better suited to ETL than to interactive queries, since large amounts of data may be extracted to S3 for each query execution. If you plan to run many queries against the same Redshift tables, we recommend saving the extracted data in a format such as Parquet, as sketched below.
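As a concrete illustration, here is a minimal read-then-cache sketch in Scala. The JDBC URL, table name, and S3 paths are placeholders to substitute with your own; the data source name com.databricks.spark.redshift follows from the artifact coordinates given under Installation.

    import org.apache.spark.sql.{SaveMode, SparkSession}

    val spark = SparkSession.builder().appName("redshift-example").getOrCreate()

    // Read a Redshift table into a DataFrame. Per the description above, the data
    // is staged through S3 (tempdir) and the UNLOAD is triggered over JDBC.
    val df = spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://redshifthost:5439/mydb?user=user&password=pass") // placeholder
      .option("dbtable", "my_table")             // placeholder table name
      .option("tempdir", "s3n://my-bucket/tmp/") // placeholder S3 staging path
      .load()

    // If the same data will be queried repeatedly, persist it as Parquet rather
    // than re-extracting it from Redshift on every query (the recommendation above).
    df.write.mode(SaveMode.Overwrite).parquet("s3n://my-bucket/cache/my_table")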

Installation

This library requires Apache Spark 2.0+ and Amazon Redshift 1.0.963+.

For a version that works with Spark 1.x, please check the 1.x branch.

You may use this library in your applications with the following dependency information:

Scala 2.10

groupId: com.databricks
artifactId: spark-redshift_2.10
version: 3.0.0-preview1

Scala 2.11

groupId: com.databricks
artifactId: spark-redshift_2.11
version: 3.0.0-preview1
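For SBT users, these coordinates translate to a single dependency line; the %% operator appends the _2.10 or _2.11 suffix matching your project's scalaVersion:

    libraryDependencies += "com.databricks" %% "spark-redshift" % "3.0.0-preview1"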

You will also need to provide a JDBC driver that is compatible with Redshift. Amazon recommends that you use their driver, which is distributed as a JAR hosted on Amazon's website. This library has also been tested successfully with the Postgres JDBC driver.
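One way to satisfy this at build time is sketched below, using the Postgres driver mentioned above; the artifact version is an assumption and should be matched to your environment. (Amazon's own driver is downloaded from their website rather than resolved from Maven Central, so it would instead be placed on the classpath as a JAR.)

    // build.sbt -- assumption: using the PostgreSQL JDBC driver noted above;
    // pin whichever version matches your environment
    libraryDependencies += "org.postgresql" % "postgresql" % "9.4.1212"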

Note on Hadoop versions: This library depends on spark-avro, which should be downloaded automatically because it is declared as a dependency. You may, however, need to provide the avro-mapred dependency that matches your Hadoop distribution. In most deployments this dependency is provided automatically by your cluster's Spark assemblies, and no additional action is required.

Note on Amazon SDK dependency: This library declares a provided dependency on components of the AWS Java SDK. In most cases, these libraries will be provided by your deployment environment. However, if you get ClassNotFoundExceptions for Amazon SDK classes, then you will need to add explicit dependencies on com.amazonaws.aws-java-sdk-core and com.amazonaws.aws-java-sdk-s3 as part of your build / runtime configuration. See the comments in project/SparkRedshiftBuild.scala for more details, and the sketch that follows.
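If those ClassNotFoundExceptions do appear, a minimal SBT sketch for adding the two SDK modules explicitly might look like the following; the version shown is an assumption and should match your deployment:

    // build.sbt -- assumption: choose the AWS SDK version your cluster expects
    libraryDependencies ++= Seq(
      "com.amazonaws" % "aws-java-sdk-core" % "1.10.22",
      "com.amazonaws" % "aws-java-sdk-s3"   % "1.10.22"
    )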

Snapshot builds

Master snapshot builds of this library are built using jitpack.io. In order to use these snapshots in your build, you'll need to add the JitPack repository to your build file.

  • In Maven:

    <repositories>
      <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
      </repository>
    </repositories>

    then

    <dependency>
      <groupId>com.github.databricks</groupId>
      <artifactId>spark-redshift_2.10</artifactId>
      <!-- For Scala 2.11, use spark-redshift_2.11 instead -->
      <version>master-SNAPSHOT</version>
    </dependency>
  • In SBT:

    resolvers += "jitpack" at "https://jitpack.io"

    then

    libraryDependencies += "com.github.databricks" %% "spark-redshift" % "master-SNAPSHOT"


Keywords: Apache Spark apache Source Spark shift execution library tables write

#2 soccy posted on 2017-4-18 06:56:45

#3 MouJack007 posted on 2017-4-18 07:07:11
Thanks for sharing, OP!

#4 MouJack007 posted on 2017-4-18 07:07:44
