无量天尊Spark posted on 2016-3-20 22:19
> sc <- sparkR.init(sparkPackages="com.databricks:spark-csv_2.11:1.0.3")
Launching java with spark-submit command e:\spark-1.4.1-bin-hadoop2.6\spark-1.4.1-bin-hadoop2.6/bin/spark-submit.cmd --packages com.databricks:spark-csv_2.11:1.0.3 sparkr-shell C:\Users\ADMINI~1\AppData\Local\Temp\RtmpkjBC4I\backend_port14c4b1617cb
> sqlContext <- sparkRSQL.init(sc)
> people <- read.df(sqlContext, "e:/sample_submission.csv", "csv")
Error: not all returnStatus == 0 are TRUE
> people <- read.df(sqlContext, "e:/test.csv", "csv")
Error: not all returnStatus == 0 are TRUE
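
Two details in the session above are likely causing the failure, though without the backend log this is a hedged guess. First, Spark 1.4.1 is built against Scala 2.10, so the `spark-csv_2.11` artifact probably does not match the running Spark; the `_2.10` artifact should be used. Second, in Spark 1.x `read.df` does not recognize the short source name `"csv"` (that alias only arrived with the built-in CSV reader in Spark 2.0); the external package must be named by its full data-source identifier, `"com.databricks.spark.csv"`. A minimal corrected session, under those assumptions, would look like:

```r
library(SparkR)

# Assumption: Spark 1.4.1 ships with Scala 2.10, so request the _2.10 artifact
sc <- sparkR.init(sparkPackages = "com.databricks:spark-csv_2.10:1.0.3")
sqlContext <- sparkRSQL.init(sc)

# In Spark 1.x the spark-csv package must be addressed by its full
# data-source name, not the short alias "csv"
people <- read.df(sqlContext, "e:/sample_submission.csv",
                  source = "com.databricks.spark.csv",
                  header = "true")  # treat the first row as column names

head(people)
```

This snippet needs a local Spark 1.4.x installation (with `SPARK_HOME` set) and internet access for `--packages` to resolve the spark-csv jar, so it cannot run standalone.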