Poster: hubq

Spark Machine Learning


hubq posted on 2016-3-3 11:23:17

Original post:
http://hubqoaing.github.io/2016/03/03/SparkMLlibClassification

Welcome to follow my blog.

Excerpt:

[BigData-Spark] Classification using Spark. By Boqiang Hu on 03 March 2016 | View on Github

Classification using Spark

Learning notes for Machine Learning with Spark.

Besides, thanks to Zeppelin. Although it is not as user-friendly as RStudio or Jupyter, it really makes learning Spark much easier.

1. Data Loading from HDFS

First, download the data from https://www.kaggle.com/c/stumbleupon.

Then strip the header row and upload the data to HDFS:

tail -n +2 train.tsv > train_noheader.tsv
hdfs dfs -mkdir hdfs://tanglab1:9000/user/hadoop/stumbleupon
hdfs dfs -put train_noheader.tsv hdfs://tanglab1:9000/user/hadoop/stumbleupon
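As a quick sanity check that `tail -n +2` strips exactly the header line, here is a small sketch on a made-up sample file (the file and its contents are hypothetical, for illustration only):

```shell
# Create a tiny TSV with a header row, then strip it the same way as above.
printf 'col_a\tcol_b\nrow1\t2\nrow2\t4\n' > sample.tsv
tail -n +2 sample.tsv > sample_noheader.tsv
wc -l < sample_noheader.tsv    # 2 data rows remain
head -n 1 sample_noheader.tsv  # first data row, not the header
```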
val rawData = sc.textFile("/user/hadoop/stumbleupon/train_noheader.tsv")
val records = rawData.map(line => line.split("\t"))
records.first()
2. Data Processing

Select the label column (the last column) and the feature columns (columns 5 through the second-to-last). Clean the data, converting missing values ("?") to 0.0, then store each label and feature vector as an MLlib LabeledPoint.

Since the naive Bayes model does not accept negative input values, also build a version of the data with negative inputs converted to 0.

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors

val data = records.map { r =>
    val trimmed = r.map(_.replaceAll("\"", ""))
    val label = trimmed(r.size - 1).toInt
    val features = trimmed.slice(4, r.size - 1).map(d =>
        if (d == "?") 0.0 else d.toDouble)
    LabeledPoint(label, Vectors.dense(features))
}

val nbData = records.map { r =>
    val trimmed = r.map(_.replaceAll("\"", ""))
    val label = trimmed(r.size - 1).toInt
    val features = trimmed.slice(4, r.size - 1).map(d =>
        if (d == "?") 0.0 else d.toDouble).map(d => if (d < 0.0) 0.0 else d)
    LabeledPoint(label, Vectors.dense(features))
}

data.cache
data.count
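To see the cleaning rules in isolation, here is a minimal pure-Scala sketch on one made-up raw row (the field values are hypothetical, not taken from the real dataset; no Spark needed):

```scala
// A hypothetical raw row: 4 leading text columns, 3 feature columns, 1 label.
val r = Array("\"http://example\"", "\"4042\"", "\"business\"", "\"0.79\"",
              "\"2.055\"", "\"?\"", "\"-0.5\"", "\"1\"")
val trimmed = r.map(_.replaceAll("\"", ""))  // strip surrounding quotes
val label = trimmed(r.size - 1).toInt        // last column is the label
// Columns 5 .. second-to-last; "?" (missing) becomes 0.0.
val features = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
// Extra pass for naive Bayes: clamp negatives to 0.
val nbFeatures = features.map(d => if (d < 0.0) 0.0 else d)
println(features.mkString(","))    // 2.055,0.0,-0.5
println(nbFeatures.mkString(","))  // 2.055,0.0,0.0
```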
3. Model Training

Import the required modules, then define the parameters the models need.

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.classification.NaiveBayes
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Entropy

val numIterations = 10
val maxTreeDepth = 5
3.1 Training logistic regression

val lrModel = LogisticRegressionWithSGD.train(data, numIterations)
val dataPoint = data.first
val prediction = lrModel.predict(dataPoint.features)
val trueLabel = dataPoint.label
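Checking one point against its true label generalizes naturally to overall accuracy: count the points whose prediction matches the label and divide by the total. Spark aside, the computation is plain Scala; the (prediction, label) pairs below are hypothetical stand-ins for what lrModel.predict would produce over data:

```scala
// Hypothetical (prediction, label) pairs; in Spark this would come from
// data.map(p => (lrModel.predict(p.features), p.label)).
val predsAndLabels = Seq((1.0, 1.0), (0.0, 1.0), (1.0, 1.0), (0.0, 0.0))
val numCorrect = predsAndLabels.count { case (p, l) => p == l }
val accuracy = numCorrect.toDouble / predsAndLabels.size
println(accuracy)  // 0.75
```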



