Original poster: oliyiyi

Beginners Guide: Apache Spark Machine Learning with Large Data

By Dmitry Petrov, FullStackML.

What if you want to create a machine learning model but realize that your input dataset doesn't fit into your computer's memory? Usually you would use distributed computing tools like Hadoop and Apache Spark and run the computation on a cluster with many machines. However, Apache Spark can also process your data on a local machine in standalone mode, and can even build models when the input dataset is larger than the amount of memory your computer has. In this blog post, I'll show you an end-to-end scenario with Apache Spark in which we will create a binary classification model from a 34.6 gigabyte input dataset. Run this scenario on your laptop (yes, yours, with its 4-8 gigabytes of memory and 50+ gigabytes of disk space) to test this.


Choose dataset.

1. Input data and expected results
In the previous post we discussed "How To Find Simple And Interesting Multi-Gigabytes Data Set". The Posts.xml file from this dataset will be used in the current post. The file size is 34.6 gigabytes. This xml file contains the stackoverflow.com posts data as xml attributes:

  • Title - post title
  • Body - post text
  • Tags - list of tags for post
  • 10+ more xml-attributes that we won't use.

The full dataset with the stackoverflow.com Posts.xml file is available at https://archive.org/details/stackexchange. Additionally, I created a smaller version of this file with only 10 items/posts in it. This file contains a small sample of the original dataset. The data is licensed under the Creative Commons license (cc-by-sa).

As you might expect, this small file is not the best choice for model training. It is only good for experimenting with your data preparation code. However, the end-to-end Spark scenario from this article works with this small file as well. Please download the file from here.

Our goal is to create a predictive model which predicts post Tags based on Body and Title. To simplify the task and reduce the amount of code, we are going to concatenate Title and Body and use that as a single text column.

It is easy to imagine how this model would work on the stackoverflow.com web site: the user types a question and the web site automatically suggests tags.

Assume that we need as many correct tags as possible and that the user would remove the unnecessary tags. Because of this assumption we are choosing recall as a high priority target for our model.

2. Binary and multi-label classification
The stackoverflow tag prediction problem is a multi-label classification one, because the model should predict many classes, which are not exclusive. The same text might be classified as both “Java” and “Multithreading”. Note that multi-label classification is a generalization of the multi-class classification problem, which predicts exactly one class from a set of classes.

To keep this first Apache Spark example simple and reduce the amount of code, let’s simplify our problem. Instead of training a multi-label classifier, let’s train a simple binary classifier for a given tag. For instance, for the tag “Java” one classifier will be created which can predict whether a post is about the Java language.

By using this simple approach, separate classifiers might be created for almost all frequent labels (Java, C++, Python, multithreading, etc.). This approach is simple and good for studying. However, it is not perfect in practice: by splitting the predictive model into separate classifiers, you ignore the correlations between classes, and training many classifiers might be computationally expensive. A sketch of this per-tag approach is shown below.
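As a rough illustration of that per-tag approach, here is a minimal sketch (my own addition, not from the original article) that trains one binary model per frequent tag. It assumes the postsDf data frame built in section 5 and the pipeline defined in section 7 are already in scope; frequentTags is just a hypothetical list of tags.

// Sketch: one independent binary classifier per frequent tag.
// Assumes `postsDf` (section 5) and `pipeline` (section 7) already exist.
val frequentTags = Seq("java", "c++", "python", "multithreading")

val modelsPerTag = frequentTags.map { tag =>
  // Label a post 1.0 if its Tags string contains the current tag, else 0.0
  val labelUdf = udf((tags: String) => if (tags.contains(tag)) 1.0 else 0.0)
  val labeled = postsDf.withColumn("Label", labelUdf(col("Tags")))
  tag -> pipeline.fit(labeled)   // train one independent model per tag
}.toMap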
3. Setup and Run Apache Spark in a standalone mode
If you don’t have Apache Spark on your machine you can simply download it from the Spark web page http://spark.apache.org/. Please use version 1.5.1. Direct link to a pre-built version – http://d3kbcqa49mib13.cloudfront.net/spark-1.5.1-bin-hadoop2.6.tgz

You are ready to run Spark in standalone mode if Java is installed on your computer. If not, install Java first.

For Unix systems and Macs, uncompress the file and copy it to any directory. This is your Spark directory now.

Run spark master:
sbin/start-master.sh
Run spark slave:
sbin/start-slaves.sh
Run Spark shell:
bin/spark-shell
The Spark shell can run your Scala commands in interactive mode.
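As a quick sanity check (my own addition, not from the original post), you can confirm inside the shell that the pre-created SparkContext is alive and connected to the standalone master:

// Quick checks inside spark-shell; `sc` and `sqlContext` are created for you.
sc.version                        // should print 1.5.1
sc.master                         // e.g. spark://<your-host>:7077 for the standalone master
sc.parallelize(1 to 100).sum()    // runs a tiny job; should return 5050.0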

Windows users can find the instructions here: http://nishutayaltech.blogspot.in/2015/04/how-to-run-apache-spark-on-windows7-in.html

If you are working in cluster mode in a Hadoop environment, I’m assuming you already know how to run the Spark shell.

4. Importing libraries
For this end-to-end scenario we are going to use Scala, the primary language for Apache Spark.
// General purpose library
import scala.xml._

// Spark data manipulation libraries
import org.apache.spark.sql.catalyst.plans._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._

// Spark machine learning libraries
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.ml.Pipeline
5. Parsing XML
We need to extract the Title, Body and Tags from the input xml file and create a single data-frame with these columns. First, let’s remove the xml header and footer. I assume that the input file is located in the same directory where you run the spark-shell command.

val fileName = "Posts.small.xml"
val textFile = sc.textFile(fileName)
val postsXml = textFile.map(_.trim).
    filter(!_.startsWith("<?xml version=")).
    filter(_ != "<posts>").
    filter(_ != "</posts>")
Spark has good functions for parsing the JSON and CSV formats. For XML we need to write several additional lines of code to create a data frame by specifying the schema programmatically.

Note that Scala automatically converts xml-escaped codes such as “&lt;a&gt;” back into actual tags such as “<a>”. We are also going to concatenate the title and body, remove all unnecessary tags and newline characters from the body, and collapse duplicated spaces.

val postsRDD = postsXml.map { s =>
    val xml = XML.loadString(s)
    val id = (xml \ "@Id").text
    val tags = (xml \ "@Tags").text
    val title = (xml \ "@Title").text
    val body = (xml \ "@Body").text
    val bodyPlain = ("<\\S+>".r).replaceAllIn(body, " ")
    val text = (title + " " + bodyPlain).replaceAll("\n", " ").replaceAll("( )+", " ")
    Row(id, tags, text)
}
To create a data-frame, a schema should be applied to the RDD.

val schemaString = "Id Tags Text"
val schema = StructType(
    schemaString.split(" ").map(fieldName =>
        StructField(fieldName, StringType, true)))
val postsDf = sqlContext.createDataFrame(postsRDD, schema)
Now you can take a look at your data frame.

postsDf.show()
6. Preparing training and testing datasets
The next step is creating binary labels for the binary classifier. For these code examples, we are using “java” as the label that we would like to predict with a binary classifier. All rows with the “java” tag should be marked as “1” and rows without it as “0”. Let’s identify our target tag “java” and create binary labels based on this tag.

val targetTag = "java"
val myudf: (String => Double) = (str: String) =>
    {if (str.contains(targetTag)) 1.0 else 0.0}
val sqlfunc = udf(myudf)
val postsLabeled = postsDf.withColumn("Label",
    sqlfunc(col("Tags")))

The dataset can be split into negative and positive subsets by using the new label.

val positive = postsLabeled.filter('Label > 0.0)
val negative = postsLabeled.filter('Label < 1.0)
We are going to use 90% of our data for the model training and 10% as a testing dataset. Let’s create a training dataset by sampling the positive and negative datasets separately.

val positiveTrain = positive.sample(false, 0.9)
val negativeTrain = negative.sample(false, 0.9)
val training = positiveTrain.unionAll(negativeTrain)
The testing dataset should include all rows which are not included in the training dataset. And again, we handle positive and negative examples separately.

val negativeTrainTmp = negativeTrain
    .withColumnRenamed("Label", "Flag").select('Id, 'Flag)
val negativeTest = negative.join(negativeTrainTmp,
    negative("Id") === negativeTrainTmp("Id"),
    "LeftOuter").filter("Flag is null")
    .select(negative("Id"), 'Tags, 'Text, 'Label)

val positiveTrainTmp = positiveTrain
    .withColumnRenamed("Label", "Flag")
    .select('Id, 'Flag)
val positiveTest = positive.join(positiveTrainTmp,
    positive("Id") === positiveTrainTmp("Id"),
    "LeftOuter").filter("Flag is null")
    .select(positive("Id"), 'Tags, 'Text, 'Label)

val testing = negativeTest.unionAll(positiveTest)
7. Training a model
Let’s identify training parameters:

  • Number of features
  • Regularization parameter
  • Number of epochs for gradient descent
Spark API creates a model based on columns from the data-frame and the training parameters:

val numFeatures = 64000
val numEpochs = 30
val regParam = 0.02

val tokenizer = new Tokenizer().setInputCol("Text")
    .setOutputCol("Words")
val hashingTF = new org.apache.spark.ml.feature.HashingTF()
    .setNumFeatures(numFeatures)
    .setInputCol(tokenizer.getOutputCol)
    .setOutputCol("Features")
val lr = new LogisticRegression().setMaxIter(numEpochs)
    .setRegParam(regParam).setFeaturesCol("Features")
    .setLabelCol("Label").setRawPredictionCol("Score")
    .setPredictionCol("Prediction")
val pipeline = new Pipeline()
    .setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)
8. Testing a model
This is our final code for the binary “Java” classifier which returns a prediction (0.0 or 1.0):

val testTitle = "Easiest way to merge a release into one JAR file"
val testBody = """Is there a tool or script which easily merges a bunch
 of href="http://en.wikipedia.org/wiki/JAR_%28file_format%29" JAR files
 into one JAR file? A bonus would be to easily set the main-file manifest
 and make it executable. I would like to run it with something like:
 As far as I can tell, it has no dependencies which indicates that it
 shouldn't be an easy single-file tool, but the downloaded ZIP file
 contains a lot of libraries."""
val testText = testTitle + testBody
val testDF = sqlContext
    .createDataFrame(Seq((99.0, testText)))
    .toDF("Label", "Text")
val result = model.transform(testDF)
val prediction = result.collect()(0)(6)
    .asInstanceOf[Double]   // column 6 is the "Prediction" column
print("Prediction: " + prediction)

Let’s evaluate the quality of the model based on the testing dataset.

val testingResult = model.transform(testing)
val testingResultScores = testingResult
    .select("Prediction", "Label").rdd
    .map(r => (r(0).asInstanceOf[Double], r(1).asInstanceOf[Double]))
val bc = new BinaryClassificationMetrics(testingResultScores)
val roc = bc.areaUnderROC
print("Area under the ROC: " + roc)

If you use the small dataset, the quality of your model is probably not the best. The area under the ROC will be very low (close to 50%), which indicates a poor-quality model. With the entire Posts.xml dataset the quality is not so bad: the area under the ROC is 0.64. You can probably improve this result by playing with different transformations, such as TF-IDF and normalization, but that is beyond the scope of this blog post.
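As a hedged sketch of the TF-IDF idea (my own variation, not part of the original post), one way to extend the section 7 pipeline is to insert an IDF stage between the hashing step and the logistic regression, so the classifier trains on TF-IDF weights instead of raw term frequencies. Variable names reuse those from section 7; the "TfIdfFeatures" column name is my own choice.

// Sketch: add an IDF stage to the existing pipeline (assumes tokenizer,
// hashingTF, numEpochs, regParam and training from section 7 are in scope).
import org.apache.spark.ml.feature.IDF

val idf = new IDF()
    .setInputCol(hashingTF.getOutputCol)   // raw term frequencies ("Features")
    .setOutputCol("TfIdfFeatures")

val lrTfIdf = new LogisticRegression()
    .setMaxIter(numEpochs)
    .setRegParam(regParam)
    .setFeaturesCol("TfIdfFeatures")
    .setLabelCol("Label")
    .setRawPredictionCol("Score")
    .setPredictionCol("Prediction")

val tfidfPipeline = new Pipeline()
    .setStages(Array(tokenizer, hashingTF, idf, lrTfIdf))
val tfidfModel = tfidfPipeline.fit(training)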

Conclusion
Apache Spark could be a great option for data processing and for machine learning scenarios if your dataset is larger than your computer memory can hold. It might not be easy to use Spark in a cluster mode within the Hadoop Yarn environment. However, in a local (or standalone) mode, Spark is as simple as any other analytical tool.

Please let me know if you encounter any problems or have further questions. I would really like to hear your feedback.

Bio: Dmitry Petrov, Ph.D. is a Data Scientist at Microsoft. He was previously a researcher at a university.
william9225 (2016-7-27 12:23)
Thanks for sharing.
