OP: ReneeBK

[Case Study] Simple Text Classification using Python


from __future__ import print_function

from pyspark import SparkContext
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.sql import Row, SQLContext


  7. """
  8. A simple text classification pipeline that recognizes "spark" from
  9. input text. This is to show how to create and configure a Spark ML
  10. pipeline in Python. Run with:
  11.   bin/spark-submit examples/src/main/python/ml/simple_text_classification_pipeline.py
  12. """


if __name__ == "__main__":
    sc = SparkContext(appName="SimpleTextClassificationPipeline")
    sqlContext = SQLContext(sc)
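    # Instantiating the SQLContext also attaches toDF() to RDDs, which is
    # what lets the parallelized tuples below be converted into DataFrames.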

    # Prepare training documents, which are labeled.
    LabeledDocument = Row("id", "text", "label")
    training = sc.parallelize([(0, "a b c d e spark", 1.0),
                               (1, "b d", 0.0),
                               (2, "spark f g h", 1.0),
                               (3, "hadoop mapreduce", 0.0)]) \
        .map(lambda x: LabeledDocument(*x)).toDF()

    # Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
    tokenizer = Tokenizer(inputCol="text", outputCol="words")
    hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
    lr = LogisticRegression(maxIter=10, regParam=0.001)
    pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
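    # Note: HashingTF uses the hashing trick to map each token straight to an
    # index in a fixed-size term-frequency vector, so no vocabulary is built;
    # distinct tokens may collide into the same bucket.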

    # Fit the pipeline to training documents.
    model = pipeline.fit(training)
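    # fit() pushes the training data through the feature stages and trains the
    # logistic regression, returning a PipelineModel whose transform() replays
    # every stage in order.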

    # Prepare test documents, which are unlabeled.
    Document = Row("id", "text")
    test = sc.parallelize([(4, "spark i j k"),
                           (5, "l m n"),
                           (6, "spark hadoop spark"),
                           (7, "apache hadoop")]) \
        .map(lambda x: Document(*x)).toDF()

    # Make predictions on test documents and print columns of interest.
    prediction = model.transform(test)
    selected = prediction.select("id", "text", "prediction")
    for row in selected.collect():
        print(row)
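    # Each collected row prints as Row(id=..., text=..., prediction=...),
    # where prediction is the predicted class label as a float (0.0 or 1.0).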

    sc.stop()
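
The listing above targets the Spark 1.x API (separate SparkContext and SQLContext, circa 2015). For anyone on Spark 2.x or later, below is a minimal sketch of the same pipeline ported to the SparkSession entry point; the three pipeline stages and their parameters are unchanged, and only the session setup and DataFrame construction differ. Treat it as an untested adaptation for newer Spark versions, not part of the original example.

# Sketch: the same example ported to the Spark 2.x+ SparkSession API
# (assumes a Spark 2.x or later installation; not part of the original post).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer

if __name__ == "__main__":
    # SparkSession replaces the separate SparkContext/SQLContext pair.
    spark = SparkSession.builder \
        .appName("SimpleTextClassificationPipeline") \
        .getOrCreate()

    # createDataFrame() builds the labeled training set directly, without
    # the parallelize/Row/toDF detour needed in Spark 1.x.
    training = spark.createDataFrame(
        [(0, "a b c d e spark", 1.0),
         (1, "b d", 0.0),
         (2, "spark f g h", 1.0),
         (3, "hadoop mapreduce", 0.0)],
        ["id", "text", "label"])

    # Same three stages as the original: tokenizer, hashingTF, and lr.
    tokenizer = Tokenizer(inputCol="text", outputCol="words")
    hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
    lr = LogisticRegression(maxIter=10, regParam=0.001)
    model = Pipeline(stages=[tokenizer, hashingTF, lr]).fit(training)

    # Unlabeled test documents.
    test = spark.createDataFrame(
        [(4, "spark i j k"),
         (5, "l m n"),
         (6, "spark hadoop spark"),
         (7, "apache hadoop")],
        ["id", "text"])

    # Print the id, text, and predicted label for each test document.
    for row in model.transform(test).select("id", "text", "prediction").collect():
        print(row)

    spark.stop()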


