OP: ReneeBK

Intel’s BigDL on Databricks

Intel recently released its BigDL project for distributed deep learning on Apache Spark. BigDL has native Spark integration, allowing it to leverage Spark during model training, prediction, and tuning. This blog post gives highlights of BigDL and a tutorial showing how to get started with BigDL on Databricks.

Hidden attachment: Intel’s BigDL on Databricks.pdf (263.59 KB)



2
ReneeBK posted on 2017-5-28 02:47:50
Setting up BigDL on Databricks

First, here are the steps to make BigDL available in a Databricks notebook:

1. Build the BigDL jar by following the instructions on the BigDL build page. It is recommended to use Java 8, Spark 2.0 and Scala 2.11.

2. Set up a Databricks cluster using the same Spark and Scala versions as those used to build the jar file. While setting up a cluster in the Databricks Cluster UI will work, to take full advantage of BigDL (i.e. make it run fast), we can set up a cluster via the Databricks REST API and incorporate the recommended BigDL and Spark settings. Here is an example call that can be made from a terminal (from the settings in 1, 2, 3):

curl -n -H "Content-Type: application/json" -X POST -d @- https://YOURACCOUNT.cloud.databricks.com/api/2.0/clusters/create <<JSON
{
  "cluster_name": "bigdl-test",
  "spark_version": "2.0.1-db1-scala2.11",
  "aws_attributes": {
    "availability": "SPOT",
    "zone_id": "us-west-2c"
  },
  "node_type_id": "c3.8xlarge",
  "num_workers": 1,
  "spark_conf": {
    "spark.shuffle.blockTransferService": "nio",
    "spark.scheduler.minRegisteredResourcesRatio": "1.0",
    "spark.akka.frameSize": "64",
    "spark.task.maxFailures": "1",
    "spark.executorEnv.DL_ENGINE_TYPE": "mklblas",
    "spark.executorEnv.MKL_DISABLE_FAST_MM": "1",
    "spark.executorEnv.KMP_BLOCKTIME": "0",
    "spark.executorEnv.OMP_WAIT_POLICY": "passive",
    "spark.executorEnv.OMP_NUM_THREADS": "1",
    "spark.executorEnv.DL_CORE_NUMBER": "32",
    "spark.executorEnv.DL_NODE_NUMBER": "1"
  },
  "spark_env_vars": {
    "OMP_NUM_THREADS": 1,
    "KMP_BLOCKTIME": 0,
    "OMP_WAIT_POLICY": "passive",
    "DL_ENGINE_TYPE": "mklblas"
  }
}
JSON

3. Import the library (bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar) into Databricks and attach the library to your cluster (see the Databricks guide).

Now we are ready to use BigDL on Databricks.
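
Before moving on, a quick way to confirm that the attached jar is visible to the notebook is to resolve one of its classes. This is a minimal sanity-check sketch, not part of the original post:

import com.intel.analytics.bigdl.utils.Engine

// If the library was imported and attached correctly, the import above resolves
// and the class name prints instead of failing with a ClassNotFoundException.
println(Engine.getClass.getName)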

3
ReneeBK posted on 2017-5-28 02:48:37
Initialization

We start by initializing the BigDL Engine and setting up some parameters. The BigDL Engine expects two parameters:

- nodeNumber: the number of executor nodes
- coreNumber: the number of cores per executor node (expected to be uniform across executors)

BigDL will launch one task across all executors, where each executor runs a multi-threaded operation processing a part of the data.

We also need to specify batchSize, which will be used later to split up the data into mini-batches. BigDL requires the batch size to be a multiple of nodeNumber * coreNumber (see the sketch after this snippet).

import com.intel.analytics.bigdl.utils.Engine

val nodeNumber = 1
val coreNumber = 4
val mult = 64
val batchSize = nodeNumber * coreNumber * mult
Engine.init(nodeNumber, coreNumber, true /* env == "spark" */)
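
As a minimal sketch (not in the original post), the multiple-of constraint on the batch size can be made explicit with a plain require, so a bad value fails fast instead of surfacing as an error inside BigDL:

// batchSize must be divisible by nodeNumber * coreNumber; with the values above,
// 256 % (1 * 4) == 0, so this passes.
require(batchSize % (nodeNumber * coreNumber) == 0,
  s"batchSize ($batchSize) must be a multiple of nodeNumber * coreNumber (${nodeNumber * coreNumber})")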

4
ReneeBK posted on 2017-5-28 02:49:07
Preparing the data

First we download the MNIST training and test data, unzip, and upload to DBFS (a sketch of that step follows the snippet below).

val dir = "/tmp/mnist-byte"
val dbfsDir = s"dbfs:$dir"

val trainDataFile = "train-images-idx3-ubyte"
val trainLabelFile = "train-labels-idx1-ubyte"
val validationDataFile = "t10k-images-idx3-ubyte"
val validationLabelFile = "t10k-labels-idx1-ubyte"
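
The download/unzip/upload step itself is not shown in the post. A minimal sketch of it, run from a notebook cell, might look like the following; the MNIST mirror URL and the use of a driver-local staging directory are assumptions, not part of the original:

import sys.process._

// Stage the files on the driver's local filesystem first.
Seq("mkdir", "-p", dir).!

for (f <- Seq(trainDataFile, trainLabelFile, validationDataFile, validationLabelFile)) {
  // Download each gzipped MNIST file and unzip it to the byte-file name used above.
  // (URL is an assumption; any MNIST mirror hosting the standard .gz files works.)
  Seq("bash", "-c",
    s"curl -sSL http://yann.lecun.com/exdb/mnist/$f.gz -o $dir/$f.gz && gunzip -f $dir/$f.gz").!
}

// Copy the unzipped byte files from the driver's local filesystem into DBFS.
dbutils.fs.cp(s"file:$dir", dbfsDir, recurse = true)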

5
ReneeBK posted on 2017-5-28 02:50:14
Training: learning the neural network parameters

With the data loaded, we'll run Optimizer.optimize() to learn the parameters for the LeNet network model. We can specify a couple of parameters for SGD (learningRate, maxEpoch). Tweaking the values from the BigDL lenet example, setting the learning rate proportional to batchSize seems to work well (the arithmetic is spelled out after the snippet).

Note: The training step may take a while on smaller nodes (e.g. in the Community Edition). Using a larger instance and deploying a cluster via the REST API (see the example at the beginning of the notebook) will speed up training significantly.

import com.intel.analytics.bigdl.models.lenet._
import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.optim._
import com.intel.analytics.bigdl.utils._

val state = T("learningRate" -> 0.05 / 4 * mult)
val maxEpoch = 15

val initialModel = LeNet5(10)  // 10 digit classes
val optimizer = Optimizer(
  model = initialModel,
  dataset = trainSet,          // MNIST training DataSet built from the byte files above
  criterion = ClassNLLCriterion[Float]())
val trainedModel = optimizer
  .setValidation(
    trigger = Trigger.everyEpoch,
    dataset = validationSet,   // MNIST test DataSet, evaluated at the end of every epoch
    vMethods = Array(new Top1Accuracy))
  .setState(state)
  .setEndWhen(Trigger.maxEpoch(maxEpoch))
  .optimize()
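
To make the "learning rate proportional to batchSize" point concrete, here is the arithmetic with the values used above (a sketch, not from the original post):

// With nodeNumber = 1, coreNumber = 4 and mult = 64:
//   batchSize    = 1 * 4 * 64    = 256
//   learningRate = 0.05 / 4 * 64 = 0.8
// i.e. the 0.05 base value is scaled up linearly with mult, and hence with the batch size.
val scaledLearningRate = 0.05 / 4 * mult
println(s"batchSize = $batchSize, learningRate = $scaledLearningRate")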

6
ReneeBK posted on 2017-5-28 02:50:35
Evaluating the learned model

BigDL provides a set of metrics to evaluate the model via the Validator and ValidationMethod classes. Here is an example of how to use them.

import com.intel.analytics.bigdl.optim.{LocalValidator, Top1Accuracy, Validator}

val validator = Validator(trainedModel, validationSet)
val result = validator.test(Array(new Top1Accuracy[Float]))
result.foreach(r => {
  println(s"${r._2} is ${r._1}")   // prints each ValidationMethod together with its result
})
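
As a further usage sketch (not in the original post), the trained model can also be applied to an individual example via forward; the random tensor below merely stands in for a real, normalized 28 x 28 MNIST image:

import com.intel.analytics.bigdl.tensor.Tensor

val sample = Tensor[Float](28, 28).rand()   // placeholder for one normalized MNIST image
// LeNet5 reshapes its input to 1 x 28 x 28 internally and ends with LogSoftMax,
// so the output here is a length-10 tensor of log-probabilities.
val logProbs = trainedModel.forward(sample).asInstanceOf[Tensor[Float]]
val (maxLogProb, predicted) = logProbs.max(1)  // 1-based index of the most likely digit
println(s"predicted digit index: $predicted")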

7
ReneeBK posted on 2017-5-28 02:50:58
Cleaning up

Clean up the MNIST data files from DBFS if no longer needed (an optional listing sketch follows).

dbutils.fs.rm(dbfsDir, true)
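
Optionally, before running the removal above, a quick listing confirms what will be deleted (a small sketch, not in the original post):

// Show the MNIST byte files staged earlier before removing the whole directory.
dbutils.fs.ls(dbfsDir).foreach(f => println(f.path))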

8
钱学森64 posted on 2017-5-28 10:59:20
Thanks for sharing.

