Intel recently released its BigDL project for distributed deep learning on Apache Spark. BigDL has native Spark integration, allowing it to leverage Spark during model training, prediction, and tuning. This blog post highlights key features of BigDL and provides a tutorial showing how to get started with BigDL on Databricks.
First, here are the steps to make BigDL available in a Databricks notebook:
Build the BigDL jar by following the instructions on the BigDL build page. It is recommended to use Java 8, Spark 2.0 and Scala 2.11.
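For reference, the build typically looks like the following. This is a sketch based on the BigDL build page; it assumes Maven and Java 8 are installed and that the make-dist.sh script accepts a Spark profile flag:

```bash
# Sketch of the BigDL build (assumes Maven + Java 8 on the PATH).
git clone https://github.com/intel-analytics/BigDL.git
cd BigDL
bash make-dist.sh -P spark_2.0   # profile flag for a Spark 2.0 build
# The assembled jar (with dependencies) should appear under dist/lib/
```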
Set up a Databricks cluster using the same Spark and Scala versions as those used to build the jar file. While setting up a cluster in the Databricks Cluster UI will work, to take full advantage of BigDL (i.e., make it run fast) we can set up a cluster via the Databricks REST API and incorporate the recommended BigDL and Spark settings. Here is an example call that can be made from a terminal, based on those recommended settings:
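The sketch below uses the Databricks REST API 2.0 clusters/create endpoint. The shard URL, credentials, instance type, worker count, and spark_version key are placeholders for your deployment, and the spark_conf entries reflect the Spark properties recommended for BigDL (as in conf/spark-bigdl.conf in the BigDL repo):

```bash
curl -X POST -u <user>:<password> \
  https://<your-shard>.cloud.databricks.com/api/2.0/clusters/create \
  -d '{
    "cluster_name": "bigdl-cluster",
    "spark_version": "2.0.x-scala2.11",
    "node_type_id": "r3.xlarge",
    "num_workers": 4,
    "spark_conf": {
      "spark.shuffle.reduceLocality.enabled": "false",
      "spark.shuffle.blockTransferService": "nio",
      "spark.scheduler.minRegisteredResourcesRatio": "1.0",
      "spark.speculation": "false"
    }
  }'
```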
We start by initializing the BigDL Engine and setting up some parameters. The BigDL Engine expects two parameters:
- nodeNumber: the number of executor nodes
- coreNumber: the number of cores per executor node (expected to be uniform across executors)
BigDL will launch one multi-threaded task on each executor, and each task processes a portion of the data using all of that executor's cores.
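Here is a minimal sketch of the initialization, assuming the early BigDL Scala API in which Engine.init takes the node count, the core count, and an on-Spark flag; the cluster dimensions are hypothetical:

```scala
import com.intel.analytics.bigdl.utils.Engine

val nodeNumber = 4 // hypothetical: executor nodes in the cluster
val coreNumber = 8 // hypothetical: cores per executor (uniform)

// Assumed signature from the early BigDL API: the final flag tells
// the Engine it is running on Spark rather than on a single node.
Engine.init(nodeNumber, coreNumber, true)
```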
We also need to specify a batchSize, which will be used later to split the data into mini-batches. BigDL requires the batch size to be a multiple of nodeNumber * coreNumber.
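For example, with the hypothetical cluster dimensions above:

```scala
// 4 executors x 8 cores => batchSize must be a multiple of 32.
// Choosing 4 samples per core per iteration gives a batch of 128.
val batchSize = nodeNumber * coreNumber * 4
```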
With the data loaded, we'll run Optimizer.optimize() to learn the parameters of the LeNet network model. We can specify a couple of training parameters: the SGD learningRate and the maxEpoch stopping condition. Tweaking the values from the BigDL lenet example, we found that setting the learning rate proportional to batchSize works well.
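Below is a sketch of what the training call can look like, modeled on the BigDL lenet example. It assumes trainSet, a DataSet of MNIST mini-batches built with the batchSize above, was created in the data-loading step; the learning-rate scaling shown is illustrative rather than a tuned value:

```scala
import com.intel.analytics.bigdl.models.lenet.LeNet5
import com.intel.analytics.bigdl.nn.ClassNLLCriterion
import com.intel.analytics.bigdl.optim.{Optimizer, Trigger}
import com.intel.analytics.bigdl.utils.T

// trainSet is assumed: a DataSet of MNIST mini-batches of size batchSize.
val optimizer = Optimizer(
  model = LeNet5(classNum = 10),       // 10 digit classes
  dataset = trainSet,
  criterion = ClassNLLCriterion[Float]())

val trainedModel = optimizer
  .setState(T("learningRate" -> 0.05 / 128 * batchSize)) // LR scaled with batch size
  .setEndWhen(Trigger.maxEpoch(10))                      // stopping condition
  .optimize()
```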
Note: The training step may take a while on smaller nodes (e.g., in Community Edition). Using a larger instance type and deploying a cluster via the REST API (see the example at the beginning of the notebook) will speed up training significantly.