OP: Lisrelchen

ND4J: Scientific Computing on the JVM


ND4J is an Apache 2.0-licensed, open-source scientific computing library for the JVM. It is meant for production environments rather than research, so routines are designed to run fast with minimal RAM requirements.

Please check search.maven.org for the latest version, or use the versions shown in https://github.com/deeplearning4j/dl4j-0.4-examples/blob/master/pom.xml.


Main Features
  • Versatile n-dimensional array object
  • Multiplatform functionality including GPUs
  • Linear algebra and signal processing functions

Specifics

  • Supports GPUs via the CUDA backend nd4j-cuda-7.5 and native CPU execution via nd4j-native.
  • All of this is wrapped in a unifying interface.
  • The API mimics the semantics of NumPy, MATLAB, and scikit-learn (a quick taste follows below).
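
As a quick taste of those NumPy-like semantics, here is a minimal Java sketch; it assumes the 0.4-era API used in the reply below, and the class name Nd4jTaste is ours:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class Nd4jTaste {
    public static void main(String[] args) {
        // the values 1..6 laid out row by row as a 2x3 matrix
        INDArray a = Nd4j.linspace(1, 6, 6).reshape(2, 3);

        INDArray b = a.transpose();   // 3x2
        INDArray c = a.mmul(b);       // 2x2 matrix product, like numpy.dot
        System.out.println(c);

        System.out.println(a.sum(1)); // row sums, like ndarray.sum(axis=1) in NumPy
    }
}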

Modules

Several of these modules are different backend options for ND4J (including GPUs).

  • api = core
  • instrumentation
  • jdbc = Java Database Connectivity
  • jocl-parent = Java bindings for OpenCL
  • scala-api = API for Scala users
  • scala-notebook = Integration with Scala Notebook

Documentation

Documentation is available at nd4j.org. Access the JavaDocs for more detail.

Hidden content in this post:

nd4j-master.zip (3.39 MB)


Reply #1
Lisrelchen, posted 2016-7-1 10:40:40
Implementations with ND4J

Since there are many cases where ND4J alone can be used conveniently, let's briefly get a grasp of how to use ND4J before looking into the explanation of DL4J. If you would like to use ND4J on its own, create a new Maven project and add the following to pom.xml:

<properties>
   <nd4j.version>0.4-rc3.6</nd4j.version>
</properties>

<dependencies>
   <dependency>
       <groupId>org.nd4j</groupId>
       <artifactId>nd4j-jblas</artifactId>
       <version>${nd4j.version}</version>
   </dependency>
   <dependency>
       <groupId>org.nd4j</groupId>
       <artifactId>nd4j-perf</artifactId>
       <version>${nd4j.version}</version>
   </dependency>
</dependencies>
Here, <nd4j.version> describes the version of ND4J; please check whether a newer release is available when you actually implement the code. Also, switching from CPU to GPU is easy while working with ND4J. If you have CUDA 7.0 installed, you just define the artifactId as follows:

<dependency>
   <groupId>org.nd4j</groupId>
   <artifactId>nd4j-jcublas-7.0</artifactId>
   <version>${nd4j.version}</version>
</dependency>
You can change the CUDA version in the artifactId to match your configuration.
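
The native CPU backend mentioned in the opening post (nd4j-native) is selected the same way, purely through the Maven artifact, so the Java code itself does not change. A minimal sketch; whether this artifact name is published for the exact version above is an assumption, so check search.maven.org as noted earlier:

<dependency>
   <groupId>org.nd4j</groupId>
   <artifactId>nd4j-native</artifactId>
   <version>${nd4j.version}</version>
</dependency>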

Let's look at a simple example of the calculations possible with ND4J. The type we use with ND4J is INDArray, an extended array type. We begin by importing the following dependencies:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
Then, we define an INDArray as follows:

INDArray x = Nd4j.create(new double[]{1, 2, 3, 4, 5, 6}, new int[]{3, 2});
System.out.println(x);
Nd4j.create takes two arguments: the first defines the actual values within the INDArray, and the second defines the shape of the vector (matrix). By running this code, you get the following result:

[[1.00,2.00]
 [3.00,4.00]
 [5.00,6.00]]
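
The values fill the requested shape row by row. To make this concrete, here is a hypothetical variation of the same call with the shape swapped to 2 x 3:

// same six values, shaped 2x3 instead of 3x2: rows are filled first
INDArray x2 = Nd4j.create(new double[]{1, 2, 3, 4, 5, 6}, new int[]{2, 3});
System.out.println(x2);
// [[1.00,2.00,3.00]
//  [4.00,5.00,6.00]]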
Since an INDArray can print its values with System.out.println, it's easy to debug. Calculations with scalars can also be done with ease. Add 1 to each element of x as shown here:

System.out.println(x.add(1));

Then you will get the following output (note that add returns a new array; x itself is unchanged):

[[2.00,3.00]
 [4.00,5.00]
 [6.00,7.00]]
Calculations between two INDArrays can also be done easily, as shown in the following example:

INDArray y = Nd4j.create(new double[]{6, 5, 4, 3, 2, 1}, new int[]{3, 2});
Then, basic arithmetic operations can be represented as follows:

x.add(y)
x.sub(y)
x.mul(y)
x.div(y)
These will return the following results, respectively:

[[7.00,7.00]
 [7.00,7.00]
 [7.00,7.00]]

[[-5.00,-3.00]
 [-1.00,1.00]
 [3.00,5.00]]

[[6.00,10.00]
 [12.00,12.00]
 [10.00,6.00]]

[[0.17,0.40]
 [0.75,1.33]
 [2.50,6.00]]
ND4J also has destructive (in-place) arithmetic operators. When you write the x.addi(y) command, x changes its own values, so System.out.println(x); will return the following output:

[[7.00,7.00]
 [7.00,7.00]
 [7.00,7.00]]
Likewise, subi, muli, and divi are also destructive operators. There are many other methods that conveniently perform calculations between vectors or matrices; for more information, refer to http://nd4j.org/documentation.html, http://nd4j.org/doc/, and http://nd4j.org/apidocs/.
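
To make the distinction between destructive and non-destructive operators concrete, here is a minimal sketch (the values are hypothetical):

INDArray a = Nd4j.create(new double[]{1, 2}, new int[]{1, 2});
INDArray b = Nd4j.create(new double[]{10, 10}, new int[]{1, 2});

INDArray c = a.add(b);  // non-destructive: c is [11.00,12.00], a still holds [1.00,2.00]
a.addi(b);              // destructive: a itself becomes [11.00,12.00]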

Let's look at one more example to see how machine learning algorithms can be written with ND4J. We'll implement the simplest one, perceptrons, based on the source code written in Chapter 2, Algorithms for Machine Learning – Preparing for Deep Learning. We set the package name to DLWJ.examples.ND4J and the file (class) name to Perceptrons.java.

First, let's add these two lines to import from ND4J:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
The model has two parameters: the number of units in the input layer and the weight. The former doesn't change from the previous code; the latter, however, is an INDArray instead of an Array:

public int nIn;       // dimensions of input data
public INDArray w;
You can see from the constructor that, since the weight of the perceptron is represented as a vector, the number of rows is set to the number of units in the input layer and the number of columns to 1. This definition is written here:

public Perceptrons(int nIn) {

   this.nIn = nIn;
   w = Nd4j.create(new double[nIn], new int[]{nIn, 1});

}
Then, because we define the model parameter as an INDArray, we also define the demo data, that is, the training data and the test data, as INDArray. You can see these definitions at the beginning of the main method:

INDArray train_X = Nd4j.create(new double[train_N * nIn], new int[]{train_N, nIn});  // input data for training
INDArray train_T = Nd4j.create(new double[train_N], new int[]{train_N, 1});          // output data (label) for training

INDArray test_X = Nd4j.create(new double[test_N * nIn], new int[]{test_N, nIn});  // input data for test
INDArray test_T = Nd4j.create(new double[test_N], new int[]{test_N, 1});          // label of inputs
INDArray predicted_T = Nd4j.create(new double[test_N], new int[]{test_N, 1});     // output data predicted by the model
When we substitute a value into an INDArray, we use put. Be careful that put can only set values of the scalar type:

train_X.put(i, 0, Nd4j.scalar(g1.random()));
train_X.put(i, 1, Nd4j.scalar(g2.random()));
train_T.put(i, Nd4j.scalar(1));
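
The generators g1 and g2 are not shown in this excerpt. A minimal sketch of what such a helper might look like, assuming a simple mean/variance wrapper around java.util.Random (the class and constructor here are hypothetical):

import java.util.Random;

// hypothetical Gaussian sampler matching the g1.random() usage above
class GaussianDistribution {
    private final double mean;
    private final double var;
    private final Random rng;

    GaussianDistribution(double mean, double var, Random rng) {
        this.mean = mean;
        this.var = var;
        this.rng = rng;
    }

    public double random() {
        // sample from N(mean, var)
        return mean + Math.sqrt(var) * rng.nextGaussian();
    }
}

For example, g1 = new GaussianDistribution(-2.0, 1.0, new Random(1234)) would generate inputs clustered around -2.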
The flow of model building and training is the same as in the previous code:

// construct perceptrons
Perceptrons classifier = new Perceptrons(nIn);

// train models
while (true) {
   int classified_ = 0;

   for (int i = 0; i < train_N; i++) {
       classified_ += classifier.train(train_X.getRow(i), train_T.getRow(i), learningRate);
   }

   if (classified_ == train_N) break;  // when all data are classified correctly

   epoch++;
   if (epoch > epochs) break;
}
Each piece of training data is given to the train method by getRow(). First, let's see the entire content of the train method:

public int train(INDArray x, INDArray t, double learningRate) {

   int classified = 0;

   // check if the data is classified correctly
   double c = x.mmul(w).getDouble(0) * t.getDouble(0);

   // apply steepest descent method if the data is wrongly classified
   if (c > 0) {
       classified = 1;
   } else {
       w.addi(x.transpose().mul(t).mul(learningRate));
   }

   return classified;
}
We first focus our attention on the following code:

   // check if the data is classified correctly
   double c = x.mmul(w).getDouble(0) * t.getDouble(0);

This is the part that checks whether the data is classified correctly by the perceptron, as expressed by the following condition:

$t \, w^{\top} x > 0$
You can see from the code that .mmul() is for multiplication between vectors or matrices. We wrote this part of the calculation in Chapter 2, Algorithms for Machine Learning – Preparing for Deep Learning, as follows:

   double c = 0.;

   // check if the data is classified correctly
   for (int i = 0; i < nIn; i++) {
       c += w[i] * x[i] * t;
   }

By comparing both versions, you can see that multiplication between vectors or matrices can be written easily with INDArray, so you can implement the algorithm intuitively, just by following the equations.

The update of the model parameters is implemented as follows:

   w.addi(x.transpose().mul(t).mul(learningRate));

Here, again, you can implement the code the way you would write the math equation. The equation is represented as follows:

$w \leftarrow w + \eta \, t \, x$

where $\eta$ is the learning rate.
The last time we implemented this part, we wrote it with a for loop:

for (int i = 0; i < nIn; i++) {
   w[i] += learningRate * x[i] * t;
}
Furthermore, the prediction after training is just the standard forward activation, shown as the following equation:

$y = f(w^{\top} x)$

Here, $f$ is the step function:

$f(a) = \begin{cases} 1 & \text{if } a \ge 0 \\ -1 & \text{otherwise} \end{cases}$

We can simply define the predict method with just a single line inside, as follows:

public int predict(INDArray x) {

   return step(x.mmul(w).getDouble(0));
}
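
The step method itself is not shown in this excerpt; a minimal sketch consistent with the equation above (labels are +1 or -1):

// step activation: +1 for non-negative input, -1 otherwise
public static int step(double x) {
    return x >= 0 ? 1 : -1;
}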
When you run the program, you can see that its precision, accuracy, and recall are the same as we get with the previous code.
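
The evaluation code is not shown here either. A minimal sketch of how accuracy, precision, and recall could be computed from test_T and predicted_T as defined above (the counting logic is an assumption, treating +1 as the positive class):

int tp = 0, tn = 0, fp = 0, fn = 0;  // confusion-matrix counts

for (int i = 0; i < test_N; i++) {
    int predicted = (int) predicted_T.getDouble(i);
    int actual = (int) test_T.getDouble(i);

    if (actual == 1) {
        if (predicted == 1) tp++; else fn++;
    } else {
        if (predicted == 1) fp++; else tn++;
    }
}

double accuracy  = (double) (tp + tn) / test_N;
double precision = (double) tp / (tp + fp);
double recall    = (double) tp / (tp + fn);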

Thus, it greatly helps that you can implement algorithms in a form analogous to the mathematical equations. We have only implemented perceptrons here, but please try other algorithms by yourself.
