Filename: neuralnetworks-master (1).zip
Download link: https://bbs.pinggu.org/a-2095652.html
Deep Neural Networks with GPU Support
This is a Java implementation of some of the algorithms for training deep neural networks. GPU support is provided via OpenCL and Aparapi. The architecture is designed with modularity, extensibility and pluggability in mind.

Git structure

I'm using the git-flow model. The most stable (but older) sources are available in the master branch, while the latest ones are in the develop branch. If you want to use the previous Java 7 compatible version, you can check out this release.
Neural network types

All the algorithms support GPU execution. Out of the box, the supported datasets are MNIST, CIFAR-10/CIFAR-100 (experimental, not thoroughly tested), IRIS and XOR, but you can easily implement your own. There is experimental support for RGB image preprocessing operations - affine transformations, cropping, and color scaling (see GeneralTest.java -> testImageInputProvider).
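As a rough illustration of the color scaling mentioned above, here is a minimal sketch, not the library's actual preprocessing code, that maps 8-bit RGB channel values stored in a flat array into the [0, 1] range:

```java
// Minimal color-scaling sketch: map 8-bit RGB channel values held in a
// flat int array (one entry per channel) into floats in [0, 1].
// This is an illustration only, not the library's image preprocessing code.
public final class ColorScaling {
    private ColorScaling() {
    }

    public static float[] scaleToUnitRange(int[] channels) {
        float[] scaled = new float[channels.length];
        for (int i = 0; i < channels.length; i++) {
            scaled[i] = channels[i] / 255f; // 0..255 -> 0..1
        }
        return scaled;
    }
}
```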
Activation functions

All the functions support GPU execution. They can be applied to all types of networks and all training algorithms. You can also implement new activations.
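To show what a new activation could look like, here is a minimal sketch. The single-method interface is an assumption made for illustration; the library's real activations are implemented as Aparapi-backed calculators:

```java
// Sketch of a custom activation (leaky ReLU). The Activation interface is a
// simplified stand-in, not the library's actual abstraction.
interface Activation {
    float value(float x);
}

class LeakyReLU implements Activation {
    private final float negativeSlope; // slope for negative inputs, e.g. 0.01f

    LeakyReLU(float negativeSlope) {
        this.negativeSlope = negativeSlope;
    }

    @Override
    public float value(float x) {
        return x >= 0f ? x : negativeSlope * x;
    }

    // Activations are applied over flat one-dimensional arrays, matching the
    // data layout described later in this document.
    void applyInPlace(float[] activations) {
        for (int i = 0; i < activations.length; i++) {
            activations[i] = value(activations[i]);
        }
    }
}
```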
How to build the library

The samples are organized as unit tests. If you want to see examples on various popular datasets, you can go to nn-samples/src/test/java/com/github/neuralnetworks/samples/ (a skeleton of their shape is sketched after the next paragraph).

Library structure

There are two projects: nn-core (the library itself) and nn-samples (samples and tests for various datasets).
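The samples in nn-samples follow a common unit-test shape, sketched below. The class and method names are placeholders, not the actual sample code:

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Skeleton mirroring how samples are laid out under nn-samples: each sample
// is a JUnit test that builds a network, trains it, and asserts on the test
// error. The names and body here are placeholders, not the real sample code.
public class XorSampleSkeleton {
    @Test
    public void testXor() {
        float testError = trainAndEvaluate();
        assertTrue("the network should learn XOR", testError < 0.1f);
    }

    // In a real sample this would use the library's factories and trainers.
    private float trainAndEvaluate() {
        return 0.0f; // placeholder result so the skeleton is self-contained
    }
}
```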
The software design is tiered, with each tier depending on the previous ones.

Network architecture

This is the first tier. Each network is defined by a list of layers, and each layer has a set of connections that link it to the other layers of the network, making the network a directed acyclic graph. This structure can accommodate simple feedforward networks, but also more complex architectures like the one described in http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf. You can also build your own custom networks.
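The following sketch illustrates this layer/connection structure with simplified stand-ins; these are not the library's actual Layer and Connections classes:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the structure described above: a network is a
// list of layers, and each layer keeps the connections that link it to
// other layers, so the network forms a directed acyclic graph.
class SketchLayer {
    final List<SketchConnection> connections = new ArrayList<>();
}

class SketchConnection {
    final SketchLayer input;
    final SketchLayer output;
    final float[] weights; // flat weight array, matching the one-dim layout used later

    SketchConnection(SketchLayer input, SketchLayer output, int weightCount) {
        this.input = input;
        this.output = output;
        this.weights = new float[weightCount];
        input.connections.add(this);  // each endpoint keeps a reference,
        output.connections.add(this); // which yields the graph structure
    }
}

class SketchNetwork {
    final List<SketchLayer> layers = new ArrayList<>();
}
```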
Data propagation

This tier propagates data through the network, taking advantage of its graph structure. There are two main base components: the LayerCalculator, which determines the order in which the layers are calculated, and the ConnectionCalculator, which computes the values propagated through each connection.

Most of the ConnectionCalculator implementations are optimized for GPU execution. Aparapi imposes some important restrictions on the code that can be executed on the GPU. The most significant are that only one-dimensional arrays (and variables) of primitive data types are allowed, and that only member methods of the Aparapi Kernel class itself can be called from GPU-executable code. Therefore, before each GPU calculation, all the data is converted to one-dimensional arrays and primitive-type variables. Because of this, all Aparapi neuron types use either AparapiWeightedSum (for fully connected layers and weighted-sum input functions), AparapiSubsampling2D (for subsampling layers) or AparapiConv2D (for convolutional layers). Most of the data is represented as a one-dimensional array by default (for example, Matrix).
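Here is a generic Aparapi kernel sketch that respects these restrictions: all state is held in one-dimensional primitive arrays and all GPU-side logic lives in the Kernel subclass. It illustrates the idea behind AparapiWeightedSum but is not the library's code; note also that the Aparapi package name depends on the version (older releases, as used by this library, lived under com.amd.aparapi):

```java
import com.aparapi.Kernel;
import com.aparapi.Range;

// Generic weighted-sum kernel in the spirit of AparapiWeightedSum (not the
// library's actual implementation). Only one-dimensional primitive arrays
// are touched, and only Kernel member methods are called from run().
public class WeightedSumSketch extends Kernel {
    private final float[] input;   // flattened input activations
    private final float[] weights; // flattened weight matrix, row-major
    private final float[] output;  // one value per output neuron
    private final int inputSize;

    public WeightedSumSketch(float[] input, float[] weights, int outputCount) {
        this.input = input;
        this.weights = weights;
        this.inputSize = input.length;
        this.output = new float[outputCount];
    }

    @Override
    public void run() {
        int row = getGlobalId(); // one work item per output neuron
        float sum = 0f;
        for (int i = 0; i < inputSize; i++) {
            sum += weights[row * inputSize + i] * input[i];
        }
        output[row] = sum;
    }

    public float[] calculate() {
        execute(Range.create(output.length)); // falls back to CPU if no GPU is available
        return output;
    }
}
```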
Training

All the trainers use the Trainer base class. They are optimized to run on the GPU, but you can plug in other implementations and new training algorithms. The training procedure has training and testing phases. Each Trainer receives its parameters (for example, learning rate, momentum, etc.) via Properties (a HashMap); for the properties supported by each trainer, please check the TrainerFactory class.

Input data

Input is provided to the neural network by the trainers via the TrainingInputProvider interface. Each TrainingInputProvider provides training samples in the form of TrainingInputData (the default implementation is TrainingInputDataImpl). The input can be modified by a list of modifiers - for example, MeanInputFunction (for subtracting the mean value) and ScalingInputFunction (for scaling within a range). Currently, MnistInputProvider and IrisInputProvider are implemented.
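To illustrate the input pipeline, here is a simplified XOR-style provider. The class and its methods are stand-ins made up for this sketch; the real TrainingInputProvider and TrainingInputData contracts have more to them than shown here:

```java
// Simplified stand-in for the TrainingInputProvider idea: serve (input,
// target) pairs one at a time, passing inputs through a scaling step in the
// spirit of ScalingInputFunction. Not the library's actual API. For each
// sample, call nextInput() first and then nextTarget().
class XorInputSketch {
    private final float[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    private final float[][] targets = {{0}, {1}, {1}, {0}};
    private int cursor;

    // Scale each component into [0, 1] given the known maximum value.
    private float[] scale(float[] sample, float max) {
        float[] out = new float[sample.length];
        for (int i = 0; i < sample.length; i++) {
            out[i] = sample[i] / max;
        }
        return out;
    }

    float[] nextInput() {
        return scale(inputs[cursor % inputs.length], 1f); // XOR is already in range
    }

    float[] nextTarget() {
        return targets[cursor++ % targets.length];
    }
}
```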
Author

Ivan Vasilev (ivanvasilev [at] gmail (dot) com)

License