https://github.com/deeplearning4j/dl4j-examples
Tip revision: bf53c259f6ba09f10e8fb03def6d1c797cb84f7b authored by Shams Ul Azeem on 24 November 2017, 06:31:35 UTC
Updated: Determining cloud cover notebook + added ipynb format
04. Feed-forward.json
{"paragraphs":[{"text":"%md\n### Note\n\nPlease view the [README](https://github.com/deeplearning4j/dl4j-examples/tree/overhaul_tutorials/tutorials/README.md) to learn about installing, setting up dependencies, and importing notebooks in Zeppelin","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<div class=\"markdown-body\">\n<h3>Note</h3>\n<p>Please view the <a href=\"https://github.com/deeplearning4j/dl4j-examples/tree/overhaul_tutorials/tutorials/README.md\">README</a> to learn about installing, setting up dependencies, and importing notebooks in Zeppelin</p>\n</div>"}]},"apps":[],"jobName":"paragraph_1508482916926_889358545","id":"20171020-070156_1850232313","dateCreated":"2017-10-20T07:01:56+0000","dateStarted":"2017-10-20T07:39:41+0000","dateFinished":"2017-10-20T07:39:41+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:102"},{"text":"%md\n\n### Background\n\nIn our previous tutorial, we learned about a very simple neural network model - the logistic regression model. Although you can solve many tasks with a simple model like that, most problems require a more complex network configuration. A typical deep learning model consists of many layers between the inputs and outputs. In this tutorial, we are going to learn about one of those configurations: feed-forward neural networks.\n\n### Feed-Forward Networks\n\nFeed-forward networks are those in which there are no cyclic connections between the network layers. The input flows forward towards the output after going through several intermediate layers. 
A typical feed-forward network looks like this:\n\n|---|---|---|\n|**Feed-forward network** | ![A typical feed-forward network](https://upload.wikimedia.org/wikipedia/en/5/54/Feed_forward_neural_net.gif) | [Source](https://upload.wikimedia.org/wikipedia/en/5/54/Feed_forward_neural_net.gif) |\n\nHere you can see a new kind of layer, called a hidden layer. The layers in between the input and output layers are called hidden layers because we don't interact with them directly, so they are not visible from outside the network. There can be more than one hidden layer in a network.\n\nJust like the softmax activation after the output layer in the previous tutorial, there can be an activation function after each layer of the network. Activation functions decide whether a node's output is passed on (activated) to the next layer. There are several common activation functions, such as sigmoid and ReLU.","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<div class=\"markdown-body\">\n<h3>Background</h3>\n<p>In our previous tutorial, we learned about a very simple neural network model - the logistic regression model. Although you can solve many tasks with a simple model like that, most problems require a more complex network configuration. A typical deep learning model consists of many layers between the inputs and outputs. In this tutorial, we are going to learn about one of those configurations: feed-forward neural networks.</p>\n<h3>Feed-Forward Networks</h3>\n<p>Feed-forward networks are those in which there are no cyclic connections between the network layers. The input flows forward towards the output after going through several intermediate layers. 
A typical feed-forward network looks like this:</p>\n<table>\n  <tbody>\n    <tr>\n      <td><strong>Feed-forward network</strong> </td>\n      <td><img src=\"https://upload.wikimedia.org/wikipedia/en/5/54/Feed_forward_neural_net.gif\" alt=\"A typical feed-forward network\" /> </td>\n      <td><a href=\"https://upload.wikimedia.org/wikipedia/en/5/54/Feed_forward_neural_net.gif\">Source</a> </td>\n    </tr>\n  </tbody>\n</table>\n<p>Here you can see a new kind of layer, called a hidden layer. The layers in between the input and output layers are called hidden layers because we don&rsquo;t interact with them directly, so they are not visible from outside the network. There can be more than one hidden layer in a network.</p>\n<p>Just like the softmax activation after the output layer in the previous tutorial, there can be an activation function after each layer of the network. Activation functions decide whether a node&rsquo;s output is passed on (activated) to the next layer. There are several common activation functions, such as sigmoid and ReLU.</p>\n</div>"}]},"apps":[],"jobName":"paragraph_1508482928493_-1550196229","id":"20171020-070208_2069142559","dateCreated":"2017-10-20T07:02:08+0000","dateStarted":"2017-10-20T07:39:41+0000","dateFinished":"2017-10-20T07:39:41+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:103"},{"text":"%md\n\n### Imports","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<div 
class=\"markdown-body\">\n<h3>Imports</h3>\n</div>"}]},"apps":[],"jobName":"paragraph_1508483230210_1223213707","id":"20171020-070710_1843650237","dateCreated":"2017-10-20T07:07:10+0000","dateStarted":"2017-10-20T07:39:41+0000","dateFinished":"2017-10-20T07:39:41+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:104"},{"text":"import org.deeplearning4j.nn.api.OptimizationAlgorithm\nimport org.deeplearning4j.nn.conf.graph.MergeVertex\nimport org.deeplearning4j.nn.conf.layers.{DenseLayer, GravesLSTM, OutputLayer, RnnOutputLayer}\nimport org.deeplearning4j.nn.conf.{ComputationGraphConfiguration, MultiLayerConfiguration, NeuralNetConfiguration, Updater}\nimport org.deeplearning4j.nn.graph.ComputationGraph\nimport org.deeplearning4j.nn.multilayer.MultiLayerNetwork\nimport org.deeplearning4j.nn.weights.WeightInit\nimport org.nd4j.linalg.activations.Activation\nimport org.nd4j.linalg.learning.config.Nesterovs\nimport org.nd4j.linalg.lossfunctions.LossFunctions","user":"anonymous","dateUpdated":"2017-10-20T07:39:51+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":true},"editorMode":"ace/mode/scala","editorHide":false,"tableHide":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"import org.deeplearning4j.nn.api.OptimizationAlgorithm\nimport org.deeplearning4j.nn.conf.graph.MergeVertex\nimport org.deeplearning4j.nn.conf.layers.{DenseLayer, GravesLSTM, OutputLayer, RnnOutputLayer}\nimport org.deeplearning4j.nn.conf.{ComputationGraphConfiguration, MultiLayerConfiguration, NeuralNetConfiguration, Updater}\nimport org.deeplearning4j.nn.graph.ComputationGraph\nimport org.deeplearning4j.nn.multilayer.MultiLayerNetwork\nimport org.deeplearning4j.nn.weights.WeightInit\nimport org.nd4j.linalg.activations.Activation\nimport org.nd4j.linalg.learning.config.Nesterovs\nimport 
org.nd4j.linalg.lossfunctions.LossFunctions\n"}]},"apps":[],"jobName":"paragraph_1508483583551_393267404","id":"20171020-071303_1517144370","dateCreated":"2017-10-20T07:13:03+0000","dateStarted":"2017-10-20T07:39:41+0000","dateFinished":"2017-10-20T07:39:42+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:105"},{"text":"%md\n\n### Let's create the feed-forward network configuration","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1508484128136_-2034033769","id":"20171020-072208_966782035","dateCreated":"2017-10-20T07:22:08+0000","dateStarted":"2017-10-20T07:39:43+0000","dateFinished":"2017-10-20T07:39:43+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:106","errorMessage":""},{"text":"val conf = new NeuralNetConfiguration.Builder()\n    .seed(12345)\n    .iterations(1)\n    .weightInit(WeightInit.XAVIER)\n    .updater(Updater.ADAGRAD)\n    .activation(Activation.RELU)\n    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)\n    .learningRate(0.05)\n    .regularization(true).l2(0.0001)\n    .list()\n    .layer(0, new DenseLayer.Builder().nIn(784).nOut(250).weightInit(WeightInit.XAVIER).activation(Activation.RELU) //First hidden layer\n            .build())\n    .layer(1, new OutputLayer.Builder().nIn(250).nOut(10).weightInit(WeightInit.XAVIER).activation(Activation.SOFTMAX) //Output layer\n            .lossFunction(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)\n            .build())\n    .pretrain(false).backprop(true)\n    
.build()","user":"anonymous","dateUpdated":"2017-10-20T07:39:56+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"scala"},"editorMode":"ace/mode/scala","tableHide":true,"editorHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"conf: org.deeplearning4j.nn.conf.MultiLayerConfiguration =\n{\n  \"backprop\" : true,\n  \"backpropType\" : \"Standard\",\n  \"cacheMode\" : \"NONE\",\n  \"confs\" : [ {\n    \"cacheMode\" : \"NONE\",\n    \"iterationCount\" : 0,\n    \"l1ByParam\" : { },\n    \"l2ByParam\" : { },\n    \"layer\" : {\n      \"dense\" : {\n        \"activationFn\" : {\n          \"ReLU\" : { }\n        },\n        \"adamMeanDecay\" : \"NaN\",\n        \"adamVarDecay\" : \"NaN\",\n        \"biasInit\" : 0.0,\n        \"biasLearningRate\" : 0.05,\n        \"dist\" : null,\n        \"dropOut\" : 0.0,\n        \"epsilon\" : 1.0E-6,\n        \"gradientNormalization\" : \"None\",\n        \"gradientNormalizationThreshold\" : 1.0,\n        \"iupdater\" : {\n          \"@class\" : \"org.nd4j.linalg.learning.config.AdaGrad\",\n          \"epsilon\" : 1.0E-6,\n          \"learningRate\" : 0.05\n        },..."}]},"apps":[],"jobName":"paragraph_1508483629357_588411336","id":"20171020-071349_473511535","dateCreated":"2017-10-20T07:13:49+0000","dateStarted":"2017-10-20T07:39:43+0000","dateFinished":"2017-10-20T07:39:53+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:107"},{"text":"%md\n\n### What did we do here?\n\nAs you can see above, we have made a feed-forward network configuration with one hidden layer. We have used a ReLU activation between the hidden and output layers. ReLU is one of the most popular activation functions. Activation functions also introduce non-linearities into the network so that it can learn more complex features present in the data. 
Hidden layers learn features from the input and pass those features on to the output layer, which produces the corresponding outputs.\n\nYou can similarly make network configurations with more hidden layers:","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<div class=\"markdown-body\">\n<h3>What did we do here?</h3>\n<p>As you can see above, we have made a feed-forward network configuration with one hidden layer. We have used a ReLU activation between the hidden and output layers. ReLU is one of the most popular activation functions. Activation functions also introduce non-linearities into the network so that it can learn more complex features present in the data. 
Hidden layers learn features from the input and pass those features on to the output layer, which produces the corresponding outputs.</p>\n<p>You can similarly make network configurations with more hidden layers:</p>\n</div>"}]},"apps":[],"jobName":"paragraph_1508484461487_-527791133","id":"20171020-072741_712403311","dateCreated":"2017-10-20T07:27:41+0000","dateStarted":"2017-10-20T07:39:53+0000","dateFinished":"2017-10-20T07:39:53+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:108"},{"text":"//Just make sure the number of inputs of the next layer equals the number of outputs of the previous layer.\n//Note: hidden layers are DenseLayers; only the final layer is an OutputLayer with a loss function.\nval conf = new NeuralNetConfiguration.Builder()\n    .seed(12345)\n    .iterations(1)\n    .weightInit(WeightInit.XAVIER)\n    .updater(Updater.ADAGRAD)\n    .activation(Activation.RELU)\n    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)\n    .learningRate(0.05)\n    .regularization(true).l2(0.0001)\n    .list()\n    .layer(0, new DenseLayer.Builder().nIn(784).nOut(250).weightInit(WeightInit.XAVIER).activation(Activation.RELU) //First hidden layer\n            .build())\n    .layer(1, new DenseLayer.Builder().nIn(250).nOut(100).weightInit(WeightInit.XAVIER).activation(Activation.RELU) //Second hidden layer\n            .build())\n    .layer(2, new DenseLayer.Builder().nIn(100).nOut(50).weightInit(WeightInit.XAVIER).activation(Activation.RELU) //Third hidden layer\n            .build())\n    .layer(3, new OutputLayer.Builder().nIn(50).nOut(10).weightInit(WeightInit.XAVIER).activation(Activation.SOFTMAX) //Output layer\n            .lossFunction(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)\n            .build())\n    .pretrain(false).backprop(true)\n    
.build()","user":"anonymous","dateUpdated":"2017-10-20T07:42:03+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":true},"editorMode":"ace/mode/scala","editorHide":false,"tableHide":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"conf: org.deeplearning4j.nn.conf.MultiLayerConfiguration =\n{\n  \"backprop\" : true,\n  \"backpropType\" : \"Standard\",\n  \"cacheMode\" : \"NONE\",\n  \"confs\" : [ {\n    \"cacheMode\" : \"NONE\",\n    \"iterationCount\" : 0,\n    \"l1ByParam\" : { },\n    \"l2ByParam\" : { },\n    \"layer\" : {\n      \"dense\" : {\n        \"activationFn\" : {\n          \"ReLU\" : { }\n        },\n        \"adamMeanDecay\" : \"NaN\",\n        \"adamVarDecay\" : \"NaN\",\n        \"biasInit\" : 0.0,\n        \"biasLearningRate\" : 0.05,\n        \"dist\" : null,\n        \"dropOut\" : 0.0,\n        \"epsilon\" : 1.0E-6,\n        \"gradientNormalization\" : \"None\",\n        \"gradientNormalizationThreshold\" : 1.0,\n        \"iupdater\" : {\n          \"@class\" : \"org.nd4j.linalg.learning.config.AdaGrad\",\n          \"epsilon\" : 1.0E-6,\n          \"learningRate\" : 0.05\n        },..."}]},"apps":[],"jobName":"paragraph_1508484841551_1351196860","id":"20171020-073401_1145697495","dateCreated":"2017-10-20T07:34:01+0000","dateStarted":"2017-10-20T07:41:59+0000","dateFinished":"2017-10-20T07:41:59+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:109"},{"text":"%md\n\n### What's next?\n\n- Check out all of our tutorials available [on Github](https://github.com/deeplearning4j/dl4j-examples/tree/master/tutorials). 
Notebooks are numbered for easy following.","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<div class=\"markdown-body\">\n<h3>What&rsquo;s next?</h3>\n<ul>\n  <li>Check out all of our tutorials available <a href=\"https://github.com/deeplearning4j/dl4j-examples/tree/master/tutorials\">on Github</a>. Notebooks are numbered for easy following.</li>\n</ul>\n</div>"}]},"apps":[],"jobName":"paragraph_1508484111504_-375532888","id":"20171020-072151_195526063","dateCreated":"2017-10-20T07:21:51+0000","dateStarted":"2017-10-20T07:39:54+0000","dateFinished":"2017-10-20T07:39:54+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:110"},{"text":"%md\n","user":"anonymous","dateUpdated":"2017-10-20T07:39:44+0000","config":{"colWidth":12,"fontSize":9,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1508484118092_-539924348","id":"20171020-072158_2072802023","dateCreated":"2017-10-20T07:21:58+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:111"}],"name":"Feed-forward","id":"2CWZMRAJQ","angularObjects":{"2CYUXYA5D:shared_process":[],"2CWKFWRMT:shared_process":[],"2CXBS1NKG:shared_process":[],"2CWGSEBDH:shared_process":[],"2CY5MAMP7:shared_process":[],"2CY8FT68P:shared_process":[],"2CYT5E816:shared_process":[],"2CVKF762K:shared_process":[],"2CVFFNGQ1:shared_process":[],"2CVXT54KH:shared_process":[],"2CWNUX3BY:shared_process":[],"2CVKHQX96:shared_process":[],"2CWYXKD9P:shared_process":[],"2CVJR5WZH:shared_process":[],"2CVFW4QC1:shared_process":[],"2CWZFE3AE:shared_process":[]
,"2CYQ734ZP:shared_process":[],"2CVHTA1KV:shared_process":[],"2CXUYR18D:shared_process":[],"2CWVYEYGS:shared_process":[]},"config":{"looknfeel":"default","personalizedMode":"false"},"info":{}}
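The forward pass this notebook describes (dense layers with ReLU activations feeding a softmax output) can also be sketched without any framework. The tiny 2-unit weight matrices and object name below are made up for illustration and are not part of the DL4J configuration above; real networks such as the 784-250-10 one in the notebook work the same way, just with larger arrays.

```scala
// Minimal framework-free sketch of a feed-forward pass:
// input -> dense -> ReLU -> dense -> softmax
object ForwardPassSketch {

  // ReLU: pass positive values through, zero out negatives
  def relu(v: Array[Double]): Array[Double] =
    v.map(x => math.max(0.0, x))

  // Softmax: exponentiate and normalize so the outputs sum to 1
  def softmax(v: Array[Double]): Array[Double] = {
    val m = v.max                         // subtract max for numerical stability
    val e = v.map(x => math.exp(x - m))
    val s = e.sum
    e.map(_ / s)
  }

  // Dense layer: out(j) = sum_i in(i) * w(i)(j) + b(j)
  // w has dimensions (nIn x nOut); b has length nOut
  def dense(in: Array[Double], w: Array[Array[Double]], b: Array[Double]): Array[Double] =
    b.indices.map(j => in.indices.map(i => in(i) * w(i)(j)).sum + b(j)).toArray

  // One hidden layer with ReLU, then a softmax output layer
  def forward(in: Array[Double],
              w1: Array[Array[Double]], b1: Array[Double],
              w2: Array[Array[Double]], b2: Array[Double]): Array[Double] =
    softmax(dense(relu(dense(in, w1, b1)), w2, b2))
}
```

Note how the inner dimension of `w2` must equal the outer dimension of `w1`; this is the same constraint as matching `nIn` of each DL4J layer to `nOut` of the previous one.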