https://github.com/deeplearning4j/dl4j-examples
Raw File
Tip revision: 17f1905ffb20058e4f84b172d36bbd8a93760ba4 authored by Max Pumperla on 18 December 2018, 09:32:03 UTC
wrap up DCGAN
Tip revision: 17f1905
17. Instacart Multitask Example.json
{"paragraphs":[{"text":"%md\n### Note\n\nPlease view the [README](https://github.com/deeplearning4j/deeplearning4j/tree/master/dl4j-examples/tutorials/README.md) to learn about installing, setting up dependencies, and importing notebooks in Zeppelin","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Note</h3>\n<p>Please view the <a href=\"https://github.com/deeplearning4j/deeplearning4j/tree/master/dl4j-examples/tutorials/README.md\">README</a> to learn about installing, setting up dependencies, and importing notebooks in Zeppelin</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335327_-1331712214","id":"20180427-083911_1128930448","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:9852"},{"text":"%md\n\n### Background\n\nIn this tutorial we will use a LSTM neural network to predict instacart users' purchasing behavior given a history of their past orders. The data originially comes from a Kaggle challenge (kaggle.com/c/instacart-market-basket-analysis). We first removed users that only made 1 order using the instacart app and then took 5000 users out of the remaining to be part of the data for this tutorial. \n\nFor each order, we have information on the product the user purchased. For example, there is information on the product name, what aisle it is found in, and the department it falls under. To construct features, we extracted indicators representing whether or not a user purchased a product in the given aisles for each order. In total there are 134 aisles. The targets were whether or not a user will buy a product in the breakfast department in the next order. We also used auxiliary targets to train this LSTM. The auxiliary targets were whether or not a user will buy a product in the dairy department in the next order.\n\nWe suspect that a LSTM will be effective for this task, because of the temporal dependencies in the data.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Background</h3>\n<p>In this tutorial we will use a LSTM neural network to predict instacart users' purchasing behavior given a history of their past orders. The data originially comes from a Kaggle challenge (kaggle.com/c/instacart-market-basket-analysis). We first removed users that only made 1 order using the instacart app and then took 5000 users out of the remaining to be part of the data for this tutorial.</p>\n<p>For each order, we have information on the product the user purchased. For example, there is information on the product name, what aisle it is found in, and the department it falls under. To construct features, we extracted indicators representing whether or not a user purchased a product in the given aisles for each order. In total there are 134 aisles. The targets were whether or not a user will buy a product in the breakfast department in the next order. We also used auxiliary targets to train this LSTM. The auxiliary targets were whether or not a user will buy a product in the dairy department in the next order.</p>\n<p>We suspect that a LSTM will be effective for this task, because of the temporal dependencies in the data.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335328_-1345947923","id":"20180427-085703_34966348","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9853"},{"text":"%md\n### Imports\n","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Imports</h3>\n"}]},"apps":[],"jobName":"paragraph_1529916335328_-1345947923","id":"20180427-085648_929291696","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9854"},{"text":"import org.deeplearning4j.nn.api.OptimizationAlgorithm;\nimport org.deeplearning4j.nn.conf.NeuralNetConfiguration;\nimport org.deeplearning4j.nn.conf.Updater;\nimport org.deeplearning4j.nn.conf.layers.LSTM;\nimport org.deeplearning4j.nn.weights.WeightInit;\nimport org.nd4j.linalg.activations.Activation;\nimport org.deeplearning4j.nn.conf.layers.RnnOutputLayer;\nimport org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;\nimport org.deeplearning4j.nn.conf.GradientNormalization;\nimport org.deeplearning4j.eval.ROC;\nimport org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader;\nimport org.datavec.api.records.reader.SequenceRecordReader;\nimport org.datavec.api.split.NumberedFileInputSplit;\nimport org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator;\nimport org.nd4j.linalg.dataset.api.iterator.MultiDataSetIterator;\nimport org.deeplearning4j.nn.conf.ComputationGraphConfiguration;\nimport org.deeplearning4j.nn.graph.ComputationGraph;\nimport org.nd4j.linalg.dataset.api.MultiDataSet;\nimport org.nd4j.linalg.api.ndarray.INDArray;\nimport java.io.File;\nimport java.net.URL;\nimport java.io.BufferedInputStream;\nimport java.io.FileInputStream;\nimport java.io.BufferedOutputStream;\nimport java.io.FileOutputStream;\nimport org.apache.commons.io.FilenameUtils;\nimport org.apache.commons.io.FileUtils;\nimport org.apache.commons.compress.archivers.tar.TarArchiveInputStream;\nimport org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;\nimport org.apache.commons.compress.archivers.tar.TarArchiveEntry;","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"import org.deeplearning4j.nn.api.OptimizationAlgorithm\nimport org.deeplearning4j.nn.conf.NeuralNetConfiguration\nimport org.deeplearning4j.nn.conf.Updater\nimport org.deeplearning4j.nn.conf.layers.GravesLSTM\nimport org.deeplearning4j.nn.weights.WeightInit\nimport org.nd4j.linalg.activations.Activation\nimport org.deeplearning4j.nn.conf.layers.RnnOutputLayer\nimport org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction\nimport org.deeplearning4j.nn.conf.GradientNormalization\nimport org.deeplearning4j.eval.ROC\nimport org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader\nimport org.datavec.api.records.reader.SequenceRecordReader\nimport org.datavec.api.split.NumberedFileInputSplit\nimport org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator\nimport org.nd4j.linalg.dataset.api.iterator.MultiDataSetIterator\nimport org.deeplearning4j.nn.conf.ComputationGraphConfiguration\nimport org.deeplearning4j.nn.graph.ComputationGraph\nimport org.nd4j.linalg.dataset.api.MultiDataSet\nimport org.nd4j.linalg.api.ndarray.INDArray\nimport java.io.File\nimport java.net.URL\nimport java.io.BufferedInputStream\nimport java.io.FileInputStream\nimport java.io.BufferedOutputStream\nimport java.io.FileOutputStream\nimport org.apache.commons.io.FilenameUtils\nimport org.apache.commons.io.FileUtils\nimport org.apache.commons.compress.archivers.tar.TarArchiveInputStream\nimport org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream\nimport org.apache.commons.compress.archivers.tar.TarArchiveEntry\n"}]},"apps":[],"jobName":"paragraph_1529916335328_-1345947923","id":"20180427-083930_917505582","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9855"},{"text":"%md \n\n### Download Data","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Download Data</h3>\n"}]},"apps":[],"jobName":"paragraph_1529916335329_-1346332672","id":"20180427-083933_1234393367","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9856"},{"text":"%md\nTo download the data, we will create a temporary directory that will store the data files, extract the tar.gz file from the url, and place it in the specified directory.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>To download the data, we will create a temporary directory that will store the data files, extract the tar.gz file from the url, and place it in the specified directory.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335329_-1346332672","id":"20180427-084225_1613961353","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9857"},{"text":"val DATA_URL = \"https://bpstore1.blob.core.windows.net/tutorials/instacart.tar.gz\"\nval DATA_PATH = FilenameUtils.concat(System.getProperty(\"java.io.tmpdir\"), \"dl4j_instacart/\")","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"DATA_URL: String = https://bpstore1.blob.core.windows.net/tutorials/instacart.tar.gz\nDATA_PATH: String = /tmp/dl4j_instacart/\n"}]},"apps":[],"jobName":"paragraph_1529916335329_-1346332672","id":"20180427-084247_2085419306","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9858"},{"text":"val directory = new File(DATA_PATH)\ndirectory.mkdir() \n\nval archizePath = DATA_PATH + \"instacart.tar.gz\"\nval archiveFile = new File(archizePath)\nval extractedPath = DATA_PATH + \"instacart\" \nval extractedFile = new File(extractedPath)\n\nFileUtils.copyURLToFile(new URL(DATA_URL), archiveFile) ","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"directory: java.io.File = /tmp/dl4j_instacart\nres4: Boolean = false\narchizePath: String = /tmp/dl4j_instacart/instacart.tar.gz\narchiveFile: java.io.File = /tmp/dl4j_instacart/instacart.tar.gz\nextractedPath: String = /tmp/dl4j_instacart/instacart\nextractedFile: java.io.File = /tmp/dl4j_instacart/instacart\n"}]},"apps":[],"jobName":"paragraph_1529916335329_-1346332672","id":"20180427-084313_1474080225","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9859"},{"text":"%md\nWe will then extract the data from the tar.gz file, recreate directories within the tar.gz file into our temporary directories, and copy the files from the tar.gz file.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We will then extract the data from the tar.gz file, recreate directories within the tar.gz file into our temporary directories, and copy the files from the tar.gz file.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335330_-1345178426","id":"20180427-084416_1688783838","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9860"},{"text":"var fileCount = 0\nvar dirCount = 0\nval BUFFER_SIZE = 4096\nval tais = new TarArchiveInputStream(new GzipCompressorInputStream( new BufferedInputStream( new FileInputStream(archizePath))))\n\nvar entry = tais.getNextEntry().asInstanceOf[TarArchiveEntry]\n\nwhile(entry != null){\n    if (entry.isDirectory()) {\n        new File(DATA_PATH + entry.getName()).mkdirs()\n        dirCount = dirCount + 1\n        fileCount = 0\n    }\n    else {\n        \n        val data = new Array[scala.Byte](4 * BUFFER_SIZE)\n\n        val fos = new FileOutputStream(DATA_PATH + entry.getName());\n        val dest = new BufferedOutputStream(fos, BUFFER_SIZE);\n        var count = tais.read(data, 0, BUFFER_SIZE)\n        \n        while (count != -1) {\n            dest.write(data, 0, count)\n            count = tais.read(data, 0, BUFFER_SIZE)\n        }\n        \n        dest.close()\n        fileCount = fileCount + 1\n    }\n    if(fileCount % 1000 == 0){\n        print(\".\")\n    }\n    \n    entry = tais.getNextEntry().asInstanceOf[TarArchiveEntry]\n}","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"scala","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/scala","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"fileCount: Int = 0\ndirCount: Int = 0\nBUFFER_SIZE: Int = 4096\ntais: org.apache.commons.compress.archivers.tar.TarArchiveInputStream = org.apache.commons.compress.archivers.tar.TarArchiveInputStream@202693b3\nentry: org.apache.commons.compress.archivers.tar.TarArchiveEntry = org.apache.commons.compress.archivers.tar.TarArchiveEntry@35281714\n..................."}]},"apps":[],"jobName":"paragraph_1529916335330_-1345178426","id":"20180427-084719_19470875","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9861"},{"text":"%md\n### DataSetIterators","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>DataSetIterators</h3>\n"}]},"apps":[],"jobName":"paragraph_1529916335330_-1345178426","id":"20180427-084743_1654181620","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9862"},{"text":"%md\nNext we will convert the raw data (csv files) into DataSetIterators, which will be fed into a neural network. Our training data will have 4000 examples which will be represented by a single DataSetIterator, and the testing data will have 1000 examples which will be represented by a separate DataSetIterator.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>Next we will convert the raw data (csv files) into DataSetIterators, which will be fed into a neural network. Our training data will have 4000 examples which will be represented by a single DataSetIterator, and the testing data will have 1000 examples which will be represented by a separate DataSetIterator.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335330_-1345178426","id":"20180427-084756_600711638","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9863"},{"text":"val path = FilenameUtils.concat(DATA_PATH, \"instacart/\") // set parent directory\n\nval featureBaseDir = FilenameUtils.concat(path, \"features\") // set feature directory\nval targetsBaseDir = FilenameUtils.concat(path, \"breakfast\") // set label directory\nval auxilBaseDir = FilenameUtils.concat(path, \"dairy\") // set futures directory","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"scala","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/scala","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"path: String = /tmp/dl4j_instacart/instacart/\nfeatureBaseDir: String = /tmp/dl4j_instacart/instacart/features\ntargetsBaseDir: String = /tmp/dl4j_instacart/instacart/breakfast\nauxilBaseDir: String = /tmp/dl4j_instacart/instacart/dairy\n"}]},"apps":[],"jobName":"paragraph_1529916335331_-1345563175","id":"20180427-084821_1455072896","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9864"},{"text":"%md\n\nWe first initialize CSVSequenceRecordReaders, which will parse the raw data into record-like format. Because we will be using multitask learning, we will use two outputs. Thus we need three RecordReaders in total: one for the input, another for the first target, and the last for the second target. Next, we will need the RecordreaderMultiDataSetIterator, since we now have two outputs. We can add our SequenceRecordReaders using the addSequenceReader methods and specify the input and both outputs. The ALIGN_END alignment mode is used, since the sequences for each example vary in length.\n\nWe will create DataSetIterators for both the training data and the test data.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We first initialize CSVSequenceRecordReaders, which will parse the raw data into record-like format. Because we will be using multitask learning, we will use two outputs. Thus we need three RecordReaders in total: one for the input, another for the first target, and the last for the second target. Next, we will need the RecordreaderMultiDataSetIterator, since we now have two outputs. We can add our SequenceRecordReaders using the addSequenceReader methods and specify the input and both outputs. The ALIGN_END alignment mode is used, since the sequences for each example vary in length.</p>\n<p>We will create DataSetIterators for both the training data and the test data.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335331_-1345563175","id":"20180427-091115_1295205739","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9865"},{"text":"val trainFeatures = new CSVSequenceRecordReader(1, \",\");\ntrainFeatures.initialize( new NumberedFileInputSplit(featureBaseDir + \"/%d.csv\", 1, 4000));\n\nval trainBreakfast = new CSVSequenceRecordReader(1, \",\");\ntrainBreakfast.initialize( new NumberedFileInputSplit(targetsBaseDir + \"/%d.csv\", 1, 4000));\n\nval trainDairy = new CSVSequenceRecordReader(1, \",\");\ntrainDairy.initialize(new NumberedFileInputSplit(auxilBaseDir + \"/%d.csv\", 1, 4000));\n\nval train =  new RecordReaderMultiDataSetIterator.Builder(20)\n    .addSequenceReader(\"rr1\", trainFeatures).addInput(\"rr1\")\n    .addSequenceReader(\"rr2\",trainBreakfast).addOutput(\"rr2\")\n    .addSequenceReader(\"rr3\",trainDairy).addOutput(\"rr3\")\n    .sequenceAlignmentMode(RecordReaderMultiDataSetIterator.AlignmentMode.ALIGN_END)\n    .build();","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"trainFeatures: org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader = org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader@583d2354\ntrainBreakfast: org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader = org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader@6b568ac2\ntrainDairy: org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader = org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader@2745630d\ntrain: org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator = org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator@1ecfbd53\n"}]},"apps":[],"jobName":"paragraph_1529916335331_-1345563175","id":"20180427-084856_1924041114","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9866"},{"text":"val testFeatures = new CSVSequenceRecordReader(1, \",\");\ntestFeatures.initialize( new NumberedFileInputSplit(featureBaseDir + \"/%d.csv\", 4001, 5000));\n\nval testBreakfast = new CSVSequenceRecordReader(1, \",\");\ntestBreakfast.initialize( new NumberedFileInputSplit(targetsBaseDir + \"/%d.csv\", 4001, 5000));\n\nval testDairy = new CSVSequenceRecordReader(1, \",\");\ntestDairy.initialize(new NumberedFileInputSplit(auxilBaseDir + \"/%d.csv\", 4001, 5000));\n\nval test =  new RecordReaderMultiDataSetIterator.Builder(20)\n    .addSequenceReader(\"rr1\", testFeatures).addInput(\"rr1\")\n    .addSequenceReader(\"rr2\",testBreakfast).addOutput(\"rr2\")\n    .addSequenceReader(\"rr3\",testDairy).addOutput(\"rr3\")\n    .sequenceAlignmentMode(RecordReaderMultiDataSetIterator.AlignmentMode.ALIGN_END)\n    .build();","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"testFeatures: org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader = org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader@229f2a24\ntestBreakfast: org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader = org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader@2ae7bfa4\ntestDairy: org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader = org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader@274837cc\ntest: org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator = org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator@77008e55\n"}]},"apps":[],"jobName":"paragraph_1529916335331_-1345563175","id":"20180427-085034_247955046","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9867"},{"text":"%md \n\n### Neural Network","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Neural Network</h3>\n"}]},"apps":[],"jobName":"paragraph_1529916335332_-1347486919","id":"20180427-091302_1119629475","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9868"},{"text":"%md\nThe next task is to set up the neural network configuration. We see below that the ComputationGraph class is used to create a LSTM with two outputs. We can set the outputs using the setOutputs method of the NeuralNetConfiguraitonBuilder. One GravesLSTM layer and two RnnOutputLayers will be used. We will also set other hyperparameters of the model, such as dropout, weight initialization, updaters, and activation functions.\n","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>The next task is to set up the neural network configuration. We see below that the ComputationGraph class is used to create a LSTM with two outputs. We can set the outputs using the setOutputs method of the NeuralNetConfiguraitonBuilder. One GravesLSTM layer and two RnnOutputLayers will be used. We will also set other hyperparameters of the model, such as dropout, weight initialization, updaters, and activation functions.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335332_-1347486919","id":"20180427-091406_433497358","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9869"},{"text":" val conf = new NeuralNetConfiguration.Builder()\n    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)\n    .seed(12345)\n    .weightInit(WeightInit.XAVIER)\n    .dropOut(0.25)\n    .graphBuilder()\n    .addInputs(\"input\")\n    .addLayer(\"L1\", new LSTM.Builder()\n        .nIn(134).nOut(150)\n        .updater(Updater.ADAM)\n        .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)\n        .gradientNormalizationThreshold(10)\n        .activation(Activation.TANH)\n        .build(), \"input\")\n    .addLayer(\"out1\", new RnnOutputLayer.Builder(LossFunction.XENT)\n        .updater(Updater.ADAM)\n        .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)\n        .gradientNormalizationThreshold(10)\n        .activation(Activation.SIGMOID)\n        .nIn(150).nOut(1).build(), \"L1\")\n    .addLayer(\"out2\", new RnnOutputLayer.Builder(LossFunction.XENT)\n        .updater(Updater.ADAM)\n        .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)\n        .gradientNormalizationThreshold(10)\n        .activation(Activation.SIGMOID)\n        .nIn(150).nOut(1).build(), \"L1\")\n    .setOutputs(\"out1\",\"out2\")\n    .pretrain(false).backprop(true)\n    .build();","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"warning: there were 3 deprecation warning(s); re-run with -deprecation for details\nconf: org.deeplearning4j.nn.conf.ComputationGraphConfiguration = \n{\n  \"backprop\" : true,\n  \"backpropType\" : \"Standard\",\n  \"cacheMode\" : \"NONE\",\n  \"defaultConfiguration\" : {\n    \"cacheMode\" : \"NONE\",\n    \"epochCount\" : 0,\n    \"iterationCount\" : 0,\n    \"l1ByParam\" : { },\n    \"l2ByParam\" : { },\n    \"layer\" : null,\n    \"maxNumLineSearchIterations\" : 5,\n    \"miniBatch\" : true,\n    \"minimize\" : true,\n    \"optimizationAlgo\" : \"STOCHASTIC_GRADIENT_DESCENT\",\n    \"pretrain\" : false,\n    \"seed\" : 12345,\n    \"stepFunction\" : null,\n    \"variables\" : [ ]\n  },\n  \"epochCount\" : 0,\n  \"inferenceWorkspaceMode\" : \"SEPARATE\",\n  \"iterationCount\" : 0,\n  \"networkInputs\" : [ \"input\" ],\n  \"networkOutputs\" : [ \"out1\", \"out2\" ],\n  \"pretrain\" : false,\n  \"tbpttBackLength\" : 20,\n  \"tbpttFwdLength\" : 20,\n  \"trainingWo..."}]},"apps":[],"jobName":"paragraph_1529916335332_-1347486919","id":"20180427-085254_1614475634","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9870"},{"text":"%md\n\nWe must then initialize the neural network.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We must then initialize the neural network.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335333_-1347871668","id":"20180427-091531_1654516559","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9871"},{"text":"val net = new ComputationGraph(conf);\nnet.init();","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"net: org.deeplearning4j.nn.graph.ComputationGraph = org.deeplearning4j.nn.graph.ComputationGraph@798636b2\n"}]},"apps":[],"jobName":"paragraph_1529916335333_-1347871668","id":"20180427-085416_2130664426","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9872"},{"text":"%md\n\n### Model Training","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Model Training</h3>\n"}]},"apps":[],"jobName":"paragraph_1529916335333_-1347871668","id":"20180427-091609_209488885","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9873"},{"text":"%md\nTo train the model, we use 5 epochs with a for loop and simply call the fit method of the ComputationGraph.","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>To train the model, we use 5 epochs with a for loop and simply call the fit method of the ComputationGraph.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335334_-1346717421","id":"20180427-091612_690496444","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9874"},{"text":"for( epoch <- 1 to 5){\n    println(\"Epoch \"+ epoch);\n    net.fit( train );\n    train.reset();\n}","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"Epoch 1\nEpoch 2\nEpoch 3\nEpoch 4\nEpoch 5\n"}]},"apps":[],"jobName":"paragraph_1529916335334_-1346717421","id":"20180427-085448_34088845","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9875"},{"text":"%md\n### Model Evaluation","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Model Evaluation</h3>\n"}]},"apps":[],"jobName":"paragraph_1529916335334_-1346717421","id":"20180427-091640_1250025060","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9876"},{"text":"%md\nWe will now evaluate our trained model on the original task, which was predicting whether or not a user will purchase a product in the breakfast department. Note that we will use the area under the curve (AUC) metric of the ROC curve.  ","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We will now evaluate our trained model on the original task, which was predicting whether or not a user will purchase a product in the breakfast department. Note that we will use the area under the curve (AUC) metric of the ROC curve.</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335334_-1346717421","id":"20180427-092017_454140891","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9877"},{"text":" // Evaluate model\n\nval roc = new ROC();\n\ntest.reset();\n\nwhile(test.hasNext()){\n    val next = test.next();\n    val features =  next.getFeatures();\n    val output = net.output(features(0));\n    roc.evalTimeSeries(next.getLabels()(0), output(0));\n}\n\nprintln(roc.calculateAUC());","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/scala","results":{},"enabled":true,"editorSetting":{"language":"scala"}},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"roc: org.deeplearning4j.eval.ROC = ROC(thresholdSteps=0, countActualPositive=0, countActualNegative=0, counts={}, auc=NaN, auprc=NaN, isExact=true, exampleCount=0, rocRemoveRedundantPts=true)\n0.7508926892386332\n"}]},"apps":[],"jobName":"paragraph_1529916335335_-1347102170","id":"20180427-085523_840659","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9878"},{"text":"%md\n\nWe achieve a AUC of 0.75!","dateUpdated":"2018-06-25T08:45:35+0000","config":{"tableHide":false,"editorSetting":{"language":"markdown","editOnDblClick":true},"colWidth":12,"editorMode":"ace/mode/markdown","editorHide":true,"results":{},"enabled":true},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We achieve a AUC of 0.75!</p>\n"}]},"apps":[],"jobName":"paragraph_1529916335335_-1347102170","id":"20180427-092829_1280525034","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9879"},{"text":"%md\n","dateUpdated":"2018-06-25T08:45:35+0000","config":{"colWidth":12,"editorMode":"ace/mode/markdown","results":{},"enabled":true,"editorSetting":{"language":"markdown","editOnDblClick":true}},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1529916335335_-1347102170","id":"20180427-093020_2025344795","dateCreated":"2018-06-25T08:45:35+0000","status":"READY","errorMessage":"","progressUpdateIntervalMs":500,"$$hashKey":"object:9880"}],"name":"Instacart","id":"2DHYZZQ28","angularObjects":{"2DKTVHEQG:existing_process":[],"2DJ4SFCPD:existing_process":[],"2DJJJ8C1V:existing_process":[],"2DHBWPF6M:existing_process":[],"2DJB51UJ1:existing_process":[],"2DHQZP5Q3:existing_process":[],"2DKS7J9U9:existing_process":[],"2DHPXD7E1:existing_process":[]},"config":{"looknfeel":"default","personalizedMode":"false"},"info":{}}
back to top