c995994 | Amit Agarwal | 11 December 2015, 19:38:56 UTC | Fixed a bug in device selection enforcement. The enforcement function was file-static instead of global, so each source file got its own copy of the function and of the static variable inside it. | 11 December 2015, 19:48:04 UTC |
598dcc7 | Frank Seide | 11 December 2015, 19:31:11 UTC | removed the workaround in GetNumSamplesWithLabel() | 11 December 2015, 19:31:11 UTC |
6e8b5e7 | Mark Hillebrand | 11 December 2015, 19:18:06 UTC | Scripts/build-and-test: fix test for successful test execution (CPU and GPU targets are upper-cased in CNTK's run log) | 11 December 2015, 19:19:27 UTC |
241bf17 | Gaizka Navarro | 11 December 2015, 17:05:07 UTC | Fixed issue with working paths between tests. | 11 December 2015, 17:07:04 UTC |
4312576 | Frank Seide | 11 December 2015, 16:05:08 UTC | bug fix: TrainOneEpoch() must not call GetNumSamplesWithLabel() when GetMinibatchIntoNetwork() returns false | 11 December 2015, 16:05:08 UTC |
f0ea36c | Frank Seide | 11 December 2015, 15:58:18 UTC | (comments) | 11 December 2015, 15:58:18 UTC |
8426e81 | Gaizka Navarro | 11 December 2015, 10:56:49 UTC | Updated config files to match BS guidelines | 11 December 2015, 12:27:29 UTC |
2441425 | Gaizka Navarro | 11 December 2015, 10:15:38 UTC | Switched tests to use AN4 instead of TIMIT. | 11 December 2015, 12:27:28 UTC |
0896400 | Gaizka Navarro | 10 December 2015, 09:18:40 UTC | Commented out tests that trigger an assertion in Debug | 11 December 2015, 12:27:14 UTC |
79b20ec | Gaizka Navarro | 09 December 2015, 15:00:18 UTC | Pointed test to use environment variable for located test data | 11 December 2015, 12:27:13 UTC |
565dc49 | Gaizka Navarro | 23 November 2015, 15:28:24 UTC | Added ReaderTests project | 11 December 2015, 12:27:12 UTC |
f5b6f0e | Mark Hillebrand | 10 December 2015, 15:07:21 UTC | Scripts/build-and-test: for "--target cpu" also do CPU-only build on Linux. CPU-only build output will go to the build/cpu/{debug,release} directories. Note: test and clean-after functionality needs to be adapted in future changes. | 11 December 2015, 10:45:13 UTC |
29cd86e | Amit Agarwal | 11 December 2015, 10:20:00 UTC | Implemented sharing of node output value matrices, which hugely reduces the amount of GPU memory required for evaluating/training a CNTK model. Currently this feature is off by default and needs to be enabled through a boolean config setting named shareNodeValueMatrices. After this feature has been tested more thoroughly, it will be turned on by default | 11 December 2015, 10:20:00 UTC |
01bb4d3 | Philipp Kranen | 11 December 2015, 09:17:51 UTC | Brought back stderr in MNIST configs and reset to 30 epochs (cf. Alexey K.) | 11 December 2015, 09:17:51 UTC |
d3192b6 | Frank Seide | 11 December 2015, 00:39:59 UTC | Merge branch 'master' of https://git.codeplex.com/cntk into fseide/mblayout | 11 December 2015, 00:39:59 UTC |
5299231 | Frank Seide | 11 December 2015, 00:16:47 UTC | bug fix in HTKMLFReader: MB sequence entries were not set correctly in frame mode (using the new method) | 11 December 2015, 00:16:47 UTC |
fd0ecb5 | bmitra | 11 December 2015, 00:01:11 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes | 11 December 2015, 00:01:11 UTC |
f970c00 | Amit Agarwal | 10 December 2015, 23:40:16 UTC | Merge branch 'fseide/outputValuesMemShare' of https://git01.codeplex.com/cntk into amitaga/outputValuesMemShare Conflicts: MachineLearning/CNTKComputationNetworkLib/ComputationNetwork.h MachineLearning/CNTKComputationNetworkLib/ComputationNetworkAnalysis.cpp | 10 December 2015, 23:50:19 UTC |
8142116 | Frank Seide | 10 December 2015, 23:42:03 UTC | fixed DecimateMinibatch() to work with new AddSequence() method | 10 December 2015, 23:42:03 UTC |
f1175b9 | Frank Seide | 10 December 2015, 23:05:18 UTC | FormEvalOrder(), GetEvalOrder(), and FormRecurrentLoops() now accept a nullptr as the argument, to denote a global eval order that includes all nodes of the network. This is to support Amit's work on memshare for output values | 10 December 2015, 23:05:18 UTC |
811db95 | bmitra | 10 December 2015, 22:55:48 UTC | Remove redundant line-break. | 10 December 2015, 22:55:48 UTC |
d5df5df | Frank Seide | 10 December 2015, 22:39:12 UTC | new method MBLayout::GetAllSequences(), needed for recreating a layout after decimation | 10 December 2015, 22:39:12 UTC |
571bc7f | Chris Basoglu | 10 December 2015, 22:29:19 UTC | Account for minibatch per epoch | 10 December 2015, 22:29:19 UTC |
50a75b6 | bmitra | 10 December 2015, 22:20:06 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes | 10 December 2015, 22:20:06 UTC |
b2aad5e | bmitra | 10 December 2015, 22:19:43 UTC | Fix reshape image layout bug. | 10 December 2015, 22:19:43 UTC |
3a76d4e | Chris Basoglu | 10 December 2015, 20:43:52 UTC | Change the digits of precision on the percentage part of the minibatch log to be variable, dependent on epoch size | 10 December 2015, 20:43:52 UTC |
29ddca1 | bmitra | 10 December 2015, 16:33:22 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes | 10 December 2015, 16:33:22 UTC |
0cbb719 | bmitra | 10 December 2015, 15:44:36 UTC | SparsePCReader changes. | 10 December 2015, 15:44:36 UTC |
47ffab3 | bmitra | 10 December 2015, 15:43:57 UTC | Minor changes to reshape kernel. | 10 December 2015, 15:43:57 UTC |
f8fda57 | bmitra | 10 December 2015, 15:42:14 UTC | Removing unreachable code. | 10 December 2015, 15:42:14 UTC |
048c91b | Philipp Kranen | 10 December 2015, 12:14:14 UTC | Minor changes to configs in Demo/Speech/ based on Dong's comments | 10 December 2015, 12:14:14 UTC |
17cd7a8 | Frank Seide | 10 December 2015, 09:25:16 UTC | Merge branch 'fseide/network' of https://git.codeplex.com/cntk into fseide/network | 10 December 2015, 09:25:16 UTC |
c0d4e86 | Frank Seide | 10 December 2015, 09:24:36 UTC | towards implementing MBLayout not as dense bits but as an explicit set of sequences (which will be needed for sequence-to-sequence, and would also make it possible to fix DelayedValueBase for m_timeStep > 1): flags can now ONLY be set through AddSequence() or AddGap(), i.e. in full sequences (MBLayout::Set() is now private, and SetWithoutOr() and Mask() are commented out); HTKMLFReader and LUSequenceReader have been modified to follow the new method (also heavily commented that code); BatchSequenceReader (LMSequenceReader project) not so much: It did not set end or gap flags, so it could not be fixed (and likely did not work before this change, either). Instead, it now throws at those places; EvalReader did not maintain the needed state, so the fix will have incorrectness for DelayedValueNodes with m_timeStep > 1; RecurrentNode currently disabled for m_timeStep > 1, as that will be fixed very differently once this is complete; DecimateMinibatch() temporarily disabled. We need a new method in MBLayout to support this | 10 December 2015, 09:24:36 UTC |
7b08e39 | Frank Seide | 10 December 2015, 04:37:48 UTC | (comment) | 10 December 2015, 04:37:48 UTC |
780b9ee | Frank Seide | 10 December 2015, 01:53:26 UTC | MBLayout::AddSequence() now also remembers per-sequence distance to boundaries | 10 December 2015, 01:53:26 UTC |
6755d97 | Frank Seide | 10 December 2015, 01:29:04 UTC | added a workaround for a bug in distributed reading (returning an inconsistent MBLayout at end of epoch), which caused the recently fixed GetNumSamplesWithLabel() to return a wrong value (the old version returned the right value out of pure luck); deleted unused function ComputationNetwork::SetNodeValue(); added some code to MBLayout::AddSequence() w.r.t. moving away from the bit masks | 10 December 2015, 01:29:04 UTC |
b0c1156 | Yongqiang Wang | 10 December 2015, 01:10:09 UTC | Add support of subminibatch for sequence training. | 10 December 2015, 01:10:09 UTC |
189e161 | Frank Seide | 10 December 2015, 00:27:08 UTC | (one more check added for Jenkins) | 10 December 2015, 00:27:08 UTC |
799f9ed | Frank Seide | 10 December 2015, 00:25:25 UTC | (brought back old GetNumSamplesWithLabel() to see where it differs in Jenkins) | 10 December 2015, 00:25:25 UTC |
9ea21ab | Frank Seide | 09 December 2015, 23:59:25 UTC | clarified and enforced the contract that GetMinibatchIntoNetwork() thinks it has with GetMinibatch() regarding the meaning of the return value | 09 December 2015, 23:59:25 UTC |
d2ac9bb | Frank Seide | 09 December 2015, 21:59:02 UTC | one more bug fix | 09 December 2015, 21:59:02 UTC |
06ebc74 | Frank Seide | 09 December 2015, 21:52:59 UTC | bug fix in AddSequence(), a comparison was off by 1 | 09 December 2015, 21:52:59 UTC |
2be7f9b | Frank Seide | 09 December 2015, 20:14:58 UTC | changed MBLayout::SetAsSentence() to AddSequence(), which now also takes an utterance id; deleted MBLayout::GetNumSamplesWithLabel() as it did the same thing as DetermineActualNumSamples() | 09 December 2015, 20:14:58 UTC |
084b843 | Frank Seide | 09 December 2015, 19:06:31 UTC | (moved GetNumSamplesWithLabel() into MBLayout) | 09 December 2015, 19:06:31 UTC |
5c073d1 | Frank Seide | 09 December 2015, 17:43:35 UTC | Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network | 09 December 2015, 17:43:35 UTC |
a7cdbff | Frank Seide | 09 December 2015, 17:42:47 UTC | FrameRange can now hold/specify an additional time offset (which will allow accessing time offsets outside the actual minibatch range, to better support truncated BPTT); further cleanup/simplification of network-analysis code | 09 December 2015, 17:42:47 UTC |
f4fea38 | Wolfgang Manousek | 09 December 2015, 11:04:22 UTC | fix errors in CPUONLY build for Windows and Linux | 09 December 2015, 12:16:21 UTC |
f612afa | Philipp Kranen | 09 December 2015, 11:33:50 UTC | Minor fixes in demos (MB rescaling, tabs, image link, Speech/LSTM) | 09 December 2015, 11:33:50 UTC |
0f1238c | Amit Agarwal | 09 December 2015, 06:51:15 UTC | Removed some MPI barriers from the gradient aggregation code that were added for better IRecv/ISend perf with OpenMPI 1.8.5 but are found to cause perf issues with OpenMPI 1.10.0 | 09 December 2015, 06:51:15 UTC |
c2f0a98 | Mark Hillebrand | 09 December 2015, 06:07:08 UTC | README.md: add intro from CNTK main page | 09 December 2015, 06:07:08 UTC |
a401f59 | Chris Basoglu | 09 December 2015, 00:28:13 UTC | Merge branch 'cbasoglu/testFix' of https://git.codeplex.com/cntk into cbasoglu/testFix | 09 December 2015, 00:28:13 UTC |
99dd3ac | Chris Basoglu | 08 December 2015, 20:18:07 UTC | Fix build-and-test that got broken after Demos move | 09 December 2015, 00:27:48 UTC |
e2cb15d | Chris Basoglu | 08 December 2015, 20:18:07 UTC | Fix build-and-test that got broken after Demos move | 08 December 2015, 20:18:07 UTC |
b2d1405 | Mark Hillebrand | 08 December 2015, 20:09:09 UTC | Address CR comment for 7c553a96b61fbdcd7793b6ff0fdd4034c6c59d33 | 08 December 2015, 20:09:09 UTC |
438f72d | Wolfgang Manousek | 08 December 2015, 18:44:41 UTC | Commented out test case failing on graphics cards not configured in TCC mode | 08 December 2015, 18:52:19 UTC |
223282b | Frank Seide | 08 December 2015, 16:46:16 UTC | Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network | 08 December 2015, 16:46:16 UTC |
92dea80 | Frank Seide | 08 December 2015, 16:45:54 UTC | removed ValidateNetwork(), BuildAndValidateSubNetwork(), and BuiltAndValidatedSubNetwork() in lieu of new method VerifyIsCompiled() which merely verifies | 08 December 2015, 16:45:54 UTC |
4bc7fb9 | Alexey Reznichenko | 08 December 2015, 15:39:20 UTC | Revert accidentally pushed "Address uniform random inconsistencies" This reverts commit 643139e5e9896b08583c35c586f9e1380c1c28ee. | 08 December 2015, 15:39:20 UTC |
643139e | Alexey Reznichenko | 08 December 2015, 09:28:34 UTC | Address uniform random inconsistencies * Use mt19937 instead of ranlux64_base_01. Replace std random with boost random. * Fix floating point issues in _rescaleToRange. Flip range to [min, max). Add CUDA intrinsics and a unit test for doubles. | 08 December 2015, 09:31:57 UTC |
98b4b3d | Frank Seide | 08 December 2015, 02:12:50 UTC | (bug fix: previous check-in had a wrong type parameter which caused it to fail for precision 'double') | 08 December 2015, 02:12:50 UTC |
e6583bf | Frank Seide | 08 December 2015, 01:54:34 UTC | FormNestedNetwork() now only creates it, but one must now use the new non-lazy GetNestedNetwork() method to get it; deleted m_cacheGradientCalcOrders | 08 December 2015, 01:54:34 UTC |
4869583 | Frank Seide | 08 December 2015, 01:29:35 UTC | ProcessPassNDLScript() did one too many ValidateNetwork(), which conflicted with the new CompileNetwork() approach | 08 December 2015, 01:29:35 UTC |
98ed2c9 | Frank Seide | 08 December 2015, 01:06:20 UTC | made gcc happy | 08 December 2015, 01:06:20 UTC |
1b19264 | Frank Seide | 08 December 2015, 00:57:19 UTC | GetEvalOrder() is no longer lazy, instead must call FormEvalOrder() before (in CompileNetwork()); deleted GetGradientCalcOrder() because its result is now always the straight reverse of GetEvalOrder(). EnumerateNodes() no longer needs to know whether to go forward or backward; bug fix: CompileNetwork() now calls CollectInputAndLearnableParameters() after FormEvalOrder() since GetEvalOrder() is no longer lazy | 08 December 2015, 00:57:19 UTC |
233b452 | Frank Seide | 07 December 2015, 23:57:47 UTC | Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network | 07 December 2015, 23:57:47 UTC |
df0d890 | Frank Seide | 07 December 2015, 23:57:28 UTC | ComputationNetworkBuilder::NewNode() and related functions no longer return nullptr upon failure, but throw (their return value was not checked everywhere) | 07 December 2015, 23:57:28 UTC |
76f8114 | Frank Seide | 07 December 2015, 23:09:29 UTC | changed how the network is prepared for computation. The goal is to move away from lazy creation of the various evaluation structures: new method ComputationNetwork::CompileNetwork() which precomputes everything, and is called after a network was created or loaded (or modified in case of old MEL); renamed UpdateEvalTimeStamp() to BumpEvalTimeStamp() and ResetEvalTimeStamp() to ResetEvalTimeStamps(); removed lots of ResetEvalTimeStamps() calls from SimpleNetworkBuilder functions, since there is now a global CompileNetwork() call at the end for all network types, which does this; renamed ClearNet() to ClearNetwork(); Load() and LoadPersistableParameters(), which were 80% identical, now share a common sub-function; renamed m_recurrentInfo to m_allSEQNodes; renamed GetOuterLoopNode() to FormNestedNetwork(); bug fix in ErrorPredictionNode: lacked an UpdateFunctionMBSize() overload. This was a reason why we had to allocate matrices early on | 07 December 2015, 23:09:29 UTC |
1d51095 | Philipp Kranen | 07 December 2015, 20:31:13 UTC | Fixed license markdown for codeplex rendering | 07 December 2015, 20:31:13 UTC |
106e606 | Philipp Kranen | 07 December 2015, 20:22:26 UTC | Changed license file to markdown format | 07 December 2015, 20:22:26 UTC |
81068c4 | bmitra | 07 December 2015, 10:54:09 UTC | Removing duplicate include. | 07 December 2015, 10:54:09 UTC |
5047bd6 | bmitra | 07 December 2015, 10:49:16 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes | 07 December 2015, 10:49:16 UTC |
32ca46d | Philipp Kranen | 07 December 2015, 10:12:55 UTC | Updated baselines for text demo | 07 December 2015, 10:20:19 UTC |
949581c | bmitra | 07 December 2015, 09:16:53 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes | 07 December 2015, 09:16:53 UTC |
a1947b4 | Philipp Kranen | 07 December 2015, 08:19:27 UTC | Added demos to VS solution; polished README | 07 December 2015, 09:14:46 UTC |
9337acb | Philipp Kranen | 05 December 2015, 20:47:00 UTC | Minor changes to cntk README and renamed to README.md | 07 December 2015, 08:24:27 UTC |
f04f8c0 | Yinggong Zhao (Person Consulting) | 06 December 2015, 08:50:27 UTC | Fix NCE backprop issue during minibatch mode | 06 December 2015, 08:50:27 UTC |
3fdfa4d | bmitra | 06 December 2015, 00:41:02 UTC | Fixing merge issue. | 06 December 2015, 00:41:02 UTC |
cf5e9e9 | bmitra | 05 December 2015, 23:57:04 UTC | Fixing error message. | 05 December 2015, 23:57:04 UTC |
470923e | bmitra | 05 December 2015, 23:56:46 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes Conflicts: BrainScript/BrainScriptEvaluator.cpp MachineLearning/CNTKComputationNetworkLib/ConvolutionalNodes.h Math/Math/Matrix.h | 05 December 2015, 23:56:46 UTC |
74234a6 | bmitra | 05 December 2015, 23:50:04 UTC | Enabling ColumnElementTimesNode in BrainScript. | 05 December 2015, 23:50:04 UTC |
a576c2d | bmitra | 05 December 2015, 23:04:39 UTC | Temporarily disable special logic for 1D Convolution for GPU-Sparse and update unit tests. | 05 December 2015, 23:04:39 UTC |
f3ba2af | Frank Seide | 05 December 2015, 22:31:26 UTC | made gcc happy | 05 December 2015, 22:31:26 UTC |
6fe88b3 | Frank Seide | 05 December 2015, 22:27:28 UTC | some clean-up in SGD.cpp | 05 December 2015, 22:27:28 UTC |
194b131 | bmitra | 05 December 2015, 22:19:04 UTC | Re-enabling MatrixVectorMax test. | 05 December 2015, 22:19:04 UTC |
778b900 | Frank Seide | 05 December 2015, 22:14:44 UTC | further renaming: GradientValues -> Gradient; Output -> Value (Input(i)->Output didn't look good); | 05 December 2015, 22:14:44 UTC |
f0eed2f | Amit Agarwal | 05 December 2015, 20:31:54 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/bufferedAsyncGradientAggregation | 05 December 2015, 20:31:54 UTC |
95d9c23 | Amit | 05 December 2015, 19:19:43 UTC | Linux build fix and formatting updates to the Linux baselines for buffered async gradient aggregation test | 05 December 2015, 19:19:43 UTC |
2f7a8c1 | Philipp Kranen | 23 November 2015, 14:31:10 UTC | Initial demo sample structure and content and addressed CR comments | 05 December 2015, 19:18:18 UTC |
f531371 | Amit Agarwal | 05 December 2015, 18:56:57 UTC | Some refactoring/minor perf improvements in buffered async gradient aggregation code | 05 December 2015, 18:56:57 UTC |
c128267 | Amit Agarwal | 05 December 2015, 09:45:32 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/bufferedAsyncGradientAggregation | 05 December 2015, 09:45:32 UTC |
eb27987 | Amit Agarwal | 05 December 2015, 08:29:51 UTC | Use a separate compute stream for gradient aggregation kernels when performing buffered async gradient aggregation | 05 December 2015, 09:45:03 UTC |
20546a8 | Frank Seide | 05 December 2015, 07:58:07 UTC | tidied up ComputationNetwork.h, better grouping of methods; fixed one more broken file path | 05 December 2015, 07:58:07 UTC |
ae7da7a | Frank Seide | 05 December 2015, 07:23:12 UTC | fixed a pathname in all reader projects | 05 December 2015, 07:23:12 UTC |
ed7c943 | Frank Seide | 05 December 2015, 07:03:08 UTC | (a comment) | 05 December 2015, 07:03:08 UTC |
0f3badc | Frank Seide | 05 December 2015, 07:01:24 UTC | (a comment) | 05 December 2015, 07:01:24 UTC |
6a8cad2 | Frank Seide | 05 December 2015, 06:58:53 UTC | disabled CreateSparseLearnableParameter node, which had never been completely implemented | 05 December 2015, 06:58:53 UTC |
98b2476 | Frank Seide | 05 December 2015, 06:54:57 UTC | (a comment) | 05 December 2015, 06:54:57 UTC |
1c2f34a | Frank Seide | 05 December 2015, 06:51:20 UTC | moved PairNode to EsotericNodes.h | 05 December 2015, 06:51:20 UTC |
fc3edbe | Frank Seide | 05 December 2015, 06:45:57 UTC | renamed 'frameRange' to 'fr' | 05 December 2015, 06:45:57 UTC |
aa65ae1 | Amit Agarwal | 05 December 2015, 06:40:57 UTC | Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/bufferedAsyncGradientAggregation Conflicts: MachineLearning/CNTKSGDLib/DataReaderHelpers.h | 05 December 2015, 06:40:57 UTC |