https://github.com/Microsoft/CNTK

c995994 Fixed a bug in device selection enforcement. The enforcement function was file-static instead of global, so each source file got its own copy of the function and of the static variable inside it. 11 December 2015, 19:48:04 UTC
598dcc7 removed the workaround in GetNumSamplesWithLabel() 11 December 2015, 19:31:11 UTC
6e8b5e7 Scripts/build-and-test: fix test for successful test execution (CPU and GPU targets are upper-cased in CNTK's run log) 11 December 2015, 19:19:27 UTC
241bf17 Fixed issue with working paths between tests. 11 December 2015, 17:07:04 UTC
4312576 bug fix: TrainOneEpoch() must not call GetNumSamplesWithLabel() when GetMinibatchIntoNetwork() returns false 11 December 2015, 16:05:08 UTC
f0ea36c (comments) 11 December 2015, 15:58:18 UTC
8426e81 Updated config files to match BS guidelines 11 December 2015, 12:27:29 UTC
2441425 Switched tests to use AN4 instead of TIMIT. 11 December 2015, 12:27:28 UTC
0896400 Commented out tests that trigger an assertion in Debug 11 December 2015, 12:27:14 UTC
79b20ec Pointed test to use environment variable for located test data 11 December 2015, 12:27:13 UTC
565dc49 Added ReaderTests project 11 December 2015, 12:27:12 UTC
f5b6f0e Scripts/build-and-test: for "--target cpu" also do a CPU-only build on Linux. CPU-only build output will go to the build/cpu/{debug,release} directories. Note: test and clean-after functionality needs to be adapted in future changes. 11 December 2015, 10:45:13 UTC
29cd86e Implemented sharing of node output value matrices, which hugely reduces the amount of GPU memory required for evaluating/training a CNTK model. Currently this feature is off by default and must be enabled through a boolean config setting named shareNodeValueMatrices. After this feature has been tested more thoroughly, it will be turned on by default 11 December 2015, 10:20:00 UTC
01bb4d3 Brought back stderr in MNIST configs and reset to 30 epochs (cf Alekey K.) 11 December 2015, 09:17:51 UTC
d3192b6 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/mblayout 11 December 2015, 00:39:59 UTC
5299231 bug fix in HTKMLFReader: MB sequence entries were not set correctly in frame mode (using the new method) 11 December 2015, 00:16:47 UTC
fd0ecb5 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 11 December 2015, 00:01:11 UTC
f970c00 Merge branch 'fseide/outputValuesMemShare' of https://git01.codeplex.com/cntk into amitaga/outputValuesMemShare Conflicts: MachineLearning/CNTKComputationNetworkLib/ComputationNetwork.h MachineLearning/CNTKComputationNetworkLib/ComputationNetworkAnalysis.cpp 10 December 2015, 23:50:19 UTC
8142116 fixed DecimateMinibatch() to work with new AddSequence() method 10 December 2015, 23:42:03 UTC
f1175b9 FormEvalOrder(), GetEvalOrder(), and FormRecurrentLoops() now accept a nullptr as the argument, to denote a global eval order that includes all nodes of the network. This is to support Amit's work on memshare for output values 10 December 2015, 23:05:18 UTC
811db95 Remove redundant line-break. 10 December 2015, 22:55:48 UTC
d5df5df new method MBLayout::GetAllSequences(), needed for recreating a layout after decimation 10 December 2015, 22:39:12 UTC
571bc7f Account for minibatch per epoch 10 December 2015, 22:29:19 UTC
50a75b6 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 10 December 2015, 22:20:06 UTC
b2aad5e Fix reshape image layout bug. 10 December 2015, 22:19:43 UTC
3a76d4e Change the digits of precision on the percentage part of the minibatch log to be variable, dependent on epoch size 10 December 2015, 20:43:52 UTC
29ddca1 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 10 December 2015, 16:33:22 UTC
0cbb719 SparsePCReader changes. 10 December 2015, 15:44:36 UTC
47ffab3 Minor changes to reshape kernel. 10 December 2015, 15:43:57 UTC
f8fda57 Removing unreachable code. 10 December 2015, 15:42:14 UTC
048c91b Minor changes to configs in Demo/Speech/ based on Dong's comments 10 December 2015, 12:14:14 UTC
17cd7a8 Merge branch 'fseide/network' of https://git.codeplex.com/cntk into fseide/network 10 December 2015, 09:25:16 UTC
c0d4e86 towards implementing MBLayout not as dense bits but as an explicit set of sequences (which will be needed for sequence-to-sequence, and would also fix DelayedValueBase for m_timeStep > 1): flags can now ONLY be set through AddSequence() or AddGap(), i.e. in full sequences (MBLayout::Set() is now private, and SetWithoutOr() and Mask() are commented out); HTKMLFReader and LUSequenceReader have been modified to follow the new method (that code is also heavily commented); BatchSequenceReader (LMSequenceReader project) not so much: it did not set end or gap flags, so it could not be fixed (and likely did not work before this change, either); instead, it now throws at those places; EvalReader did not maintain the needed state, so the fix will be incorrect for DelayedValueNodes with m_timeStep > 1; RecurrentNode is currently disabled for m_timeStep > 1, as that will be fixed very differently once this is complete; DecimateMinibatch() temporarily disabled. We need a new method in MBLayout to support this 10 December 2015, 09:24:36 UTC
7b08e39 (comment) 10 December 2015, 04:37:48 UTC
780b9ee MBLayout::AddSequence() now also remembers per-sequence distance to boundaries 10 December 2015, 01:53:26 UTC
6755d97 added a workaround for a bug in distributed reading (returning an inconsistent MBLayout at end of epoch), which caused the recently fixed GetNumSamplesWithLabel() to return a wrong value (the old version returned the right value out of pure luck); deleted unused function ComputationNetwork::SetNodeValue(); added some code to MBLayout::AddSequence() w.r.t. moving away from the bit masks 10 December 2015, 01:29:04 UTC
b0c1156 Add support of subminibatch for sequence training. 10 December 2015, 01:10:09 UTC
189e161 (one more check added for Jenkins) 10 December 2015, 00:27:08 UTC
799f9ed (brought back old GetNumSamplesWithLabel() to see where it differs in Jenkins) 10 December 2015, 00:25:25 UTC
9ea21ab clarified and enforced the contract that GetMinibatchIntoNetwork() thinks it has with GetMinibatch() regarding the meaning of the return value 09 December 2015, 23:59:25 UTC
d2ac9bb one more bug fix 09 December 2015, 21:59:02 UTC
06ebc74 bug fix in AddSequence(), a comparison was off by 1 09 December 2015, 21:52:59 UTC
2be7f9b changed MBLayout::SetAsSentence() to AddSequence(), which now also takes an utterance id; deleted MBLayout::GetNumSamplesWithLabel() as it did the same thing as DetermineActualNumSamples() 09 December 2015, 20:14:58 UTC
084b843 (moved GetNumSamplesWithLabel() into MBLayout) 09 December 2015, 19:06:31 UTC
5c073d1 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network 09 December 2015, 17:43:35 UTC
a7cdbff FrameRange can now hold/specify an additional time offset (which will allow to access time offsets outside the actual minibatch range, to better support truncated BPTT); further cleanup/simplification of network-analysis code 09 December 2015, 17:42:47 UTC
f4fea38 fix errors in CPUONLY build for Windows and Linux 09 December 2015, 12:16:21 UTC
f612afa Minor fixes in demos (MB rescaling, tabs, image link, Speech/LSTM) 09 December 2015, 11:33:50 UTC
0f1238c Removed some MPI barriers from the gradient aggregation code that were added for better IRecv/ISend perf with OpenMPI 1.8.5 but are found to cause perf issues with OpenMPI 1.10.0 09 December 2015, 06:51:15 UTC
c2f0a98 README.md: add intro from CNTK main page 09 December 2015, 06:07:08 UTC
a401f59 Merge branch 'cbasoglu/testFix' of https://git.codeplex.com/cntk into cbasoglu/testFix 09 December 2015, 00:28:13 UTC
99dd3ac Fix build-and-test that got broken after Demos move 09 December 2015, 00:27:48 UTC
e2cb15d Fix build-and-test that got broken after Demos move 08 December 2015, 20:18:07 UTC
b2d1405 Address CR comment for 7c553a96b61fbdcd7793b6ff0fdd4034c6c59d33 08 December 2015, 20:09:09 UTC
438f72d Commented out a test case failing on graphics cards not configured in TCC mode 08 December 2015, 18:52:19 UTC
223282b Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network 08 December 2015, 16:46:16 UTC
92dea80 removed ValidateNetwork(), BuildAndValidateSubNetwork(), and BuiltAndValidatedSubNetwork() in favor of the new method VerifyIsCompiled(), which merely verifies that the network has been compiled 08 December 2015, 16:45:54 UTC
4bc7fb9 Revert accidentally pushed "Address uniform random inconsistencies" This reverts commit 643139e5e9896b08583c35c586f9e1380c1c28ee. 08 December 2015, 15:39:20 UTC
643139e Address uniform random inconsistencies * Use mt19937 instead of ranlux64_base_01. Replace std random with boost random. * Fix floating point issues in _rescaleToRange. Flip range to [min, max). Add CUDA intrinsics and a unit test for doubles. 08 December 2015, 09:31:57 UTC
98b4b3d (bug fix: previous check-in had a wrong type parameter which caused it to fail for precision 'double') 08 December 2015, 02:12:50 UTC
e6583bf FormNestedNetwork() now only creates it, but one must now use the new non-lazy GetNestedNetwork() method to get it; deleted m_cacheGradientCalcOrders 08 December 2015, 01:54:34 UTC
4869583 ProcessPassNDLScript() did one too many ValidateNetwork(), which conflicted with the new CompileNetwork() approach 08 December 2015, 01:29:35 UTC
98ed2c9 made gcc happy 08 December 2015, 01:06:20 UTC
1b19264 GetEvalOrder() is no longer lazy, instead must call FormEvalOrder() before (in CompileNetwork()); deleted GetGradientCalcOrder() because its result is now always the straight reverse of GetEvalOrder(). EnumerateNodes() no longer needs to know whether to go forward or backward; bug fix: CompileNetwork() now calls CollectInputAndLearnableParameters() after FormEvalOrder() since GetEvalOrder() is no longer lazy 08 December 2015, 00:57:19 UTC
233b452 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network 07 December 2015, 23:57:47 UTC
df0d890 ComputationNetworkBuilder::NewNode() and related functions no longer return nullptr upon failure, but throw (their return value was not checked everywhere) 07 December 2015, 23:57:28 UTC
76f8114 changed how the network is prepared for computation. The goal is to move away from lazy creation of the various evaluation structures: new method ComputationNetwork::CompileNetwork() which precomputes everything, and is called after a network was created or loaded (or modified in case of old MEL); renamed UpdateEvalTimeStamp() to BumpEvalTimeStamp() and ResetEvalTimeStamp() to ResetEvalTimeStamps(); removed lots of ResetEvalTimeStamps() calls from SimpleNetworkBuilder functions, since there is now a global CompileNetwork() call at the end for all network types, which does this; renamed ClearNet() to ClearNetwork(); Load() and LoadPersistableParameters(), which were 80% identical, now share a common sub-function; renamed m_recurrentInfo to m_allSEQNodes; renamed GetOuterLoopNode() to FormNestedNetwork(); bug fix in ErrorPredictionNode: it lacked an UpdateFunctionMBSize() overload. This was a reason why we had to allocate matrices early on 07 December 2015, 23:09:29 UTC
1d51095 Fixed license markdown for codeplex rendering 07 December 2015, 20:31:13 UTC
106e606 Changed license file to markdown format 07 December 2015, 20:22:26 UTC
81068c4 Removing duplicate include. 07 December 2015, 10:54:09 UTC
5047bd6 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 07 December 2015, 10:49:16 UTC
32ca46d Updated baselines for text demo 07 December 2015, 10:20:19 UTC
949581c Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 07 December 2015, 09:16:53 UTC
a1947b4 Added demos to VS solution. Polished README 07 December 2015, 09:14:46 UTC
9337acb Minor changes to cntk README and renamed to README.md 07 December 2015, 08:24:27 UTC
f04f8c0 Fix NCE backprop issue during minibatch mode 06 December 2015, 08:50:27 UTC
3fdfa4d Fixing merge issue. 06 December 2015, 00:41:02 UTC
cf5e9e9 Fixing error message. 05 December 2015, 23:57:04 UTC
470923e Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes Conflicts: BrainScript/BrainScriptEvaluator.cpp MachineLearning/CNTKComputationNetworkLib/ConvolutionalNodes.h Math/Math/Matrix.h 05 December 2015, 23:56:46 UTC
74234a6 Enabling ColumnElementTimesNode in BrainScript. 05 December 2015, 23:50:04 UTC
a576c2d Temporarily disable special logic for 1D Convolution for GPU-Sparse and update unit tests. 05 December 2015, 23:04:39 UTC
f3ba2af made gcc happy 05 December 2015, 22:31:26 UTC
6fe88b3 some clean-up in SGD.cpp 05 December 2015, 22:27:28 UTC
194b131 Re-enabling MatrixVectorMax test. 05 December 2015, 22:19:04 UTC
778b900 further renaming: GradientValues -> Gradient; Output -> Value (Input(i)->Output didn't look good); 05 December 2015, 22:14:44 UTC
f0eed2f Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/bufferedAsyncGradientAggregation 05 December 2015, 20:31:54 UTC
95d9c23 Linux build fix and formatting updates to the Linux baselines for buffered async gradient aggregation test 05 December 2015, 19:19:43 UTC
2f7a8c1 Initial demo sample structure and content; addressed CR comments 05 December 2015, 19:18:18 UTC
f531371 Some refactoring/minor perf improvements in buffered async gradient aggregation code 05 December 2015, 18:56:57 UTC
c128267 Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/bufferedAsyncGradientAggregation 05 December 2015, 09:45:32 UTC
eb27987 Use a separate compute stream for gradient aggregation kernels when performing buffered async gradient aggregation 05 December 2015, 09:45:03 UTC
20546a8 tidied up ComputationNetwork.h, better grouping of methods; fixed one more broken file path 05 December 2015, 07:58:07 UTC
ae7da7a fixed a pathname in all reader projects 05 December 2015, 07:23:12 UTC
ed7c943 (a comment) 05 December 2015, 07:03:08 UTC
0f3badc (a comment) 05 December 2015, 07:01:24 UTC
6a8cad2 disabled CreateSparseLearnableParameter node, which had never been completely implemented 05 December 2015, 06:58:53 UTC
98b2476 (a comment) 05 December 2015, 06:54:57 UTC
1c2f34a moved PairNode to EsotericNodes.h 05 December 2015, 06:51:20 UTC
fc3edbe renamed 'frameRange' to 'fr' 05 December 2015, 06:45:57 UTC
aa65ae1 Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/bufferedAsyncGradientAggregation Conflicts: MachineLearning/CNTKSGDLib/DataReaderHelpers.h 05 December 2015, 06:40:57 UTC