https://github.com/Microsoft/CNTK

Revision  Message  Commit Date
334d645 cudnn: added bias forward/backprop implementation for default engine. 12 December 2015, 17:59:19 UTC
3023f11 cudnn: enabled build on Linux with cuDNN. 12 December 2015, 17:59:09 UTC
895c10a cudnn: merge with master, fix Linux compile errors. 12 December 2015, 17:58:58 UTC
6c7c617 cudnn: implemented batch norm backprop, updated samples, added VGG_E net. 12 December 2015, 17:58:48 UTC
39a8a8c cudnn: added batch norm forward implementation. 12 December 2015, 17:58:38 UTC
8430f88 cudnn: added BatchNormalizationNode. 12 December 2015, 17:58:27 UTC
5698333 cudnn: bug fixes, samples update. 12 December 2015, 17:58:17 UTC
80ebfb4 cudnn: added bias forward/backward. 12 December 2015, 17:58:06 UTC
7300306 cudnn: refactored to use NCHW format. 12 December 2015, 17:57:56 UTC
afae131 cudnn: completed pooling nodes implementation, fixed bugs, added unit tests. 12 December 2015, 17:57:46 UTC
f1eb5d8 cudnn: added pooling engine, unit tests and refactoring. 12 December 2015, 17:57:35 UTC
c2736bf cudnn: added padding support, clean up and refactoring. 12 December 2015, 17:57:25 UTC
91e3378 cudnn: added auto-tuning. 12 December 2015, 17:57:14 UTC
5db082e cudnn: added backprop data/filter implementation, unit tests. 12 December 2015, 17:57:04 UTC
da822fd cudnn: added filter format conversion and backprop. 12 December 2015, 17:56:54 UTC
3f7a3ac cudnn: minor changes to tensor/filter formats. 12 December 2015, 17:56:43 UTC
57eea8c cudnn: add forward implementation, unit tests. 12 December 2015, 17:56:33 UTC
1e9b061 cudnn: add filter descriptor, refactor 12 December 2015, 17:56:23 UTC
106dd17 cudnn: Fixed linker issue. 12 December 2015, 17:56:12 UTC
f241eb2 Add missing files. 12 December 2015, 17:56:02 UTC
aee952e Add cuDNN to VS project. 12 December 2015, 17:55:52 UTC
f703cc2 cuDNN: introduce tensor, convolution options and some ConvNode refactoring. 12 December 2015, 17:55:41 UTC
ede0ff0 MBLayout::Get(t) also folded into Get(FrameRange). Only operator== and a legacy specialized operation stop us from removing the old flags 12 December 2015, 09:00:23 UTC
4a92bd7 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/mblayout 12 December 2015, 08:16:12 UTC
37a827f made Release build happy 12 December 2015, 08:15:47 UTC
f55e6e4 adding an MPI init test in case MPI was initialized repeatedly 12 December 2015, 07:33:44 UTC
8787174 Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/outputValuesMemShare 12 December 2015, 02:11:53 UTC
439b8f1 Removed an unneeded change accidentally added 12 December 2015, 02:06:01 UTC
270077e MBLayout::Get(s,t) now implemented by calling Get(FrameRange) to allow for time offsets 12 December 2015, 00:53:14 UTC
9b41d1b implemented MBLayout::Get() to use the new structure and validate against the old 12 December 2015, 00:38:32 UTC
a8c2e1a Fixed linux build issues 11 December 2015, 22:17:14 UTC
8f0c8c7 updated all frame-mode readers to initialize MBLayout following the new AddSequence() style 11 December 2015, 22:11:28 UTC
60c32fb Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/outputValuesMemShare 11 December 2015, 22:04:28 UTC
ff6444b Merge branch 'fseide/getmbfix' of https://git.codeplex.com/cntk into fseide/mblayout 11 December 2015, 21:47:47 UTC
f785ff4 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/mblayout 11 December 2015, 21:43:48 UTC
f729a8e MBLayout now validates that AddSequence() was called for all samples, and also keeps track of a gap count to accelerate HasGaps(); new method MBLayout::InitAsFrameMode() for easy updating of frame-mode readers 11 December 2015, 21:41:34 UTC
f366d2e Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/fixGPUDeviceSelection 11 December 2015, 21:11:53 UTC
3f6d50d Merge branch 'master' of https://git.codeplex.com/cntk into fseide/getmbfix 11 December 2015, 20:54:46 UTC
7a038d9 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/mblayout 11 December 2015, 20:53:54 UTC
fc348af made MBLayout::IsAllNode() private 11 December 2015, 20:53:41 UTC
c995994 Fixed a bug in device selection enforcement. The enforcement function was file static instead of global due to which each source file was getting its own copy of the function and the static variable inside it. 11 December 2015, 19:48:04 UTC
598dcc7 removed the workaround in GetNumSamplesWithLabel() 11 December 2015, 19:31:11 UTC
6e8b5e7 Scripts/build-and-test: fix test for successful test execution (CPU and GPU targets are upper-cased in CNTK's run log) 11 December 2015, 19:19:27 UTC
241bf17 Fixed issue with working paths between tests. 11 December 2015, 17:07:04 UTC
4312576 bug fix: TrainOneEpoch() must not call GetNumSamplesWithLabel() when GetMinibatchIntoNetwork() returns false 11 December 2015, 16:05:08 UTC
f0ea36c (comments) 11 December 2015, 15:58:18 UTC
8426e81 Updated config files to match BS guidelines 11 December 2015, 12:27:29 UTC
2441425 Switched tests to use AN4 instead of TIMIT. 11 December 2015, 12:27:28 UTC
0896400 Commented out tests that trigger an assertion in Debug 11 December 2015, 12:27:14 UTC
79b20ec Pointed test to use environment variable to locate test data 11 December 2015, 12:27:13 UTC
565dc49 Added ReaderTests project 11 December 2015, 12:27:12 UTC
f5b6f0e Scripts/build-and-test: for "--target cpu" also do a CPU-only build on Linux. CPU-only build output will go to the build/cpu/{debug,release} directories. Note: test and clean-after functionality needs to be adapted in future changes. 11 December 2015, 10:45:13 UTC
29cd86e Implemented sharing of node output value matrices, which hugely reduces the amount of GPU memory required for evaluating/training a CNTK model. Currently this feature is off by default and needs to be enabled through a boolean config setting named shareNodeValueMatrices. After this feature has been tested more thoroughly, it will be turned on by default 11 December 2015, 10:20:00 UTC
01bb4d3 Brought back stderr in MNIST configs and reset to 30 epochs (cf Alekey K.) 11 December 2015, 09:17:51 UTC
d3192b6 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/mblayout 11 December 2015, 00:39:59 UTC
5299231 bug fix in HTKMLFReader: MB sequence entries were not set correctly in frame mode (using the new method) 11 December 2015, 00:16:47 UTC
fd0ecb5 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 11 December 2015, 00:01:11 UTC
f970c00 Merge branch 'fseide/outputValuesMemShare' of https://git01.codeplex.com/cntk into amitaga/outputValuesMemShare Conflicts: MachineLearning/CNTKComputationNetworkLib/ComputationNetwork.h MachineLearning/CNTKComputationNetworkLib/ComputationNetworkAnalysis.cpp 10 December 2015, 23:50:19 UTC
8142116 fixed DecimateMinibatch() to work with new AddSequence() method 10 December 2015, 23:42:03 UTC
f1175b9 FormEvalOrder(), GetEvalOrder(), and FormRecurrentLoops() now accept a nullptr as the argument, to denote a global eval order that includes all nodes of the network. This is to support Amit's work on memshare for output values 10 December 2015, 23:05:18 UTC
811db95 Remove redundant line-break. 10 December 2015, 22:55:48 UTC
d5df5df new method MBLayout::GetAllSequences(), needed for recreating a layout after decimation 10 December 2015, 22:39:12 UTC
571bc7f Account for minibatch per epoch 10 December 2015, 22:29:19 UTC
50a75b6 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 10 December 2015, 22:20:06 UTC
b2aad5e Fix reshape image layout bug. 10 December 2015, 22:19:43 UTC
3a76d4e Change the digits of precision on the percentage part of the minibatch log to be variable, dependent on epoch size 10 December 2015, 20:43:52 UTC
29ddca1 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 10 December 2015, 16:33:22 UTC
0cbb719 SparsePCReader changes. 10 December 2015, 15:44:36 UTC
47ffab3 Minor changes to reshape kernel. 10 December 2015, 15:43:57 UTC
f8fda57 Removing unreachable code. 10 December 2015, 15:42:14 UTC
048c91b Minor changes to configs in Demo/Speech/ based on Dong's comments 10 December 2015, 12:14:14 UTC
17cd7a8 Merge branch 'fseide/network' of https://git.codeplex.com/cntk into fseide/network 10 December 2015, 09:25:16 UTC
c0d4e86 towards implementing MBLayout not as dense bits but as an explicit set of sequences (which will be needed for sequence-to-sequence, and would also fix DelayedValueBase for m_timeStep > 1): flags can now ONLY be set through AddSequence() or AddGap(), i.e. in full sequences (MBLayout::Set() is now private, and SetWithoutOr() and Mask() are commented out); HTKMLFReader and LUSequenceReader have been modified to follow the new method (also heavily commented that code); BatchSequenceReader (LMSequenceReader project) not so much: It did not set end or gap flags, so it could not be fixed (and likely did not work before this change, either). Instead, it now throws at those places; EvalReader did not maintain the needed state, so the fix will have incorrectness for DelayedValueNodes with m_timeStep > 1; RecurrentNode currently disabled for m_timeStep > 1, as that will be fixed very differently once this is complete; DecimateMinibatch() temporarily disabled. We need a new method in MBLayout to support this 10 December 2015, 09:24:36 UTC
7b08e39 (comment) 10 December 2015, 04:37:48 UTC
780b9ee MBLayout::AddSequence() now also remembers per-sequence distance to boundaries 10 December 2015, 01:53:26 UTC
6755d97 added a workaround for a bug in distributed reading (returning an inconsistent MBLayout at end of epoch), which caused the recently fixed GetNumSamplesWithLabel() to return a wrong value (the old version returned the right value out of pure luck); deleted unused function ComputationNetwork::SetNodeValue(); added some code to MBLayout::AddSequence() w.r.t. moving away from the bit masks 10 December 2015, 01:29:04 UTC
189e161 (one more check added for Jenkins) 10 December 2015, 00:27:08 UTC
799f9ed (brought back old GetNumSamplesWithLabel() to see where it differs in Jenkins) 10 December 2015, 00:25:25 UTC
9ea21ab clarified and enforced the contract that GetMinibatchIntoNetwork() thinks it has with GetMinibatch() regarding the meaning of the return value 09 December 2015, 23:59:25 UTC
d2ac9bb one more bug fix 09 December 2015, 21:59:02 UTC
06ebc74 bug fix in AddSequence(), a comparison was off by 1 09 December 2015, 21:52:59 UTC
2be7f9b changed MBLayout::SetAsSentence() to AddSequence(), which now also takes an utterance id; deleted MBLayout::GetNumSamplesWithLabel() as it did the same thing as DetermineActualNumSamples() 09 December 2015, 20:14:58 UTC
084b843 (moved GetNumSamplesWithLabel() into MBLayout) 09 December 2015, 19:06:31 UTC
5c073d1 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network 09 December 2015, 17:43:35 UTC
a7cdbff FrameRange can now hold/specify an additional time offset (which will allow to access time offsets outside the actual minibatch range, to better support truncated BPTT); further cleanup/simplification of network-analysis code 09 December 2015, 17:42:47 UTC
f4fea38 fix errors in CPUONLY build for Windows and Linux 09 December 2015, 12:16:21 UTC
f612afa Minor fixes in demos (MB rescaling, tabs, image link, Speech/LSTM) 09 December 2015, 11:33:50 UTC
0f1238c Removed some MPI barriers from the gradient aggregation code that were added for better IRecv/ISend perf with OpenMPI 1.8.5 but are found to cause perf issues with OpenMPI 1.10.0 09 December 2015, 06:51:15 UTC
c2f0a98 README.md: add intro from CNTK main page 09 December 2015, 06:07:08 UTC
a401f59 Merge branch 'cbasoglu/testFix' of https://git.codeplex.com/cntk into cbasoglu/testFix 09 December 2015, 00:28:13 UTC
99dd3ac Fix build-and-test that got broken after Demos move 09 December 2015, 00:27:48 UTC
e2cb15d Fix build-and-test that got broken after Demos move 08 December 2015, 20:18:07 UTC
b2d1405 Address CR comment for 7c553a96b61fbdcd7793b6ff0fdd4034c6c59d33 08 December 2015, 20:09:09 UTC
438f72d Commented out test case failing on graphics cards not configured in TCC mode 08 December 2015, 18:52:19 UTC
223282b Merge branch 'master' of https://git.codeplex.com/cntk into fseide/network 08 December 2015, 16:46:16 UTC
92dea80 removed ValidateNetwork(), BuildAndValidateSubNetwork(), and BuiltAndValidatedSubNetwork() in favor of the new method VerifyIsCompiled(), which merely verifies 08 December 2015, 16:45:54 UTC
4bc7fb9 Revert accidentally pushed "Address uniform random inconsistencies" This reverts commit 643139e5e9896b08583c35c586f9e1380c1c28ee. 08 December 2015, 15:39:20 UTC
643139e Address uniform random inconsistencies * Use mt19937 instead of ranlux64_base_01. Replace std random with boost random. * Fix floating point issues in _rescaleToRange. Flip range to [min, max). Add CUDA intrinsics and a unit test for doubles. 08 December 2015, 09:31:57 UTC
98b4b3d (bug fix: previous check-in had a wrong type parameter which caused it to fail for precision 'double') 08 December 2015, 02:12:50 UTC
e6583bf FormNestedNetwork() now only creates it, but one must now use the new non-lazy GetNestedNetwork() method to get it; deleted m_cacheGradientCalcOrders 08 December 2015, 01:54:34 UTC