https://github.com/Microsoft/CNTK

Revision Message Commit Date
59d382f License change 18 January 2016, 08:31:47 UTC
69ccd3f Remove debug info generation option for CUDA compilation in debug flavor builds and also enable fast-math optimizations. These changes were made to eliminate differences in GPU results for the E2E tests between debug and release flavors. Setting the environment variable CNTK_CUDA_DEVICE_DEBUGINFO=1 will enable debug info generation. The baselines for all E2E tests have also been updated in accordance with this change 24 October 2015, 21:34:01 UTC
204d4be Added a newline to a message whose absence was causing test comparisons against baselines to fail 24 October 2015, 04:36:05 UTC
978dc4f Fix for make_unique in GCC. 22 October 2015, 05:37:29 UTC
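The `make_unique` fix above is presumably the classic pre-C++14 workaround: GCC's libstdc++ did not ship `std::make_unique` before C++14, so projects defined their own. A minimal sketch (the name `make_unique_compat` is hypothetical, not CNTK's actual helper):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Hypothetical backport sketch: pre-C++14 GCC lacks std::make_unique,
// so a project-local equivalent forwards its arguments to new T(...).
template <typename T, typename... Args>
std::unique_ptr<T> make_unique_compat(Args&&... args)
{
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```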
4ba13c2 Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/removeGPUMatrixDefaultDeviceId 22 October 2015, 02:01:56 UTC
abc12c4 Improving the DumpNode command. Adding a nodeNameRegex option so that the user can specify a group of nodes (determined by the regex) to dump. 22 October 2015, 01:36:48 UTC
8d68e33 Bug fix: Fixed an edge case in gradient aggregation during parallel training, when gradient matrices are unsized due to a rank not having processed any samples 22 October 2015, 01:08:31 UTC
6a635f1 Fixed a reader bug in the handling of no data being available from the source 22 October 2015, 00:42:15 UTC
e72b4fd Bug fix: The guard for filling lattice was checking for presence of lattice on an updated mbiter instead of the previous one from which the utterance was actually filled 21 October 2015, 22:18:11 UTC
7979372 Optimized copy performance between CNTK matrices and SSE matrices in sequence training gamma calculation code - improves DNN sequence training performance further by 10% 21 October 2015, 19:11:54 UTC
d85c26e Bug fix: The sequence training code for the 1-parallel-utterance case was not populating the logLLs on the CPU, which is required in the forwardbackward computation 21 October 2015, 18:03:56 UTC
68c1fe5 Another performance fix for DNN sequence training 21 October 2015, 07:35:12 UTC
507922b Performance improvement for DNN sequence training 21 October 2015, 01:41:07 UTC
f9418c3 Added an MPI barrier after the training epoch loop to make sure model writing is finished before proceeding to the next command 20 October 2015, 22:38:22 UTC
265f9d1 Fix minibatch progress percentage 20 October 2015, 21:42:24 UTC
7b4b257 Moved NDL/MEL config files for the DPT speech test from the data directory to the test directory where the main config already exists 19 October 2015, 22:15:02 UTC
5723c5c Fix optional return value 19 October 2015, 19:04:50 UTC
17951ea Fix return value 19 October 2015, 18:41:14 UTC
4832e65 Fix windows conversion warning 19 October 2015, 18:28:03 UTC
bf5af27 Add progress measurement features 19 October 2015, 17:58:35 UTC
2c72bc3 Allow users to specify a PrefixPathInSCP option, so there is no need to revise the SCP file after moving it 18 October 2015, 22:57:39 UTC
4777f5f Rename SequenceTraining project to CNTKSequenceTrainingLib to be consistent with other libs 18 October 2015, 19:40:43 UTC
180fae5 Removed the no longer needed MANAGEDEXTERN device option 18 October 2015, 04:53:51 UTC
794911d Removed default parameter value of 0 for deviceId in GPUMatrix. This is error prone and may accidentally lead to a mismatch in deviceIds for different matrices. The deviceId for a GPUMatrix should always be explicitly specified 18 October 2015, 02:10:02 UTC
4731a74 Fixed a bug in sequence training code when copying of data between CNTK and SSE aligned matrices - the code previously was not accounting for alignment padding in the SSE matrix 16 October 2015, 22:02:42 UTC
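The alignment bug above is a common one when mixing plain dense matrices with SIMD-aligned storage: each column of an SSE matrix is typically padded to a multiple of the vector width, so a copy must use the padded stride rather than the logical row count. A hypothetical illustration (not CNTK's actual matrix code; names and the column-major/4-float assumptions are mine):

```cpp
#include <cassert>
#include <cstring>

// Round the logical row count up to a multiple of the SIMD width
// (4 floats = 16 bytes for SSE), giving the padded per-column stride.
static size_t PaddedStride(size_t rows, size_t simdWidth = 4)
{
    return ((rows + simdWidth - 1) / simdWidth) * simdWidth;
}

// Copy a dense column-major rows x cols matrix into a buffer whose
// columns are padded; using `rows` as the destination stride here
// would interleave columns incorrectly -- the padding must be honored.
void CopyToAligned(const float* src, float* dst, size_t rows, size_t cols)
{
    const size_t stride = PaddedStride(rows);
    for (size_t c = 0; c < cols; ++c)
        std::memcpy(dst + c * stride, src + c * rows, rows * sizeof(float));
}
```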
03358d9 Disable legacy usage. 16 October 2015, 18:32:20 UTC
96cb4b9 Temporarily disabling the Release-GPU configuration for the Speech/DNN/DiscriminativePreTraining test because of a known bug causing large variance between Release and Debug configurations for this test. It will be re-enabled after the bug has been addressed. This only affects the nightly suite and keeps the BVT checkin suite unchanged 16 October 2015, 00:51:23 UTC
3b59002 Added run-test-common EOL setting to .gitattributes to make cygwin happy 15 October 2015, 23:16:46 UTC
08119b8 Added command line spew in the --dry-run option for TestDriver and also refactored common code across run-test scripts for different tests into a run-test-common script 15 October 2015, 22:10:01 UTC
547015a bug fix: RowStackNode should call AssignToRowSliceValuesOf() instead of AssignRowSliceValuesOf() (see the difference?) 15 October 2015, 21:04:50 UTC
601b43e merged GPUMatrix.cu 15 October 2015, 20:26:10 UTC
c0e3c41 updated DNN test case to narrower tolerances again 15 October 2015, 20:08:49 UTC
124d7f3 CNTK now includes its own EXE path in the command line it logs; added code, currently #if-0'ed out, to set the random seed correctly (but slowly) for GPU 15 October 2015, 20:00:51 UTC
2fae709 Fix for ImageReader and MBLayout. 15 October 2015, 18:33:19 UTC
3a6fc85 disabled initOnCPU again 15 October 2015, 04:06:39 UTC
dd41e50 increased per-MB tolerance even more 15 October 2015, 03:30:40 UTC
ba8ad50 changed DPT tolerances to 5%. Even between the baselines the first minibatch already differs by 3.2% 15 October 2015, 03:03:25 UTC
2ebc529 set tolerance of DPT test to 1% until we figured out why Debug and Release are so (relatively) different. 15 October 2015, 02:34:22 UTC
e0b17e5 made gcc happy 15 October 2015, 01:30:47 UTC
76d9539 bug fix: MeanNode and InvStdDevNode must mask gaps in multi-sequence input to zero before reducing over them; revived MBLayout::DetermineActualNumSamples() 15 October 2015, 01:18:24 UTC
146140e merged with fseide/winlstm; fixed a few error throws to XXXError() calls; moved one Validate() dimension check under isFinal condition 14 October 2015, 22:16:52 UTC
a4ed652 changed Parameter initialization in ndl\macros.txt (used by DNN DPT test) to do randomization on the CPU, to avoid differences between configurations; added a comment to document the unexpected behavior of Parse() and what the previous fix works around 14 October 2015, 22:02:58 UTC
8127874 added a bad hack to Parse() to stop it from nuking leading . and .. in pathnames--aargh; added Speech\DiscriminativePreTraining cmd line to README.txt for easy debugging 14 October 2015, 21:31:06 UTC
f829b4e added the new test files to the Solution 14 October 2015, 20:52:40 UTC
fc9897b Merge fix. 14 October 2015, 19:30:31 UTC
a0e60f6 Correct handling of OpenCV path. 14 October 2015, 19:30:21 UTC
bb62af7 ImageReader fixes for Linux build. 14 October 2015, 19:30:11 UTC
085c29f Add CUB to Linux build. 14 October 2015, 19:30:01 UTC
78d2e0b Add CUB path. 14 October 2015, 19:29:51 UTC
64fec93 Enabled check for CUB and conditional build for OpenCV. 14 October 2015, 19:29:41 UTC
3a6abed Merge fixes. 14 October 2015, 19:29:31 UTC
edab352 Merge fixes. 14 October 2015, 19:29:21 UTC
797c727 Fix merge issues. 14 October 2015, 19:29:11 UTC
a5acf9d Refactor to use conc_stack. 14 October 2015, 19:29:01 UTC
98324b8 Add unit tests for TopK-related Matrix changes. 14 October 2015, 19:28:50 UTC
e0ee2b3 Fix for NoGPU. 14 October 2015, 19:28:40 UTC
3129a35 Minor changes in ImageReader and TopK eval. 14 October 2015, 19:28:30 UTC
7b751bf Add TopK error evaluation 14 October 2015, 19:28:20 UTC
fa4383e Add TopK error evaluation 14 October 2015, 19:28:10 UTC
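Top-k error evaluation, as added above, counts a sample as an error only when the true label is not among the k highest-scoring classes (k = 1 reduces to ordinary classification error). A hypothetical per-sample sketch, not the actual CNTK evaluation node:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A sample is a top-k error iff at least k classes score strictly
// higher than the true label's score, i.e. the label is outside the top k.
bool IsTopKError(const std::vector<float>& scores, size_t label, size_t k)
{
    size_t higher = std::count_if(scores.begin(), scores.end(),
        [&](float s) { return s > scores[label]; });
    return higher >= k;
}
```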
35d4a54 Minor changes 14 October 2015, 19:28:00 UTC
6cd863e Refactoring to use transforms 14 October 2015, 19:27:50 UTC
ecdd9ae Add crop transform 14 October 2015, 19:27:39 UTC
9aba364 Add ImageReader 14 October 2015, 19:27:29 UTC
264f3c7 further simplified RowStackNode, e.g. we actually do not need m_startRowIndices[num children] 14 October 2015, 15:22:17 UTC
19e448d RowStackNode() now uses Matrix::AssignRowSliceValuesOf() instead of AssignRowStackValuesOf() because that allows using ValueSlice() like all other nodes. The previous method passed the column indices directly, but it is no longer the nodes' concern that they operate on column slices (and this caused a bug for Abdo's config after eliminating RowStackNode::EvaluateThisNodeMap()); also #if-0'ed out AssignRowStackValuesOf() from all matrix libs as it is both very complex and no longer used; removed RowStackNode::m_inputMatrices[] since it was redundant 14 October 2015, 15:03:48 UTC
0c7eb75 added a workaround for Check_t() (which will be fixed properly after memshare merge) 14 October 2015, 01:09:34 UTC
b0b4278 added missing DiagonalToDense() stub to NoGPU.cpp; parallelforwardbackward.cpp now includes BestGpu.h which prevents it from referencing CNTKMathCUDA.lib in CPUONLY builds 13 October 2015, 23:16:16 UTC
91f85af Merge branch 'master' of https://git.codeplex.com/cntk into fseide/winlstm 13 October 2015, 21:59:38 UTC
864d1c1 removed all calls to EvaluateThisNodeMap(), all are now using the FrameRange code branch 13 October 2015, 21:58:16 UTC
f904d91 Removed testing for EvalErr in each mini-batch - too flaky 13 October 2015, 21:29:47 UTC
f02cf9c Loosen the tolerance of the Discriminative pretraining E2E test 13 October 2015, 17:23:16 UTC
450b16b Fixed a bug where labelIDs assignment was incorrectly being done unconditionally, though it only applies when lattices exist 13 October 2015, 05:12:21 UTC
3a90db9 Merge branch 'master' of https://git.codeplex.com/cntk into amitaga/newSpeechTests 13 October 2015, 03:59:24 UTC
57cd8b8 Added a more comprehensive Speech DNN training test that uses discriminative pre-training, NDL and MEL 13 October 2015, 01:59:23 UTC
5d13127 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/winlstm 13 October 2015, 01:36:49 UTC
d4e5455 Fixed linux build 13 October 2015, 01:26:32 UTC
d80046b fixed a refactoring error that caused variable shadowing in ComputeGradient() which broke the last commit 13 October 2015, 01:15:45 UTC
5dfda1f bug fixes in Debug mode/NaN checks--now resets gaps to 0 before computing a gradient. This is a stop-gap, the correct encapsulation is to clear gradient gap columns wherever they are *used* instead of where they are *generated*; If the final model exists, SGD now prints the path to log; ValidateSubNetwork() now tracks changes in m_needsGradient flag 12 October 2015, 21:57:33 UTC
c9b4987 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/winlstm 11 October 2015, 05:02:02 UTC
0da6aec split m_needGradient into m_parameterUpdateRequired (node says whether it is to be updated) and m_needsGradient (network says whether this node or any of its children require a gradient to be passed in and/or passed through). m_needsGradient is propagated in Validate() (also in its old init location, which is wrong); added noexcept to ConfigValuePtr constructor and assignment 11 October 2015, 04:56:25 UTC
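The flag split above separates a node-local property from a derived network property: `m_parameterUpdateRequired` is set per node, while `m_needsGradient` must be propagated bottom-up during validation (a node needs a gradient if it is itself updated or any child needs one). A minimal sketch of that propagation, with simplified names and structure, not the actual ComputationNode code:

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Node
{
    bool parameterUpdateRequired = false;  // node says: update me
    bool needsGradient = false;            // derived: gradient must reach/pass through
    std::vector<std::shared_ptr<Node>> children;

    // Bottom-up propagation as done in Validate(): a node needs a gradient
    // if it is updated itself or any of its children needs one.
    void Validate()
    {
        needsGradient = parameterUpdateRequired;
        for (auto& c : children)
        {
            c->Validate();
            needsGradient = needsGradient || c->needsGradient;
        }
    }
};
```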
acc741d Fixed Linux warnings 11 October 2015, 01:58:55 UTC
ae528fb Fixed linux build 11 October 2015, 01:45:45 UTC
b8adcb5 Replaced all bare throw calls with Centralized exception functions that print the call stack 11 October 2015, 01:38:59 UTC
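Centralizing throws as described above means every failure site funnels through one helper that can print the call stack before throwing, instead of scattering bare `throw` statements. A hypothetical sketch of the pattern (the stack-walking itself is platform-specific and elided):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Centralized throw helper: print diagnostics once, here, then throw.
// Marked [[noreturn]] so callers' control-flow analysis knows it never returns.
[[noreturn]] void RuntimeError(const std::string& msg)
{
    // PrintCallStack();  // platform-specific stack walk omitted in this sketch
    throw std::runtime_error(msg);
}
```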
5998260 Readded lost latticefunctionskernels.h. What is wrong with VS?? 11 October 2015, 01:29:41 UTC
d4ca7b2 removed an extern definition of fileno(), which somehow seems to no longer be needed for Linux (probably some other change brought in a missing header) 11 October 2015, 01:23:44 UTC
8c6ca65 added _wchdir() emulation for Linux to Platform.h 11 October 2015, 01:18:59 UTC
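Emulating Windows' `_wchdir()` on Linux amounts to narrowing the wide-character path and forwarding to POSIX `chdir()`. A hypothetical sketch (ASCII-only narrowing for brevity; a real shim would convert via the locale or UTF-8):

```cpp
#include <cassert>
#include <string>
#include <unistd.h>

// Emulated _wchdir(): narrow the wide path (assuming ASCII here) and
// delegate to chdir(); returns 0 on success, -1 on failure, like the original.
int wchdir_emulated(const wchar_t* path)
{
    std::wstring ws(path);
    std::string narrow(ws.begin(), ws.end());  // naive narrowing sketch
    return chdir(narrow.c_str());
}
```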
704faf3 added NaN checks (commented out) and NaN gap-blasting to gradient computation; added NaN check to SGD right before UpdateWeights() (_DEBUG only) 11 October 2015, 01:14:26 UTC
f4bcfc5 reapplied the fix to atomicLogAdd() where the result of atomicCAS() cannot be compared as floats, but must be compared as bit patterns, as to make it work for NaNs 11 October 2015, 00:45:00 UTC
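The atomicCAS fix above hinges on a float subtlety: `NaN != NaN`, so a compare-and-swap loop that checks the returned old value *as a float* spins forever once a NaN lands in the slot; comparing the raw bit patterns terminates correctly. The CUDA loop itself isn't reproduced here, but the bit-comparison idea can be illustrated in plain C++:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Compare two floats by bit pattern rather than by value. Unlike
// operator==, this is reflexive even for NaN, which is what a CAS
// retry loop needs to detect "the word did not change".
bool SameFloatBits(float a, float b)
{
    uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof(ua));
    std::memcpy(&ub, &b, sizeof(ub));
    return ua == ub;
}
```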
9cb50e3 bug workaround: MinusNode had m_needGradient false while a child had it true, which caused use of an empty gradient matrix. Worked around by going back to the old CNTK way of always resizing gradients as well in UpdateFunctionAndGradientMBSize(); new command-line option currentDirectory=... to set the CD. Use this as the first option; this allows for self-contained debug command lines. Implemented Yu Zhang's recent bug fix differently by moving the MB reset into LoadPersistableParametersFromFile() 11 October 2015, 00:19:40 UTC
df5269a Merge branch 'master' of https://git.codeplex.com/cntk into fseide/winlstm 10 October 2015, 23:14:49 UTC
d9cdf9e Replace use of basetypes.h header in HTKMLFReader with Basics.h 10 October 2015, 19:12:16 UTC
807f021 Fixed linux build 10 October 2015, 09:13:25 UTC
4d31376 Merged Common/basetypes.h and HTKMLFReader/basetypes.h and deleted the HTKMLFReader/basetypes.h copy 10 October 2015, 09:06:03 UTC
3d2d626 Centralized PrintCallStack calls 10 October 2015, 07:22:36 UTC
9a4c84b Fixed Linux build 10 October 2015, 06:54:37 UTC
0150884 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 10 October 2015, 06:37:44 UTC
542f704 Adapted the new DiagonalNode to recent structural changes in computation node pattern 10 October 2015, 06:35:18 UTC
64ee4ba Reset the MBLayout when we reload the best model. Haven't carefully read the MBLayout class, so just call the init function to set it to (1,0,false). Also made the loadFromNetwork function in the delay node not resize the cols. 10 October 2015, 06:14:48 UTC
349f07f Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes Conflicts: Common/Include/Basics.h DataReader/LMSequenceReader/LMSequenceReader.vcxproj DataReader/SparsePCReader/SparsePCReader.vcxproj MachineLearning/CNTKComputationNetworkLib/NonlinearityNodes.h Math/Math/CNTKMathCUDA.vcxproj.filters Math/Math/CommonMatrix.h Math/Math/GPUMatrix.cu Math/Math/GPUSparseMatrix.cu Math/Math/Matrix.h 10 October 2015, 06:07:34 UTC
4a133d2 Merge branch 'guoguo/linuxBuildFix' of https://git.codeplex.com/cntk into fseide/winlstm 10 October 2015, 00:05:31 UTC