https://github.com/Microsoft/CNTK

Revision Message Commit Date
d2287a0 License change 18 January 2016, 08:37:45 UTC
07341f2 Fix for multi-GPU to share all parameters required to adjust the learning rate. 13 January 2016, 00:05:50 UTC
a924014 Propagated information to improve multi-GPU support 13 January 2016, 00:02:22 UTC
2935362 Fixed a bug where the m_elemSizeAllocated was used instead of m_nz 13 January 2016, 00:02:21 UTC
7b0159a Added Python conversion script, updated readme.txt. 12 January 2016, 22:53:46 UTC
92e8a4d Added BN eval mode to MEL. Updated samples. 12 January 2016, 22:47:32 UTC
9e25b7e Removed Resize from BN code. Updated samples. 12 January 2016, 22:02:51 UTC
cc2a836 Updated samples, added ResNet-50. 12 January 2016, 22:02:43 UTC
f52e80c Added CMA to BN node, updated samples. 12 January 2016, 22:02:37 UTC
f764123 Bug workaround: the m_columnsValidityMask matrix in the MBLayout type was being default-initialized, resulting in an incorrect GPU device being selected. 12 January 2016, 06:45:17 UTC
08e7d59 moved all actions (DoXXX()) from CNTK.cpp to ActionsLib (no code change); reenabled tensor lib (undid accidental commit) 10 January 2016, 02:13:31 UTC
69b5477 (made sure non-tensor version still compiles) 10 January 2016, 01:03:20 UTC
0826c1c moved non-tensor versions of PlusNode, MinusNode, and ElementTimesNode to EsotericNodes.h (no code change) 10 January 2016, 00:58:26 UTC
a0fc021 DataTensorFor() refactored further 10 January 2016, 00:51:45 UTC
c886f32 factored out new function TensorSliceWithMBLayoutFor() from DataTensorFor(), for use by ShiftNode 10 January 2016, 00:30:25 UTC
db9de92 un-optimized DataTensorFor(), to again use the full tensor dimension (without pre-multiplying) 08 January 2016, 22:29:25 UTC
ed5d40a changed GetRecurrenceDirections() back to operate on a single dimension only (multiple dimensions can be realized with BrainScript); towards implementing DataTensorFor() using tensor slices, so that we can reuse that for ShiftNode 08 January 2016, 19:19:50 UTC
c1c818c made gcc happy 08 January 2016, 05:12:39 UTC
08e6fc1 towards multi-dimensional loops: new interface IRecurrentNode::GetRecurrenceDirections() implemented by old DelayedValueNode, ShiftNode, and partially by analysis code 08 January 2016, 05:11:02 UTC
a32704d made gcc happy 08 January 2016, 02:20:03 UTC
7520910 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/tensors 08 January 2016, 02:11:30 UTC
41fb427 fixed a compiler warning in Release (unused variable) 08 January 2016, 02:10:07 UTC
6baed48 made gcc happy 08 January 2016, 01:13:27 UTC
a961d71 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/tensors 08 January 2016, 01:07:06 UTC
b770bdc bug fix: incorrect Resize() in DefaultConvolutionEngine::Forward() destroyed the content of the matrix; bug fix: the last update of the non-linearities must also correctly set OutputUsedInComputingInputNodesGradients(), and likewise for the input; bug fix: DelayedValueNode::ForwardProp() must correctly handle gaps also for the first call before EndForwardProp(); beginnings of a much extended DelayedValue node, called ShiftNode (not functional yet); changed IStatefulNode::ImportState() to take an rvalue reference (take ownership) 08 January 2016, 01:06:33 UTC
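A minimal sketch (hypothetical names, not CNTK's actual interface) of the ownership-transfer idiom behind ImportState() taking an rvalue reference: the callee moves the state out of the caller's container instead of copying it.

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical stand-in for the per-sequence state a stateful node exports/imports.
struct NodeState { std::vector<float> activations; };
using NodeStatePtr = std::shared_ptr<NodeState>;

struct IStatefulNodeSketch
{
    // Taking the container by rvalue reference tells the caller that the callee
    // assumes ownership; the caller's vector is left in a moved-from state.
    virtual void ImportState(std::vector<NodeStatePtr>&& state) = 0;
    virtual ~IStatefulNodeSketch() = default;
};

struct DelayNodeSketch : IStatefulNodeSketch
{
    std::vector<NodeStatePtr> m_state;
    void ImportState(std::vector<NodeStatePtr>&& state) override
    {
        m_state = std::move(state); // steal the buffers rather than copying them
    }
};

int main()
{
    DelayNodeSketch node;
    std::vector<NodeStatePtr> saved{ std::make_shared<NodeState>() };
    node.ImportState(std::move(saved)); // caller explicitly gives up ownership
    return 0;
}
```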
de6ac08 Fix for non-spatial BN dimensions. Updated samples. 07 January 2016, 18:02:27 UTC
dda0524 Fixed issue with MEL and convolution engines, updated samples. 07 January 2016, 02:50:28 UTC
98d5b92 Updated samples. 07 January 2016, 02:50:17 UTC
71881dc Updated samples. 07 January 2016, 02:50:07 UTC
82dafa2 Fixed LearnableParameter and samples to work properly with cuDNN. 07 January 2016, 02:49:56 UTC
40ce1af Added the ReleaseMatricesAfterBackprop function to the sequence training node to release the temp matrices that are no longer needed after backpropagation is complete for all of the node's children 07 January 2016, 02:02:32 UTC
87206fc Fixed a corner case bug in async buffered gradient aggregation code 07 January 2016, 01:49:49 UTC
cb8cedd changed non-linearity gradients to be computed from node output instead of node input (better for mem-sharing/in-place) 06 January 2016, 23:11:02 UTC
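For example (standard identities, not specific to this commit), the sigmoid and tanh derivatives can be written purely in terms of the node's output y, so the backward pass no longer needs to keep the input around:

```latex
y = \sigma(x) \;\Rightarrow\; \frac{\partial y}{\partial x} = \sigma(x)\bigl(1-\sigma(x)\bigr) = y(1-y),
\qquad
y = \tanh(x) \;\Rightarrow\; \frac{\partial y}{\partial x} = 1-\tanh^2(x) = 1-y^2
```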
306e857 renamed DataTensor.h to TensorShape.h. No code changes 06 January 2016, 16:18:32 UTC
4650ead Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes Conflicts: Source/ComputationNetworkLib/ReshapingNodes.h 06 January 2016, 07:22:14 UTC
aeb4a5a accidentally disabled cudnn for Jenkins Image test 06 January 2016, 04:36:24 UTC
993f2ce made gcc happy (wrong printf format) 06 January 2016, 03:47:16 UTC
51b7614 added derived Reshape() variants ReshapeDimension(), FlattenDimensions(), and SplitDimension() to BrainScript; undid accidental commit of a config file modified for a debug test 06 January 2016, 03:42:08 UTC
f8e7779 implemented new ReshapeNode (which works with arbitrary tensor dims, but cannot swap dimensions into time--use TransposeNode for that in the future); old ReshapeNode renamed to DeprecatedReshapeNode. Still available from BrainScript for now, while the new implementation is invoked as NewReshape(). This will change soon; bug fix: a few Validate() functions did not check isFinalValidationPass and failed on temporary input; bug fix: upward-compatible Save/Load in convolution nodes by splitting one size_t into two uint32_t got the order wrong; bug fix: convolution nodes should only create convolution engines in final validation 06 January 2016, 03:21:03 UTC
4711562 (fixed an error message) 05 January 2016, 22:36:28 UTC
30018b7 added a new name "ImageParameter" for LearnableParameter that expects 3 input dimensions which are interpreted as WHC and mapped to actual tensor dimensions according to the optional imageLayout parameter. Adapted the NDL of the Image/QuickE2E test 05 January 2016, 22:19:16 UTC
2bf4d0a Add missing definition to NoGPU.cpp. 05 January 2016, 20:50:35 UTC
7e780c3 Past/FutureValue now takes a dimension tensor in BrainScript; removed cols/numImages parameter from InputValue and Past/FutureValue, since those always process data samples. Lots of little deletions in SimpleNetworkBuilder.cpp 05 January 2016, 20:11:27 UTC
d63576c Minor fixes in convolution engine. 05 January 2016, 18:44:46 UTC
6f9b664 (added comments) 04 January 2016, 22:20:38 UTC
c7dc6f1 Adding comment for ReshapeNode::InferTargetSampleLayout() 04 January 2016, 20:39:50 UTC
6899f5f Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 04 January 2016, 18:23:01 UTC
ff9a916 Coup de grâce: the 3 basic elementwise arithmetic operations (plus, minus, element times) and the 6 main non-linearity nodes (all except softmax and GMM) have been replaced by the new tensor library. This makes the code significantly more compact and allows broadcasting along any dimension, e.g. for implementing the image bias 02 January 2016, 04:38:31 UTC
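A rough CPU-side illustration (not CNTK code) of what broadcasting a per-channel bias over a W x H x C image amounts to: the [1 x 1 x C] bias is repeated over the W and H dimensions.

```cpp
#include <cstdio>
#include <vector>

// Add a length-C bias to a dense W*H*C image stored in HWC order:
// the bias tensor has shape [1 x 1 x C] and is broadcast over W and H.
void AddChannelBias(std::vector<float>& image, const std::vector<float>& bias,
                    size_t W, size_t H, size_t C)
{
    for (size_t h = 0; h < H; h++)
        for (size_t w = 0; w < W; w++)
            for (size_t c = 0; c < C; c++)
                image[(h * W + w) * C + c] += bias[c];
}

int main()
{
    std::vector<float> image(4 * 4 * 3, 1.0f), bias = { 0.1f, 0.2f, 0.3f };
    AddChannelBias(image, bias, 4, 4, 3);
    std::printf("%g %g %g\n", image[0], image[1], image[2]); // 1.1 1.2 1.3
    return 0;
}
```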
5fc6f34 bug fix in ConvolutionEngineTests.cpp: when testing CPU, use HWC layout 02 January 2016, 03:20:35 UTC
2752901 bug fix: Pooling nodes initialized wrong conv-engine factory class 02 January 2016, 03:14:10 UTC
22e8ce3 reenabled TensorView for PlusNode--nearly identical result in the Image/QuickE2E test 02 January 2016, 01:46:05 UTC
6d5df56 temporarily disabled TensorView in PlusNode 02 January 2016, 01:31:45 UTC
730440e added clipping of log and quotient, as SequenceTraining failed without 02 January 2016, 01:27:36 UTC
20f8334 made gcc happy 02 January 2016, 00:47:20 UTC
13e4ce9 replaced 6 non-linearity nodes by a single base class and 6 macros, as those 6 nodes were now structurally identical and only differed in TensorView opcodes 02 January 2016, 00:38:20 UTC
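A compressed sketch of the base-class-plus-macro pattern the commit describes (class names and the opcode enum are hypothetical): the per-node differences reduce to which elementwise opcode is applied.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical opcode enum standing in for the tensor library's elementwise ops.
enum class ElemOp { Sigmoid, Tanh, RectifiedLinear };

// Shared behavior lives in one base class; derived nodes only pick the opcode.
struct UnaryElementWiseNodeSketch
{
    explicit UnaryElementWiseNodeSketch(ElemOp op) : m_op(op) {}
    float ForwardProp(float x) const
    {
        switch (m_op)
        {
        case ElemOp::Sigmoid:         return 1.0f / (1.0f + std::exp(-x));
        case ElemOp::Tanh:            return std::tanh(x);
        case ElemOp::RectifiedLinear: return x > 0 ? x : 0;
        }
        return 0;
    }
    ElemOp m_op;
};

// One macro stamps out each structurally identical node type.
#define DeclareUnaryElementWiseNode(Name, Op) \
    struct Name : UnaryElementWiseNodeSketch { Name() : UnaryElementWiseNodeSketch(Op) {} };

DeclareUnaryElementWiseNode(SigmoidNodeSketch,         ElemOp::Sigmoid)
DeclareUnaryElementWiseNode(TanhNodeSketch,            ElemOp::Tanh)
DeclareUnaryElementWiseNode(RectifiedLinearNodeSketch, ElemOp::RectifiedLinear)

int main()
{
    SigmoidNodeSketch s;
    std::printf("%g\n", s.ForwardProp(0.0f)); // 0.5
    return 0;
}
```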
4ad2575 implemented most non-linearity nodes using the tensor lib 02 January 2016, 00:14:01 UTC
0c008a8 added imageLayout parameter to Convolution, MaxPooling, and AveragePooling (only Convolution had it, and in BrainScript only); bug fix: LearnableParameter must also serialize m_sampleLayout 01 January 2016, 23:05:19 UTC
ac02cb6 Image/QuickE2E now implements both legacy and cudnn layouts, selected by a command-line overridable parameter 'useCuDnn' that defaults to cudnn; added printfs to cudnn tests 01 January 2016, 21:18:49 UTC
b007ad0 move ScaleNode, RowElementTimesNode, and ColumnElementTimesNode to EsotericNodes.h, to indicate that they are deprecated (which they will be once tensor lib is enabled generally); changed Image/QuickE2E test to use cudnn 01 January 2016, 21:04:59 UTC
57140d8 made gcc happy; changed ConvolutionEngineTests.cpp to use CHW layout since it only runs with cuDNN 01 January 2016, 20:45:08 UTC
8fc4b00 cleaned up ConvolutionNode vs. image interpretation of TensorShape; TensorOp() optimization to use SGEMM disabled for 'double' in Debug builds, so we get our code path tested once in a while; fixed ConvolutionEngineTests.cpp w.r.t. Create(); removed unused InputIsImage() methods 01 January 2016, 20:25:24 UTC
bb4f72c bug fix: that new call to cublas_gemm() should no longer have the explicit casts to float* 01 January 2016, 18:29:39 UTC
12ae06b (comment) 31 December 2015, 05:20:11 UTC
57601e5 towards passing the ImageLayoutKind into ConvolutionNode and friends; ConvolutionEngineFactory::Create() now chooses the layout based on the user-specified ImageLayoutKind. Something is borked; this version mixes up dimensions somewhere 31 December 2015, 05:19:03 UTC
4e496b9 (deleted some obsolete debug code) 31 December 2015, 03:55:06 UTC
43f22ee bug fix: SetMaxTempMemSizeForCNN() was missing the 'double' case; added an optimization to unary TensorOp() to use cublasS/Dgemm() when we are reducing a matrix 31 December 2015, 03:51:28 UTC
81affdd switched ReduceElemType to ElemType instead of double while reenabling PlusNode TensorView--desperate to get Image/QuickE2E to pass 31 December 2015, 00:55:27 UTC
c87e2f7 disabled TensorView for PlusNode::BackpropTo(), as that causes a difference for Image/QuickE2E; GetTensorShape() now adds the column dimension as one more dimension 31 December 2015, 00:40:49 UTC
4ce3b1a switched inaccurate Sigmoid() and PlusNode::BackpropTo() back on; changed Image/QuickE2E to use BrainScript, in order to allow specifying tensor dimensions for the bias 31 December 2015, 00:00:02 UTC
96cde8e switched to new Sigmoid(), PlusNode::BackpropTo() still not TensorView 30 December 2015, 22:51:52 UTC
c7b5c5f re-disabled PlusNode::BackpropTo() once again 30 December 2015, 22:40:53 UTC
cb08479 reenabled ENABLE_BROADCASTING_ELEMENTTIMES 30 December 2015, 22:31:35 UTC
5a850c3 switched TensorOps.h's Sigmoid() to the less accurate version from the .cuh file, to remove the failure in the SequenceTraining test. This is not good! Reenabled all TensorView 30 December 2015, 22:30:30 UTC
3076962 disabled TensorView for PlusNode::BackpropTo() and SigmoidNode, now gives the same for Image/QuickE2E 30 December 2015, 22:15:00 UTC
35f774d InputValue and LearnableParameter C++ objects now take their dimensions as a general tensor (but not yet on BS level); InputValue no longer accepts a column argument (ignored in NDL, forbidden in BS); bug fix in ConfigArray::AsVector() 30 December 2015, 20:50:13 UTC
834775e new optional NDL/BS parameter 'imageLayout' to say according to which of the two layouts (cudnn or CNTK legacy) we should interpret a W, H, C specification. Currently implemented for InputImage; InputValue no longer accepts a 'cols' parameter--it is ignored and 0 is used instead, since InputValues must always be minibatches 30 December 2015, 19:35:40 UTC
d9d351d bug fix: CeilDiv() overflowed for b == INT_MAX 30 December 2015, 18:57:08 UTC
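The usual (a + b - 1) / b formulation of CeilDiv() overflows when b is close to INT_MAX; below is a sketch of an overflow-free variant for positive arguments (illustrative, not necessarily the exact fix that went in).

```cpp
#include <cassert>
#include <climits>

// Overflows: a + b - 1 wraps around when b == INT_MAX and a > 1.
// int CeilDivOverflowing(int a, int b) { return (a + b - 1) / b; }

// Overflow-free for a > 0, b > 0: never forms a sum larger than a.
int CeilDiv(int a, int b)
{
    return (a - 1) / b + 1;
}

int main()
{
    assert(CeilDiv(10, 3) == 4);
    assert(CeilDiv(9, 3) == 3);
    assert(CeilDiv(2, INT_MAX) == 1); // the case that used to overflow
    return 0;
}
```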
8b72bbc (fix in previous debug logging) 30 December 2015, 17:44:45 UTC
c14430a (added heavy logging to track down the crash) 30 December 2015, 17:43:10 UTC
a7b42c0 (added logging for tracking down SeqTrain problem) 30 December 2015, 17:13:30 UTC
526615a merged GPUMatrixCUDAKernels.cuh DEF_ELEMENT_PRIMITIVE macro with TensorOps.h OverloadUnaryMathFns, as both did the same thing; new #define ENABLE_BROADCASTING_ELEMENTTIMES to specifically select whether we want to replace ScaleNode etc with ElementTimesNode 30 December 2015, 17:11:02 UTC
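A sketch of what such an overload-generating macro typically looks like (illustrative, not the actual CNTK definition): it produces float/double overloads of the C math functions so templated elementwise code can call a single name at either precision.

```cpp
#include <math.h>
#include <stdio.h>

// Generate a pair of overloads, e.g. exp_(float) -> expf and exp_(double) -> exp.
#define OverloadUnaryMathFn(fn)                                  \
    static inline float  fn##_(float v)  { return fn##f(v); }   \
    static inline double fn##_(double v) { return fn(v); }

OverloadUnaryMathFn(exp)
OverloadUnaryMathFn(log)

template <class ElemType>
ElemType SoftplusSketch(ElemType x) // compiles for float and double alike
{
    return log_(ElemType(1) + exp_(x));
}

int main()
{
    printf("%g %g\n", SoftplusSketch(0.0f), SoftplusSketch(0.0)); // both ~0.693147
    return 0;
}
```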
42e39e2 (comments) 30 December 2015, 06:11:57 UTC
ef07ac2 new base node UnaryElementWiseNode; SigmoidNode implemented with that 30 December 2015, 06:01:37 UTC
37eb175 added new tensor operations: And, Or, Xor, ElementwiseProductWithSigmoidDerivative (SigmoidDerivative is only ever used in this context), and Clip 30 December 2015, 05:29:17 UTC
338cd77 reenabled all existing uses of TensorView; ScaleNode and Row/ColumnElementTimesNode now implemented as ElementTimesNode 30 December 2015, 04:50:54 UTC
e5013de cleaned up tensor reduction code 30 December 2015, 04:13:10 UTC
01f6e71 intermediate check-in of heavily instrumented reduction code 30 December 2015, 04:02:11 UTC
10b502b towards parallel reduction--works again with 1 block 29 December 2015, 23:45:34 UTC
3f10347 towards parallel reduction with multiple chunks 29 December 2015, 20:00:00 UTC
c3ab7a2 GridDim now distributes over multiprocessors more evenly; fixed two Jenkins failures (a Linux link error and a log error) 29 December 2015, 18:38:15 UTC
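A sketch (not CNTK's GridDim) of one way to split N items over a number of blocks so that per-block counts differ by at most one, rather than leaving the last block with a small remainder.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Split N items into 'blocks' contiguous ranges whose sizes differ by at most one.
std::vector<std::pair<long long, long long>> EvenRanges(long long N, int blocks)
{
    std::vector<std::pair<long long, long long>> r;
    for (int b = 0; b < blocks; b++)
    {
        long long begin = N * b / blocks;       // rounds down
        long long end   = N * (b + 1) / blocks; // consecutive ranges tile [0, N)
        r.emplace_back(begin, end);
    }
    return r;
}

int main()
{
    for (auto& p : EvenRanges(10, 4)) // sizes 2,3,2,3 instead of 3,3,3,1
        std::printf("[%lld, %lld) ", p.first, p.second);
    std::printf("\n");
    return 0;
}
```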
f1cb457 new condition for tensor lib: output cannot be in-place and inverse-broadcasting (reducing) at the same time. This makes reduction easier 29 December 2015, 07:48:40 UTC
76a6777 towards reductions that don't fit into __shared__ memory 29 December 2015, 07:07:36 UTC
16c1f8a (minor change) 29 December 2015, 05:35:57 UTC
1da8385 TensorOpElement::Compute() now uses tree-based reduction 29 December 2015, 04:53:04 UTC
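A CPU-side sketch of the tree-based (pairwise) reduction pattern: the active range is halved each step, with the upper half folded onto the lower half, which mirrors how a thread block typically reduces partial sums held in shared memory.

```cpp
#include <cstdio>
#include <vector>

// Pairwise/tree reduction: halve the active range each step, adding the upper
// half onto the lower half, until a single sum remains at index 0.
float TreeReduceSum(std::vector<float> v)
{
    size_t n = v.size();
    while (n > 1)
    {
        size_t half = (n + 1) / 2;
        for (size_t i = 0; i + half < n; i++) // in a kernel: one thread per i, then __syncthreads()
            v[i] += v[i + half];
        n = half;
    }
    return v.empty() ? 0.0f : v[0];
}

int main()
{
    std::vector<float> v;
    for (int i = 1; i <= 8; i++) v.push_back((float)i);
    std::printf("%g\n", TreeReduceSum(v)); // 36
    return 0;
}
```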
da8f560 made gcc happy 29 December 2015, 04:03:50 UTC
22bab75 GPUMatrix.cu split: moved TensorView support to separate compilation unit (GPUMatrix had gotten too large for MSVC to compile the fatbin file). No code change otherwise 29 December 2015, 03:45:36 UTC
d27753d parallel reduction within one block now working 29 December 2015, 02:19:07 UTC
7881220 new kernel template instance for tensor lib parallel reduction 29 December 2015, 01:16:00 UTC
61e37a5 tensor GPU op: inRange flag and using y instead of x for reduction launch 29 December 2015, 00:11:39 UTC
20836a8 new nullary tensor op ConstOne; GridDim class now queries actual GPU information 28 December 2015, 22:51:35 UTC
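The standard way to query this kind of device information is the CUDA runtime API; which specific fields GridDim consults is an assumption here.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int dev = 0;
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, dev) != cudaSuccess)
    {
        std::fprintf(stderr, "no CUDA device available\n");
        return 1;
    }
    // Typical inputs when sizing a launch grid: SM count and per-block limits.
    std::printf("device %d: %s, %d SMs, max %d threads/block\n",
                dev, prop.name, prop.multiProcessorCount, prop.maxThreadsPerBlock);
    return 0;
}
```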