https://github.com/Microsoft/CNTK

Revision Message Commit Date
39d6581 Minor fix to transpose function. 11 January 2016, 09:57:56 UTC
665ad3e Bug fix for ConvertDBN command 10 January 2016, 07:17:51 UTC
f74d204 Move all data members in MBLayout to CPU 10 January 2016, 07:16:45 UTC
62507b3 removed multi-dim feature from ShiftNode, causes too many inconsistencies 10 January 2016, 05:34:44 UTC
d27ee26 moved all actions (DoXXX()) from CNTK.cpp to ActionsLib (no code change); reenabled tensor lib (undid accidental commit) 10 January 2016, 02:13:31 UTC
e449370 (made sure non-tensor version still compiles) 10 January 2016, 01:03:20 UTC
900c021 moved non-tensor versions of PlusNode, MinusNode, and ElementTimesNode to EsotericNodes.h (no code change) 10 January 2016, 00:58:26 UTC
b15bf46 DataTensorFor() refactored further 10 January 2016, 00:51:45 UTC
482a9e8 factored out new function TensorSliceWithMBLayoutFor() from DataTensorFor(), for use by ShiftNode 10 January 2016, 00:30:25 UTC
d6ed0dc Remove reshapeInputToRowSize in SparsePCReader. 09 January 2016, 11:02:21 UTC
a84b3c6 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 09 January 2016, 10:42:11 UTC
8e07fb5 Changes to text convolution. 09 January 2016, 10:41:47 UTC
3823406 release after BP 08 January 2016, 22:36:32 UTC
6ff2472 release temp matrix in SE 08 January 2016, 22:36:19 UTC
c15508c frameskipv2 08 January 2016, 22:36:08 UTC
e3f1179 SE frameskip V2 temp 08 January 2016, 22:35:56 UTC
03a965c frameskip SE 08 January 2016, 22:35:36 UTC
ab137f1 un-optimized DataTensorFor(), to again use the full tensor dimension (without pre-multiplying) 08 January 2016, 22:29:25 UTC
e370325 (make gcc happy) 08 January 2016, 22:23:36 UTC
ca6fcf8 changed GetRecurrenceDirections() back to operate on a single dimension only (multiple dimensions can be realized with BrainScript); towards implementing DataTensorFor() using tensor slices, so that we can reuse that for ShiftNode 08 January 2016, 19:19:50 UTC
5f7c9ad made gcc happy 08 January 2016, 05:12:39 UTC
ae4e4af towards multi-dimensional loops: new interface IRecurrentNode::GetRecurrenceDirections() implemented by old DelayedValueNode, ShiftNode, and partially by analysis code 08 January 2016, 05:11:02 UTC
51cabe6 made gcc happy 08 January 2016, 02:20:03 UTC
1e5fd75 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/tensors 08 January 2016, 02:11:30 UTC
f458fdf fixed a compiler warning in Release (unused variable) 08 January 2016, 02:10:07 UTC
a3da2c7 made gcc happy 08 January 2016, 01:13:27 UTC
fd7109a Merge branch 'master' of https://git.codeplex.com/cntk into fseide/tensors 08 January 2016, 01:07:06 UTC
3919902 bug fix: incorrect Resize() in DefaultConvolutionEngine::Forward() destroyed content of matrix; bug fix: last update of non-linearities must also correctly set OutputUsedInComputingInputNodesGradients() and likewise input; bug fix: DelayedValueNode::ForwardProp() must correctly handle gaps also for first call before EndForwardProp(); beginnings of much extended DelayedValue node, called ShiftNode (not functional yet); changed IStatefulNode::ImportState() to take an rvalue reference (take ownership) 08 January 2016, 01:06:33 UTC
b9b6ccd Made inclusion of quantized gradient aggregation header compile-time conditional. Removed dependency on top-level MatrixQuantizer type from the quantizer unit tests - the tests now work with the underlying MatrixQuantizerImpl type. 07 January 2016, 19:32:15 UTC
1bc3113 Fix for non-spatial BN dimensions. Updated samples. 07 January 2016, 18:02:27 UTC
ed0bec2 Minor fix in GPU sparse reshape kernel. 07 January 2016, 17:08:52 UTC
7a1a9c9 Added a E2E test for buffered async gradient aggregation without quantization 07 January 2016, 07:29:55 UTC
579eaa9 Fix CPUOnly build 07 January 2016, 05:32:33 UTC
ee109fa Fixed linux build 07 January 2016, 03:00:13 UTC
9824b74 Add a separate implementation for gradient aggregation without quantization, to enable separating out the all-reduce based quantized gradient aggregation code from core CNTK 07 January 2016, 02:55:13 UTC
dfcb81a Fixed issue with MEL and convolution engines, updated samples. 07 January 2016, 02:50:28 UTC
ad2c6bc Updated samples. 07 January 2016, 02:50:17 UTC
3943020 Updated samples. 07 January 2016, 02:50:07 UTC
633b69a Fixed LearnableParameter and samples to work properly with cuDNN. 07 January 2016, 02:49:56 UTC
6d6818f Merge branch 'master' of https://git01.codeplex.com/cntk into amitaga/separate1bitDataParallelSGD 07 January 2016, 02:39:06 UTC
b9efbdf Added the ReleaseMatricesAfterBackprop function to the sequence training node, to release temp matrices that are no longer needed once backpropagation is complete for all children of the node 07 January 2016, 02:02:32 UTC
e0e4950 changed non-linearity gradients to be computed from node output instead of node input (better for mem-sharing/in-place) 06 January 2016, 23:11:02 UTC
d632133 renamed DataTensor.h to TensorShape.h. No code changes 06 January 2016, 16:18:32 UTC
c25eb36 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes Conflicts: Source/ComputationNetworkLib/ReshapingNodes.h 06 January 2016, 07:22:14 UTC
cea69dc accidentally disabled cudnn for Jenkins Image test 06 January 2016, 04:36:24 UTC
f70dd27 made gcc happy (wrong printf format) 06 January 2016, 03:47:16 UTC
0c9acd2 added derived Reshape() variants ReshapeDimension(), FlattenDimensions(), and SplitDimension to BrainScript; undid accidental commit of a config file modified for a debug test 06 January 2016, 03:42:08 UTC
e7c9ae7 implemented new ReshapeNode (which works with arbitrary tensor dims, but cannot swap dimensions into time--use TransposeNode for that in the future); old ReshapeNode renamed to DeprecatedReshapeNode. Still available from BrainScript for now, while new implementation is invoked as NewReshape(). This will change soon; bug fix: a few Validate() functions did not check isFinalValidationPass and failed on temporary input; bug fix: upwards-compatible Save/Load in convolution nodes by splitting one size_t into two uint32_t got the order wrong; bug fix: convolution nodes should only create convolution engines in final validation 06 January 2016, 03:21:03 UTC
36c7da7 (fixed an error message) 05 January 2016, 22:36:28 UTC
cae4c56 added a new name "ImageParameter" for LearnableParameter that expects 3 input dimensions which are interpreted as WHC and mapped to actual tensor dimensions according to the optional imageLayout parameter. Adapted the NDL of the Image/QuickE2E test 05 January 2016, 22:19:16 UTC
97e0459 Add missing definition to NoGPU.cpp. 05 January 2016, 20:50:35 UTC
d013344 Past/FutureValue now takes a dimension tensor in BrainScript; removed cols/numImages parameter from InputValue and Past/FutureValue, since those always process data samples. Lots of little deletions in SimpleNetworkBuilder.cpp 05 January 2016, 20:11:27 UTC
84c50f7 Fix CPUONLY build 05 January 2016, 19:15:58 UTC
cc448b8 Move the number of quantization bits parameter for gradient aggregation to the AllReduceGradientAggregator ctor instead of the AggregateGradients interface method, to ensure the generality of the IDistGradAggregator interface 05 January 2016, 19:08:04 UTC
39ffd38 Minor fixes in convolution engine. 05 January 2016, 18:44:46 UTC
74951f7 Fixed linux build error 05 January 2016, 18:42:38 UTC
ff4b28a Some refactoring in preparation for separating out the 1bit gradient aggregation implementation from the core CNTK sources 05 January 2016, 06:54:13 UTC
d49ed07 (added comments) 04 January 2016, 22:20:38 UTC
2e0d8d3 Adding comment for ReshapeNode::InferTargetSampleLayout() 04 January 2016, 20:39:50 UTC
038e82c A stopgap to prevent the reader from loading matrices inconsistent with lattices. Will be removed once the bug is fixed. 04 January 2016, 20:26:53 UTC
fd47759 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 04 January 2016, 18:23:01 UTC
cfd4a8e Coup de grâce: the 3 basic elementwise arithmetic operations (plus, minus, element times) and the 6 main non-linearity nodes (all except softmax and GMM) have been replaced by the new tensor library. This makes the code significantly more compact and allows broadcasting along any dimension, e.g. for implementing the image bias 02 January 2016, 04:38:31 UTC
2d8b9e7 bug fix in ConvolutionEngineTests.cpp: when testing CPU, use HWC layout 02 January 2016, 03:20:35 UTC
7ee40e6 bug fix: Pooling nodes initialized wrong conv-engine factory class 02 January 2016, 03:14:10 UTC
5bfc54b reenabled TensorView for PlusNode--nearly identical result in the Image/QuickE2E test 02 January 2016, 01:46:05 UTC
61469c0 temporarily disabled TensorView in PlusNode 02 January 2016, 01:31:45 UTC
4adb66c added clipping of log and quotient, as SequenceTraining failed without it 02 January 2016, 01:27:36 UTC
b5a5c86 made gcc happy 02 January 2016, 00:47:20 UTC
e09548f replaced 6 non-linearity nodes by a single base class and 6 macros, as those 6 nodes were now structurally identical and only differed in TensorView opcodes 02 January 2016, 00:38:20 UTC
351358e implemented most non-linearity nodes using the tensor lib 02 January 2016, 00:14:01 UTC
5ff5d68 added imageLayout parameter to Convolution, MaxPooling, and AveragePooling (only Convolution had it, and BrainScript only); bug fix: LearnableParameter must also serialize m_sampleLayout 01 January 2016, 23:05:19 UTC
50218cd Image/QuickE2E now implements both legacy and cudnn layouts, selected by a command-line overridable parameter 'useCuDnn' that defaults to cudnn; added printfs to cuDNN tests 01 January 2016, 21:18:49 UTC
ec29314 move ScaleNode, RowElementTimesNode, and ColumnElementTimesNode to EsotericNodes.h, to indicate that they are deprecated (which they will be once tensor lib is enabled generally); changed Image/QuickE2E test to use cudnn 01 January 2016, 21:04:59 UTC
1506c67 made gcc happy; changed ConvolutionEngineTests.cpp to use CHW layout since it only runs with cuDNN 01 January 2016, 20:45:08 UTC
0c79c92 cleaned up ConvolutionNode vs. image interpretation of TensorShape; TensorOp() optimization to use SGEMM disabled for 'double' in Debug builds, so we get our code path tested once in a while; fixed ConvolutionEngineTests.cpp w.r.t. Create(); removed unused InputIsImage() methods 01 January 2016, 20:25:24 UTC
f369a8e bug fix: that new call to cublas_gemm() should no longer have the explicit casts to float* 01 January 2016, 18:29:39 UTC
42ff20b (comment) 31 December 2015, 05:20:11 UTC
9db7cce towards passing the ImageLayoutKind into ConvolutionNode and friends; ConvolutionEngineFactory::Create() now chooses the layout based on the user-specified ImageLayoutKind. Something is borked: this version mixes up dimensions somewhere 31 December 2015, 05:19:03 UTC
d11532c (deleted some obsolete debug code) 31 December 2015, 03:55:06 UTC
9372b6a bug fix: SetMaxTempMemSizeForCNN() was missing the 'double' case; added an optimization to unary TensorOp() to use cublasS/Dgemm() when we are reducing a matrix 31 December 2015, 03:51:28 UTC
329c77c switched ReduceElemType to ElemType instead of double while reenabling PlusNode TensorView--desperate to get Image/QuickE2E to pass 31 December 2015, 00:55:27 UTC
9003f4c disabled TensorView for PlusNode::BackpropTo(), as that causes a difference for Image/QuickE2E; GetTensorShape() now adds the column dimension as one more dimension 31 December 2015, 00:40:49 UTC
411e21f switched inaccurate Sigmoid() and PlusNode::BackpropTo() back on; changed Image/QuickE2E to use BrainScript, in order to allow specifying tensor dimensions for the bias 31 December 2015, 00:00:02 UTC
ac61d22 switched to new Sigmoid(), PlusNode::BackpropTo() still not TensorView 30 December 2015, 22:51:52 UTC
6da94eb re-disabled PlusNode::BackpropTo() once again 30 December 2015, 22:40:53 UTC
ec4c08d reenabled ENABLE_BROADCASTING_ELEMENTTIMES 30 December 2015, 22:31:35 UTC
50159b7 switched TensorOp.h's Sigmoid() to the less accurate version from the .cuh file, to remove failure in SequenceTraining test. This is not good! Reenabled all TensorView 30 December 2015, 22:30:30 UTC
3f5e5b9 disabled TensorView for PlusNode::BackpropTo() and SigmoidNode, now gives the same for Image/QuickE2E 30 December 2015, 22:15:00 UTC
710e2b5 InputValue and LearnableParameter C++ objects now take their dimensions as a general tensor (but not yet on BS level); InputValue no longer accepts a column argument (ignored in NDL, forbidden in BS); bug fix in ConfigArray::AsVector() 30 December 2015, 20:50:13 UTC
c6d908e new optional NDL/BS parameter 'imageLayout' to say according to which of the two layouts (cudnn or CNTK legacy) we should interpret a W, H, C specification. Currently implemented for InputImage; InputValue no longer accepts a 'cols' parameter. Instead, it ignores it and uses 0, since InputValues must always be minibatches 30 December 2015, 19:35:40 UTC
d0b5c8d bug fix: CeilDiv() overflowed for b == INT_MAX 30 December 2015, 18:57:08 UTC
da2b298 (fix in previous debug logging) 30 December 2015, 17:44:45 UTC
91667ac (added heavy logging to track down the crash) 30 December 2015, 17:43:10 UTC
159e380 (added logging for tracking down SeqTrain problem) 30 December 2015, 17:13:30 UTC
44c1e54 merged GPUMatrixCUDAKernels.cuh DEF_ELEMENT_PRIMITIVE macro with TensorOps.h OverloadUnaryMathFns, as both did the same thing; new #define ENABLE_BROADCASTING_ELEMENTTIMES to specifically select whether we want to replace ScaleNode etc with ElementTimesNode 30 December 2015, 17:11:02 UTC
cd86e1f (comments) 30 December 2015, 06:11:57 UTC
6018719 new base node UnaryElementWiseNode; SigmoidNode implemented with that 30 December 2015, 06:01:37 UTC
ed35440 added new tensor operations: And, Or, Xor, ElementwiseProductWithSigmoidDerivative (SigmoidDerivative is only ever used in this context), and Clip 30 December 2015, 05:29:17 UTC
c4356c1 reenabled all existing uses of TensorView; ScaleNode and Row/ColumnElementTimesNode now implemented as ElementTimesNode 30 December 2015, 04:50:54 UTC
e38b828 cleaned up tensor reduction code 30 December 2015, 04:13:10 UTC