https://github.com/Microsoft/CNTK

c1ee4e2 (further move MarkValueNotSharable out of constructor) 12 January 2016, 00:51:45 UTC
8ca9c6d Move MarkValueNonsharable out of constructors (make gcc happy) 12 January 2016, 00:51:28 UTC
d7264c5 Display CUB and CUDNN paths (if defined) in BuildInfo. Print BuildInfo at the very beginning of the program; convenient for checking the build type. 12 January 2016, 00:49:00 UTC
d7fe2d1 Add an alternate option "numSubminibatches" for users to indicate how to split minibatches into subminibatches. 12 January 2016, 00:48:50 UTC
5635beb Fix a bug in MarkValueSharableNode 12 January 2016, 00:48:40 UTC
589ce7c Revise the condition of ReleaseMatricesAfterForwardProp: only ValueSharable nodes can be released after forwardprop 12 January 2016, 00:48:29 UTC
06e3c61 Fix MarkValueNotSharableNodes 12 January 2016, 00:48:13 UTC
70ca40d Revise the implementation of valueNotSharableNode. More to be revised. 12 January 2016, 00:47:56 UTC
ee4ab06 Replace CreateMatrixIfNull by MarkValueNonsharable(). In the compile stage, we mark nodes whose descendants are all learnable parameters as non-sharable. 12 January 2016, 00:47:46 UTC
d27ee26 moved all actions (DoXXX()) from CNTK.cpp to ActionsLib (no code change); reenabled tensor lib (undid accidental commit) 10 January 2016, 02:13:31 UTC
e449370 (made sure non-tensor version still compiles) 10 January 2016, 01:03:20 UTC
900c021 moved non-tensor versions of PlusNode, MinusNode, and ElementTimesNode to EsotericNodes.h (no code change) 10 January 2016, 00:58:26 UTC
b15bf46 DataTensorFor() refactored further 10 January 2016, 00:51:45 UTC
482a9e8 factored out new function TensorSliceWithMBLayoutFor() from DataTensorFor(), for use by ShiftNode 10 January 2016, 00:30:25 UTC
ab137f1 un-optimized DataTensorFor(), to again use the full tensor dimension (without pre-multiplying) 08 January 2016, 22:29:25 UTC
ca6fcf8 changed GetRecurrenceDirections() back to operate on a single dimension only (multiple dimensions can be realized with BrainScript); towards implementing DataTensorFor() using tensor slices, so that we can reuse that for ShiftNode 08 January 2016, 19:19:50 UTC
5f7c9ad made gcc happy 08 January 2016, 05:12:39 UTC
ae4e4af towards multi-dimensional loops: new interface IRecurrentNode::GetRecurrenceDirections() implemented by old DelayedValueNode, ShiftNode, and partially by analysis code 08 January 2016, 05:11:02 UTC
51cabe6 made gcc happy 08 January 2016, 02:20:03 UTC
1e5fd75 Merge branch 'master' of https://git.codeplex.com/cntk into fseide/tensors 08 January 2016, 02:11:30 UTC
f458fdf fixed a compiler warning in Release (unused variable) 08 January 2016, 02:10:07 UTC
a3da2c7 made gcc happy 08 January 2016, 01:13:27 UTC
fd7109a Merge branch 'master' of https://git.codeplex.com/cntk into fseide/tensors 08 January 2016, 01:07:06 UTC
3919902 bug fix: incorrect Resize() in DefaultConvolutionEngine::Forward() destroyed content of matrix; bug fix: last update of non-linearities must also correctly set OutputUsedInComputingInputNodesGradients() and likewise input; bug fix: DelayedValueNode::ForwardProp() must correctly handle gaps also for first call before EndForwardProp(); beginnings of much extended DelayedValue node, called ShiftNode (not functional yet); changed IStatefulNode::ImportState() to take an rvalue reference (take ownership) 08 January 2016, 01:06:33 UTC
1bc3113 Fix for non-spatial BN dimensions. Updated samples. 07 January 2016, 18:02:27 UTC
dfcb81a Fixed issue with MEL and convo engines, updated samples. 07 January 2016, 02:50:28 UTC
ad2c6bc Updated samples. 07 January 2016, 02:50:17 UTC
3943020 Updated samples. 07 January 2016, 02:50:07 UTC
633b69a Fixed LearnableParameter and samples to work properly with cuDNN. 07 January 2016, 02:49:56 UTC
b9efbdf Added the ReleaseMatricesAfterBackprop function to sequence training node to release the temp matrices not needed after back propagation is complete for all the children of the nodes 07 January 2016, 02:02:32 UTC
e0e4950 changed non-linearity gradients to be computed from node output instead of node input (better for mem-sharing/in-place) 06 January 2016, 23:11:02 UTC
d632133 renamed DataTensor.h to TensorShape.h. No code changes 06 January 2016, 16:18:32 UTC
c25eb36 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes Conflicts: Source/ComputationNetworkLib/ReshapingNodes.h 06 January 2016, 07:22:14 UTC
cea69dc accidentally disabled cudnn for Jenkins Image test 06 January 2016, 04:36:24 UTC
f70dd27 made gcc happy (wrong printf format) 06 January 2016, 03:47:16 UTC
0c9acd2 added derived Reshape() variants ReshapeDimension(), FlattenDimensions(), and SplitDimension to BrainScript; undid accidental commit of a config file modified for a debug test 06 January 2016, 03:42:08 UTC
e7c9ae7 implemented new ReshapeNode (which works with arbitrary tensor dims, but cannot swap dimensions into time--use TransposeNode for that in the future); old ReshapeNode renamed to DeprecatedReshapeNode. Still available from BrainScript for now, while new implementation is invoked as NewReshape(). This will change soon; bug fix: a few Validate() functions did not check isFinalValidationPass and failed on temporary input; bug fix: upwards-compatible Save/Load in convolution nodes by splitting one size_t into two uint32_t got the order wrong; bug fix: convolution nodes should only create convolution engines in final validation 06 January 2016, 03:21:03 UTC
36c7da7 (fixed an error message) 05 January 2016, 22:36:28 UTC
cae4c56 added a new name "ImageParameter" for LearnableParameter that expects 3 input dimensions which are interpreted as WHC and mapped to actual tensor dimensions according to the optional imageLayout parameter. Adapted the NDL of the Image/QuickE2E test 05 January 2016, 22:19:16 UTC
97e0459 Add missing definition to NoGPU.cpp. 05 January 2016, 20:50:35 UTC
d013344 Past/FutureValue now takes a dimension tensor in BrainScript; removed cols/numImages parameter from InputValue and Past/FutureValue, since those always process data samples. Lots of little deletions in SimpleNetworkBuilder.cpp 05 January 2016, 20:11:27 UTC
39ffd38 Minor fixes in convolution engine. 05 January 2016, 18:44:46 UTC
d49ed07 (added comments) 04 January 2016, 22:20:38 UTC
2e0d8d3 Adding comment for ReshapeNode::InferTargetSampleLayout() 04 January 2016, 20:39:50 UTC
fd47759 Merge branch 'master' of https://git01.codeplex.com/cntk into bmitra/Changes 04 January 2016, 18:23:01 UTC
cfd4a8e Coup de grâce: the 3 basic elementwise arithmetic operations (plus, minus, element times) and the 6 main non-linearity nodes (all except softmax and GMM) have been replaced by the new tensor library. This makes the code significantly more compact and allows broadcasting along any dimension, e.g. for implementing the image bias 02 January 2016, 04:38:31 UTC
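The broadcasting mentioned in this commit can be illustrated with a minimal sketch (not CNTK's actual TensorView code): an input dimension of size 1 is repeated along the matching output dimension, which is how a per-channel bias of shape [1, 1, C] is added to an image tensor of shape [W, H, C].

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical illustration of dimension-wise broadcasting for an
// elementwise "plus": a bias with C entries is broadcast over the W and H
// dimensions of a [W, H, C] image tensor. Not the actual CNTK kernel.
std::vector<float> BroadcastAdd(const std::vector<float>& image,
                                const std::vector<float>& bias,
                                size_t W, size_t H, size_t C)
{
    assert(image.size() == W * H * C);
    assert(bias.size() == C); // size-1 dims of the bias broadcast over W, H
    std::vector<float> out(W * H * C);
    for (size_t c = 0; c < C; ++c)
        for (size_t h = 0; h < H; ++h)
            for (size_t w = 0; w < W; ++w)
            {
                size_t i = (c * H + h) * W + w; // linearized [W, H, C] index
                out[i] = image[i] + bias[c];    // bias index ignores w and h
            }
    return out;
}
```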
2d8b9e7 bug fix in ConvolutionEngineTests.cpp: when testing CPU, use HWC layout 02 January 2016, 03:20:35 UTC
7ee40e6 bug fix: Pooling nodes initialized wrong conv-engine factory class 02 January 2016, 03:14:10 UTC
5bfc54b reenabled TensorView for PlusNode--nearly identical result in the Image/QuickE2E test 02 January 2016, 01:46:05 UTC
61469c0 temporarily disabled TensorView in PlusNode 02 January 2016, 01:31:45 UTC
4adb66c added clipping of log and quotient, as SequenceTraining failed without 02 January 2016, 01:27:36 UTC
b5a5c86 made gcc happy 02 January 2016, 00:47:20 UTC
e09548f replaced 6 non-linearity nodes by a single base class and 6 macros, as those 6 nodes were now structurally identical and only differed in TensorView opcodes 02 January 2016, 00:38:20 UTC
351358e implemented most non-linearity nodes using the tensor lib 02 January 2016, 00:14:01 UTC
5ff5d68 added imageLayout parameter to Convolution, MaxPooling, and AveragePooling (only Convolution had it, and BrainScript only); bug fix: LearnableParameter must also serialize m_sampleLayout 01 January 2016, 23:05:19 UTC
50218cd Image/QuickE2E now implements both legacy and cudnn layouts, selected by a command-line overridable parameter 'useCuDnn' that defaults to cudnn; added printfs to cudnn tests 01 January 2016, 21:18:49 UTC
ec29314 move ScaleNode, RowElementTimesNode, and ColumnElementTimesNode to EsotericNodes.h, to indicate that they are deprecated (which they will be once tensor lib is enabled generally); changed Image/QuickE2E test to use cudnn 01 January 2016, 21:04:59 UTC
1506c67 made gcc happy; changed ConvolutionEngineTests.cpp to use CHW layout since it only runs with cuDNN 01 January 2016, 20:45:08 UTC
0c79c92 cleaned up ConvolutionNode vs. image interpretation of TensorShape; TensorOp() optimization to use SGEMM disabled for 'double' in Debug builds, so we get our code path tested once in a while; fixed ConvolutionEngineTests.cpp w.r.t. Create(); removed unused InputIsImage() methods 01 January 2016, 20:25:24 UTC
f369a8e bug fix: that new call to cublas_gemm() should no longer have the explicit casts to float* 01 January 2016, 18:29:39 UTC
42ff20b (comment) 31 December 2015, 05:20:11 UTC
9db7cce towards passing the ImageLayoutKind into ConvolutionNode and friends; ConvolutionEngineFactory::Create() now chooses the layout based on the user-specified ImageLayoutKind. Something is borked; this version mixes up dimensions somewhere 31 December 2015, 05:19:03 UTC
d11532c (deleted some obsolete debug code) 31 December 2015, 03:55:06 UTC
9372b6a bug fix: SetMaxTempMemSizeForCNN() was missing the 'double' case; added an optimization to unary TensorOp() to use cublasS/Dgemm() when we are reducing a matrix 31 December 2015, 03:51:28 UTC
329c77c switched ReduceElemType to ElemType instead of double while reenabling PlusNode TensorView--desperate to get Image/QuickE2E to pass 31 December 2015, 00:55:27 UTC
9003f4c disabled TensorView for PlusNode::BackpropTo(), as that causes a difference for Image/QuickE2E; GetTensorShape() now adds the column dimension as one more dimension 31 December 2015, 00:40:49 UTC
411e21f switched inaccurate Sigmoid() and PlusNode::BackpropTo() back on; changed Image/QuickE2E to use BrainScript, in order to allow specifying tensor dimensions for the bias 31 December 2015, 00:00:02 UTC
ac61d22 switched to new Sigmoid(), PlusNode::BackpropTo() still not TensorView 30 December 2015, 22:51:52 UTC
6da94eb re-disabled PlusNode::BackpropTo() once again 30 December 2015, 22:40:53 UTC
ec4c08d reenabled ENABLE_BROADCASTING_ELEMENTTIMES 30 December 2015, 22:31:35 UTC
50159b7 switched TensorOp.h's Sigmoid() to the less accurate version from the .cuh file, to remove failure in SequenceTraining test. This is not good! Reenabled all TensorView 30 December 2015, 22:30:30 UTC
3f5e5b9 disabled TensorView for PlusNode::BackpropTo() and SigmoidNode, now gives the same for Image/QuickE2E 30 December 2015, 22:15:00 UTC
710e2b5 InputValue and LearnableParameter C++ objects now take their dimensions as a general tensor (but not yet on BS level); InputValue no longer accepts a column argument (ignored in NDL, forbidden in BS); bug fix in ConfigArray::AsVector() 30 December 2015, 20:50:13 UTC
c6d908e new optional NDL/BS parameter 'imageLayout' to say according to which of the two layouts (cudnn or CNTK legacy) we should interpret a W, H, C specification. Currently implemented for InputImage; InputValue no longer accepts a 'cols' parameter; instead it is ignored and 0 is used, since InputValues must always be minibatches 30 December 2015, 19:35:40 UTC
d0b5c8d bug fix: CeilDiv() overflowed for b == INT_MAX 30 December 2015, 18:57:08 UTC
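The CeilDiv() overflow described in this commit is a classic pitfall (the exact CNTK fix may differ from this sketch): the common idiom `(a + b - 1) / b` wraps around when the intermediate sum exceeds INT_MAX, e.g. for b == INT_MAX, whereas rewriting it as `(a - 1) / b + 1` never forms the oversized sum.

```cpp
#include <cassert>
#include <climits>

// Overflows for large b: a + b - 1 can exceed INT_MAX (undefined behavior).
int CeilDivNaive(int a, int b) { return (a + b - 1) / b; }

// Safe variant: avoids the oversized intermediate sum entirely.
int CeilDivSafe(int a, int b) { return a == 0 ? 0 : (a - 1) / b + 1; }
```

For example, CeilDivSafe(5, INT_MAX) correctly yields 1, while the naive version computes 5 + INT_MAX - 1, which overflows.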
da2b298 (fix in previous debug logging) 30 December 2015, 17:44:45 UTC
91667ac (added heavy logging to track down the crash) 30 December 2015, 17:43:10 UTC
159e380 (added logging for tracking down SeqTrain problem) 30 December 2015, 17:13:30 UTC
44c1e54 merged GPUMatrixCUDAKernels.cuh DEF_ELEMENT_PRIMITIVE macro with TensorOps.h OverloadUnaryMathFns, as both did the same thing; new #define ENABLE_BROADCASTING_ELEMENTTIMES to specifically select whether we want to replace ScaleNode etc with ElementTimesNode 30 December 2015, 17:11:02 UTC
cd86e1f (comments) 30 December 2015, 06:11:57 UTC
6018719 new base node UnaryElementWiseNode; SigmoidNode implemented with that 30 December 2015, 06:01:37 UTC
ed35440 added new tensor operations: And, Or, Xor, ElementwiseProductWithSigmoidDerivative (SigmoidDerivative is only ever used in this context), and Clip 30 December 2015, 05:29:17 UTC
c4356c1 reenabled all existing uses of TensorView; ScaleNode and Row/ColumnElementTimesNode now implemented as ElementTimesNode 30 December 2015, 04:50:54 UTC
e38b828 cleaned up tensor reduction code 30 December 2015, 04:13:10 UTC
4f298fa intermediate check-in of heavily instrumented reduction code 30 December 2015, 04:02:11 UTC
40733c2 towards parallel reduction--works again with 1 block 29 December 2015, 23:45:34 UTC
5fedb69 towards parallel reduction with multiple chunks 29 December 2015, 20:00:00 UTC
a1affda GridDim now distributes over multiprocs more evenly; fixed two Jenkins failures (a linux link error and a log error) 29 December 2015, 18:38:15 UTC
cdcc4bb new condition for tensor lib: output cannot be in-place and inverse-broadcasting (reducing) at the same time. This makes reduction easier 29 December 2015, 07:48:40 UTC
01a33f7 towards reductions that don't fit into __shared__ memory 29 December 2015, 07:07:36 UTC
c3cf76f (minor change) 29 December 2015, 05:35:57 UTC
e37a053 TensorOpElement::Compute() now uses tree-based reduction 29 December 2015, 04:53:04 UTC
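The tree-based reduction referenced here follows the standard GPU pattern: each step adds element i + stride into element i, halving the active range until one partial sum remains, so n values reduce in O(log n) parallel steps. The following is a minimal CPU sketch of that pattern, not the actual TensorOpElement::Compute() kernel.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Tree-based sum reduction: pad to a power of two with the identity (0),
// then halve the active range each step. On a GPU, the inner loop body
// would run as one thread per index i, with a barrier between steps.
float TreeReduceSum(std::vector<float> v)
{
    size_t n = 1;
    while (n < v.size()) n *= 2;
    v.resize(n, 0.0f); // pad with zeros, the identity for addition
    for (size_t stride = n / 2; stride > 0; stride /= 2)
        for (size_t i = 0; i < stride; ++i)
            v[i] += v[i + stride]; // pairwise combine across the stride
    return v[0];
}
```

Besides exposing parallelism, pairwise combination also tends to accumulate less floating-point rounding error than a sequential left-to-right sum.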
24900d0 made gcc happy 29 December 2015, 04:03:50 UTC
d8ce6b7 GPUMatrix.cu split: moved TensorView support to separate compilation unit (GPUMatrix had gotten too large for MSVC to compile the fatbin file). No code change otherwise 29 December 2015, 03:45:36 UTC
45c8a1c parallel reduction within one block now working 29 December 2015, 02:19:07 UTC
ca9e921 new kernel template instance for tensor lib parallel reduction 29 December 2015, 01:16:00 UTC
30ea4d4 tensor GPU op: inRange flag and using y instead of x for reduction launch 29 December 2015, 00:11:39 UTC
764bf0c new nullary tensor op ConstOne; GridDim class now queries actual GPU information 28 December 2015, 22:51:35 UTC
dba14b4 deleted virtual function ComputationNode::InferImageDimsFromInputs() since no longer needed after update of tensor-dim inference. Unary zip ops just copy the layout from their input, and binary zip ops take dimension-wise max (to consider broadcasting) 24 December 2015, 11:04:44 UTC
68ec1cd bug fix: SimpleNetworkBuilder::AddTrainAndEvalCriterionNodes() should not compute 'tinput' for certain training-criterion nodes because 'input' has a different meaning for those; revived BatchSequenceReader, now supports MBLayout::AddSequence() 24 December 2015, 09:48:25 UTC