https://github.com/torch/cunn

Revision Message Commit Date
a270ee2 Add generic support for VolumetricReplicationPadding. 08 November 2016, 21:07:35 UTC
7578ac1 Add generic support for VolumetricAveragePooling. 08 November 2016, 21:07:35 UTC
ebb9d64 Add generic support for VolumetricMaxPooling, VolumetricMaxUnpooling, VolumetricDilatedMaxPooling. 08 November 2016, 21:07:35 UTC
1c82c32 Add generic support for TemporalMaxPooling. 08 November 2016, 21:07:35 UTC
aa18682 Rebase BatchNormalization. 08 November 2016, 21:06:52 UTC
ba27398 Add support for L1Cost. Changes thrust::reduce to thrust::transform_reduce in order to be able to do summation at accreal precision. 08 November 2016, 21:01:06 UTC
bedeebf Add generic support for SparseLinear. We don't support SparseLinear with fp16 because cusparseHcsrmm (or an equivalent Ex function) is unavailable until CUDA 8.0. 08 November 2016, 21:01:06 UTC
f43f9aa Add generic support for DistKLDivCriterion. 08 November 2016, 21:01:06 UTC
44ec4ed Add generic support for ClassNLLCriterion. 08 November 2016, 21:01:06 UTC
a01d7c6 Add generic support for BCECriterion. Test skips comparing vs lua version for half type, because hdot is not currently implemented in cutorch. 08 November 2016, 21:01:06 UTC
125992e Add generic support for L1SmoothCriterion. 08 November 2016, 21:01:06 UTC
92e335c Add generic support for MultiLabelMarginCriterion. 08 November 2016, 21:01:06 UTC
2904d10 Add generic support for MultiMarginCriterion. Accumulation is done at accreal precision and changes target tensor indexing to THCIndexTensor. 08 November 2016, 21:01:06 UTC
b707ed6 Add generic support for MSECriterion. 08 November 2016, 21:01:06 UTC
0b3a0b0 Add generic support for SoftMarginCriterion. 08 November 2016, 21:01:06 UTC
d4b4a58 Add generic support for MarginCriterion. 08 November 2016, 21:01:06 UTC
8e23e5b Add generic support for AbsCriterion. 08 November 2016, 21:01:06 UTC
b838e36 Fix spacing in SpatialDilatedMaxPooling. 08 November 2016, 21:01:06 UTC
f5d6f30 More leeway for convolution backward weight/bias. 08 November 2016, 21:01:06 UTC
5fb9b7b Generic support for SpatialFullConvolution and SpatialDilatedConvolution. Uses matrix multiply for matrix-vector multiply for half (no half matrix-vector implementation exists). 08 November 2016, 21:01:06 UTC
1a435b0 Add generic support for SpatialFractionalMaxPooling. 08 November 2016, 21:01:06 UTC
be33326 Generic support for SpatialConvolutionMM. Still need Hgemv. 08 November 2016, 21:01:06 UTC
1aad64f Add generic support for SpatialConvolutionLocal. 08 November 2016, 21:01:06 UTC
c138aca Add generic support for SpatialUpSamplingBilinear. Math is done at accreal precision. At real precision, forward pass fails, but backward passes. We do backward pass at accreal precision for consistency. 08 November 2016, 21:01:06 UTC
7e92dd2 Add generic support for SpatialUpSamplingNearest. Accumulates as AccType. 08 November 2016, 21:01:06 UTC
7b191ae Add generic support for SpatialReplicationPadding. 08 November 2016, 21:01:06 UTC
9e8056f Add generic support for SpatialReflectionPadding. 08 November 2016, 21:01:06 UTC
0d15053 Add generic support for SpatialSubSampling. Half types fail on backward, probably because we don't consistently accumulate in accreal. This is difficult because gradInput is accumulated directly (either with atomicAdd or not) rather than in another variable. 08 November 2016, 21:01:06 UTC
c077b3f Generic support for SpatialCrossMapLRN. Removed the C-linkage for a couple of functions because they are now generic; not sure if they were used by anyone outside. 08 November 2016, 21:01:06 UTC
2232736 Add generic support for SpatialAveragePooling. 08 November 2016, 21:01:06 UTC
b3b79b4 Add generic support for SpatialAdaptiveMaxPooling. 08 November 2016, 21:01:06 UTC
8b64c92 Use THCIndexTensors more generally. 08 November 2016, 21:01:06 UTC
be236c8 Use indices for SpatialAdaptiveMaxPooling indices. 08 November 2016, 21:01:06 UTC
6994ae7 Add generic support for SpatialMaxUnpooling. 08 November 2016, 21:01:05 UTC
30ccba8 Fix tests 08 November 2016, 21:01:05 UTC
0c0e3d8 Add generic support for SpatialMaxPooling. Also fix tests for SpatialDilatedMaxPooling. 08 November 2016, 21:01:05 UTC
3141686 Get SpatialDilatedMaxPooling generic working with long tensors as index. Does as much math as possible in accreal to try to suss out why CudaHalfTensor fails. 08 November 2016, 21:01:05 UTC
e3d7d12 Add generic support for SpatialDilatedMaxPooling. 08 November 2016, 21:01:05 UTC
3c89443 Add generic support for SpatialClassNLLCriterion. 08 November 2016, 21:01:05 UTC
c08d781 Remove fastExpIfAvail and benchmarking from functional tests. Also fix broken IFNDEF and test whitespace. 08 November 2016, 21:01:05 UTC
5d0c877 Reorganize THCHalfAutoNumerics. 08 November 2016, 21:01:05 UTC
d892d1f Iterate pointwise tests over all supported tensor types. 08 November 2016, 21:01:05 UTC
09acb86 Add generic support for RReLU. 08 November 2016, 21:01:05 UTC
c6f67e1 Add generic support for PReLU. This is the first instance of functions that take a lua number but are not reals in C. So, instead of automatically converting lua numbers in the half case, we parse the function definitions to find the argument positions to convert. 08 November 2016, 21:01:05 UTC
76232cb fix logsoftmax 08 November 2016, 21:01:05 UTC
e65e1cf Add generic support for LogSoftMax. 08 November 2016, 21:01:05 UTC
63b9beb Add generic support for SoftMax. Math is done at accreal precision (e.g. for half, math is done at float precision). Originally code called __expf, which doesn't have a double equivalent; we call exp instead of converting down. 08 November 2016, 21:01:05 UTC
e30f1b4 Add generic support for ELU. 08 November 2016, 21:01:05 UTC
c55e4a9 Add generic support for SoftShrink. 08 November 2016, 21:01:05 UTC
788ee5a Add generic support for Square. Math is (arbitrarily?) done at double precision to keep the intent of existing code. 08 November 2016, 21:01:05 UTC
cffe53a Add generic support for Sqrt. 08 November 2016, 21:01:05 UTC
384da3a Add generic support for LeakyReLU. 08 November 2016, 21:01:05 UTC
fb7e5af Add generic support for Threshold. 08 November 2016, 21:01:05 UTC
c7c91f4 Add generic support for LogSigmoid. This has the same logic as Sigmoid; i.e. math is done at double precision and then stored back at desired precision. 08 November 2016, 21:01:05 UTC
0324b96 Add generic support for Sigmoid. This maintains the existing logic of doing the math in double precision and converting back to the intended type (previously just float). We do the same for half, although it is unclear whether the original double-precision math was deliberate; for half, the open question is whether to widen only to float or all the way to double. 08 November 2016, 21:01:05 UTC
82a3664 Add generic support for Abs. 08 November 2016, 21:01:05 UTC
ff17570 Add generic support for HardTanh. 08 November 2016, 21:01:05 UTC
b8fe31f Add generic support for Tanh. 08 November 2016, 21:01:05 UTC
69491a1 Add generic support for SoftPlus. Adds the ability to "genericize" cunn modules that can exist simultaneously with non-generic modules (i.e. modules can be genericized one at a time). Allowing both generic and non-generic modules simultaneously requires some extra code that can be removed once every module is genericized. Also genericizes SoftPlus in this way. 08 November 2016, 21:01:05 UTC
aa256bc Merge pull request #364 from SYSTRAN/master test on inputGpu emptiness symmetric in backward and forward 02 November 2016, 23:44:58 UTC
98b777b test on inputGpu emptiness symmetric in backward and forward 02 November 2016, 08:33:21 UTC
64224a6 Add sameGPU checks to BatchNormalization (#361) 25 October 2016, 19:19:03 UTC
b612429 gcc 5 + cuda < 8 workaround improved 17 October 2016, 16:46:21 UTC
ae64752 Merge pull request #353 from torch/upsamplingbilinearfix fixes to upsampling bilinear API 17 October 2016, 04:46:26 UTC
5870d13 fixes to upsampling bilinear API 17 October 2016, 04:30:25 UTC
9cabae0 Merge pull request #351 from torch/revert-350-master Revert "change to work on windows && replace long with ptrdiff_t" 13 October 2016, 22:09:43 UTC
14c591d Revert "change to work on windows && replace long with ptrdiff_t" 13 October 2016, 22:09:34 UTC
60a753a Merge pull request #350 from BTNC/master change to work on windows && replace long with ptrdiff_t 13 October 2016, 16:25:59 UTC
4247d64 change to work on windows && replace long with ptrdiff_t 13 October 2016, 15:44:28 UTC
e388ee3 Merge pull request #338 from nitsky/spatial_logsoftmax SpatialLogSoftMax 07 October 2016, 14:36:40 UTC
612dda4 Merge pull request #343 from colesbury/master Fixes for https://github.com/torch/cutorch/pull/519 30 September 2016, 16:12:32 UTC
52f7419 Fixes for https://github.com/torch/cutorch/pull/519 29 September 2016, 23:19:41 UTC
9a3ffab Fix SpatialLogSoftMax memory leak and code cleanup 27 September 2016, 15:16:31 UTC
5aa68bb Merge pull request #339 from torch/classnllfix making ClassNLLCriterion targets consistent between cpu and cuda 27 September 2016, 00:50:42 UTC
9b3be1f making ClassNLLCriterion targets consistent between cpu and cuda 27 September 2016, 00:48:17 UTC
8e6bfc5 Update SpatialLogSoftMax kernel to use cuda dimensions 26 September 2016, 16:39:56 UTC
d85eeca Update LogSoftMax to work in spatial domain 21 September 2016, 15:11:59 UTC
10e9b68 Merge pull request #334 from apaszke/header_fix Mark BCECriterion weights as optional in THCUNN.h 14 September 2016, 14:53:29 UTC
a8a1c77 Mark BCECriterion weights as optional in THCUNN.h 14 September 2016, 14:36:59 UTC
ccc2309 moving arch detection into THCUNN 13 September 2016, 04:09:25 UTC
4a04229 BCECriterion THCUNN + Weights (#331) BCE Criterion CUDA implementation 08 September 2016, 20:33:53 UTC
ecc4141 Merge pull request #332 from vivekn/dpt_fix Fix issue with flatten parameters in DataParallelTable 06 September 2016, 17:52:21 UTC
e1d9cbc Fix issue with flatten parameters in DataParallelTable 06 September 2016, 17:42:33 UTC
cdac0c3 Merge pull request #329 from gchanan/hardtanh inplace is reversed in HardTanh:backward. 01 September 2016, 02:36:26 UTC
dbbb218 inplace is reversed in HardTanh:backward. Fixes torch7 issue #734, "Inconsistent behavior of nn.Clamp in CPU and GPU modes". Adds a simple test that gradOutput equals gradInput after backward when inplace is set. It is possible to construct a test with inplace HardTanh where forward+backward yields different results for nn vs cunn, but this appears to be due to the inclusive vs exclusive bounds used for inplace vs non-inplace, respectively, so a more direct test is preferred. 01 September 2016, 02:31:41 UTC
d652966 Update README.md 27 August 2016, 18:02:44 UTC
b6e4a61 Merge pull request #327 from kmul00/voldilmaxpool VolumetricDilatedMaxPooling 27 August 2016, 14:42:05 UTC
ec16b36 VolumetricDilatedMaxPooling. modified: lib/THCUNN/THCUNN.h; copied: lib/THCUNN/VolumetricMaxPooling.cu -> lib/THCUNN/VolumetricDilatedMaxPooling.cu; modified: lib/THCUNN/VolumetricMaxPooling.cu; modified: test.lua 28 August 2016, 13:31:12 UTC
a3ccbeb Merge pull request #326 from kmul00/consistentapimaxpool Consistent Max Pool API 26 August 2016, 23:17:15 UTC
f8c82e5 Consistent Max Pool API. renamed: lib/THCUNN/SpatialMaxPooling.cu -> lib/THCUNN/SpatialDilatedMaxPooling.cu; modified: lib/THCUNN/SpatialMaxPooling.cu; modified: lib/THCUNN/THCUNN.h 26 August 2016, 20:09:14 UTC
3c4a48a fix Lua 5.2 compat 24 August 2016, 20:27:10 UTC
ef2c953 fix critical bug in SpatialConvolution 22 August 2016, 18:41:46 UTC
50c63ac Merge pull request #323 from apaszke/threshold Make Threshold THCUNN functions more consistent 19 August 2016, 21:11:28 UTC
2298e03 updating cmake 18 August 2016, 20:06:57 UTC
d3f22ef Make Threshold THCUNN functions more consistent 18 August 2016, 18:14:25 UTC
d0e27d2 Merge pull request #322 from apaszke/spatial_conv Accept both 2D and 4D weights in SpatialConvolutionMM 18 August 2016, 14:25:20 UTC
f075592 Accept both 2D and 4D weights in SpatialConvolutionMM 18 August 2016, 13:58:43 UTC
25498b7 CUDA version of Spatial Dilated Max Pooling. modified: lib/THCUNN/SpatialMaxPooling.cu; modified: lib/THCUNN/THCUNN.h; modified: test.lua 12 August 2016, 15:00:02 UTC
3040562 Merge pull request #319 from torch/typesfix fixes for multiple cuda types 12 August 2016, 03:47:50 UTC
2618f79 fixes for multiple cuda types 12 August 2016, 01:57:28 UTC