a270ee2 | Gregory Chanan | 31 October 2016, 20:12:07 UTC | Add generic support for VolumetricReplicationPadding. | 08 November 2016, 21:07:35 UTC |
7578ac1 | Gregory Chanan | 31 October 2016, 19:48:38 UTC | Add generic support for VolumetricAveragePooling. | 08 November 2016, 21:07:35 UTC |
ebb9d64 | Gregory Chanan | 31 October 2016, 17:00:47 UTC | Add generic support for VolumetricMaxPooling, VolumetricMaxUnpooling, VolumetricDilatedMaxPooling. | 08 November 2016, 21:07:35 UTC |
1c82c32 | Gregory Chanan | 31 October 2016, 17:51:41 UTC | Add generic support for TemporalMaxPooling. | 08 November 2016, 21:07:35 UTC |
aa18682 | Gregory Chanan | 08 November 2016, 21:06:52 UTC | Rebase BatchNormalization. | 08 November 2016, 21:06:52 UTC |
ba27398 | Gregory Chanan | 28 October 2016, 15:42:01 UTC | Add support for L1Cost. Changes thrust::reduce to thrust::transform_reduce so that summation can be done at accreal precision. | 08 November 2016, 21:01:06 UTC |
bedeebf | Gregory Chanan | 26 October 2016, 21:42:28 UTC | Add generic support for SparseLinear. We don't support SparseLinear with fp16 because cusparseHcsrmm (or an equivalent Ex function) is unavailable until CUDA 8.0. | 08 November 2016, 21:01:06 UTC |
f43f9aa | Gregory Chanan | 26 October 2016, 20:47:03 UTC | Add generic support for DistKLDivCriterion. | 08 November 2016, 21:01:06 UTC |
44ec4ed | Gregory Chanan | 26 October 2016, 20:23:15 UTC | Add generic support for ClassNLLCriterion. | 08 November 2016, 21:01:06 UTC |
a01d7c6 | Gregory Chanan | 26 October 2016, 18:10:55 UTC | Add generic support for BCECriterion. Test skips comparing vs lua version for half type, because hdot is not currently implemented in cutorch. | 08 November 2016, 21:01:06 UTC |
125992e | Gregory Chanan | 25 October 2016, 20:08:50 UTC | Add generic support for L1SmoothCriterion. | 08 November 2016, 21:01:06 UTC |
92e335c | Gregory Chanan | 25 October 2016, 19:44:15 UTC | Add generic support for MultiLabelMarginCriterion. | 08 November 2016, 21:01:06 UTC |
2904d10 | Gregory Chanan | 25 October 2016, 17:22:45 UTC | Add generic support for MultiMarginCriterion. Accumulation is done at accreal precision; target tensor indexing is changed to THCIndexTensor. | 08 November 2016, 21:01:06 UTC |
b707ed6 | Gregory Chanan | 24 October 2016, 20:47:48 UTC | Add generic support for MSECriterion. | 08 November 2016, 21:01:06 UTC |
0b3a0b0 | Gregory Chanan | 24 October 2016, 19:48:42 UTC | Add generic support for SoftMarginCriterion. | 08 November 2016, 21:01:06 UTC |
d4b4a58 | Gregory Chanan | 24 October 2016, 19:17:14 UTC | Add generic support for MarginCriterion. | 08 November 2016, 21:01:06 UTC |
8e23e5b | Gregory Chanan | 24 October 2016, 17:30:03 UTC | Add generic support for AbsCriterion. | 08 November 2016, 21:01:06 UTC |
b838e36 | Gregory Chanan | 19 October 2016, 22:17:17 UTC | Fix spacing in SpatialDilatedMaxPooling. | 08 November 2016, 21:01:06 UTC |
f5d6f30 | Gregory Chanan | 19 October 2016, 20:26:45 UTC | More leeway for convolution backward weight/bias. | 08 November 2016, 21:01:06 UTC |
5fb9b7b | Gregory Chanan | 18 October 2016, 16:05:06 UTC | Generic support for SpatialFullConvolution and SpatialDilatedConvolution. Uses matrix multiply for matrix-vector multiply for half (no matrix-vector implementation exists). | 08 November 2016, 21:01:06 UTC |
1a435b0 | Gregory Chanan | 17 October 2016, 22:46:22 UTC | Add generic support for SpatialFractionalMaxPooling. | 08 November 2016, 21:01:06 UTC |
be33326 | Gregory Chanan | 17 October 2016, 21:57:58 UTC | Generic support for SpatialConvolutionMM. Still need Hgemv. | 08 November 2016, 21:01:06 UTC |
1aad64f | Gregory Chanan | 17 October 2016, 19:50:44 UTC | Add generic support for SpatialConvolutionLocal. | 08 November 2016, 21:01:06 UTC |
c138aca | Gregory Chanan | 17 October 2016, 18:48:10 UTC | Add generic support for SpatialUpSamplingBilinear. Math is done at accreal precision. At real precision, forward pass fails, but backward passes. We do backward pass at accreal precision for consistency. | 08 November 2016, 21:01:06 UTC |
7e92dd2 | Gregory Chanan | 17 October 2016, 17:13:04 UTC | Add generic support for SpatialUpSamplingNearest. Accumulates as AccType. | 08 November 2016, 21:01:06 UTC |
7b191ae | Gregory Chanan | 17 October 2016, 16:42:27 UTC | Add generic support for SpatialReplicationPadding. | 08 November 2016, 21:01:06 UTC |
9e8056f | Gregory Chanan | 17 October 2016, 16:21:18 UTC | Add generic support for SpatialReflectionPadding. | 08 November 2016, 21:01:06 UTC |
0d15053 | Gregory Chanan | 14 October 2016, 23:06:21 UTC | Add generic support for SpatialSubSampling. Half types fail on backward, probably because we don't consistently accumulate in accreal. This is difficult because gradInput is accumulated directly (either with atomicAdd or not) rather than in another variable. | 08 November 2016, 21:01:06 UTC |
c077b3f | Gregory Chanan | 17 October 2016, 15:54:36 UTC | Generic support for SpatialCrossMapLRN. Removed the C-linkage for a couple of functions because they are now generic -- not sure if they were used by anyone outside. | 08 November 2016, 21:01:06 UTC |
2232736 | Gregory Chanan | 14 October 2016, 20:58:55 UTC | Add generic support for SpatialAveragePooling. | 08 November 2016, 21:01:06 UTC |
b3b79b4 | Gregory Chanan | 14 October 2016, 20:38:22 UTC | Add generic support for SpatialAdaptiveMaxPooling. | 08 November 2016, 21:01:06 UTC |
8b64c92 | Gregory Chanan | 13 October 2016, 20:56:38 UTC | Use THCIndexTensors more generally. | 08 November 2016, 21:01:06 UTC |
be236c8 | Gregory Chanan | 13 October 2016, 15:46:10 UTC | Use indices for SpatialAdaptiveMaxPooling indices. | 08 November 2016, 21:01:06 UTC |
6994ae7 | Gregory Chanan | 13 October 2016, 15:16:03 UTC | Add generic support for SpatialMaxUnpooling. | 08 November 2016, 21:01:05 UTC |
30ccba8 | Gregory Chanan | 13 October 2016, 14:53:02 UTC | Fix tests | 08 November 2016, 21:01:05 UTC |
0c0e3d8 | Gregory Chanan | 12 October 2016, 22:52:52 UTC | Add generic support for SpatialMaxPooling. Also fix tests for SpatialDilatedMaxPooling. | 08 November 2016, 21:01:05 UTC |
3141686 | Gregory Chanan | 11 October 2016, 16:43:09 UTC | Get SpatialDilatedMaxPooling generic working with long tensors as index. Does as much math as possible in accreal to try to suss out why CudaHalfTensor fails. | 08 November 2016, 21:01:05 UTC |
e3d7d12 | Gregory Chanan | 10 October 2016, 20:06:59 UTC | Add generic support for SpatialDilatedMaxPooling. | 08 November 2016, 21:01:05 UTC |
3c89443 | Gregory Chanan | 07 October 2016, 22:16:47 UTC | Add generic support for SpatialClassNLLCriterion. | 08 November 2016, 21:01:05 UTC |
c08d781 | Gregory Chanan | 10 October 2016, 14:18:02 UTC | Remove fastExpIfAvail and benchmarking from functional tests. Also fix broken IFNDEF and test whitespace. | 08 November 2016, 21:01:05 UTC |
5d0c877 | Gregory Chanan | 06 October 2016, 23:07:53 UTC | Reorganize THCHalfAutoNumerics. | 08 November 2016, 21:01:05 UTC |
d892d1f | Gregory Chanan | 06 October 2016, 22:49:33 UTC | Iterate pointwise tests over all supported tensor types. | 08 November 2016, 21:01:05 UTC |
09acb86 | Gregory Chanan | 05 October 2016, 17:04:01 UTC | Add generic support for RReLU. | 08 November 2016, 21:01:05 UTC |
c6f67e1 | Gregory Chanan | 06 October 2016, 22:40:02 UTC | Add generic support for PReLU. This is the first instance of functions that take a lua number but are not reals in C. So, instead of automatically converting lua numbers in the half case, we parse the function definitions to find the argument positions to convert. | 08 November 2016, 21:01:05 UTC |
76232cb | Gregory Chanan | 10 October 2016, 16:32:27 UTC | fix logsoftmax | 08 November 2016, 21:01:05 UTC |
e65e1cf | Gregory Chanan | 10 October 2016, 16:07:08 UTC | Add generic support for LogSoftMax. | 08 November 2016, 21:01:05 UTC |
63b9beb | Gregory Chanan | 04 October 2016, 16:33:17 UTC | Add generic support for SoftMax. Math is done at accreal precision (e.g. for half, math is done at float precision). Originally code called __expf, which doesn't have a double equivalent; we call exp instead of converting down. | 08 November 2016, 21:01:05 UTC |
e30f1b4 | Gregory Chanan | 04 October 2016, 15:07:03 UTC | Add generic support for ELU. | 08 November 2016, 21:01:05 UTC |
c55e4a9 | Gregory Chanan | 03 October 2016, 22:06:29 UTC | Add generic support for SoftShrink. | 08 November 2016, 21:01:05 UTC |
788ee5a | Gregory Chanan | 03 October 2016, 21:57:57 UTC | Add generic support for Square. Math is (arbitrarily?) done at double precision to keep the intent of existing code. | 08 November 2016, 21:01:05 UTC |
cffe53a | Gregory Chanan | 03 October 2016, 21:51:30 UTC | Add generic support for Sqrt. | 08 November 2016, 21:01:05 UTC |
384da3a | Gregory Chanan | 03 October 2016, 20:50:25 UTC | Add generic support for LeakyReLU. | 08 November 2016, 21:01:05 UTC |
fb7e5af | Gregory Chanan | 03 October 2016, 20:37:58 UTC | Add generic support for Threshold. | 08 November 2016, 21:01:05 UTC |
c7c91f4 | Gregory Chanan | 03 October 2016, 20:23:41 UTC | Add generic support for LogSigmoid. This has the same logic as Sigmoid; i.e. math is done at double precision and then stored back at desired precision. | 08 November 2016, 21:01:05 UTC |
0324b96 | Gregory Chanan | 03 October 2016, 20:09:31 UTC | Add generic support for Sigmoid. This maintains the existing logic of doing the math in double precision and converting back to the intended type (previously: just float). We keep this intent for half as well, although it is unclear whether the original double-precision math was intentional, and whether half should only go up to float rather than all the way to double. | 08 November 2016, 21:01:05 UTC |
82a3664 | Gregory Chanan | 03 October 2016, 19:11:22 UTC | Add generic support for Abs. | 08 November 2016, 21:01:05 UTC |
ff17570 | Gregory Chanan | 03 October 2016, 18:59:59 UTC | Add generic support for HardTanh. | 08 November 2016, 21:01:05 UTC |
b8fe31f | Gregory Chanan | 03 October 2016, 18:17:26 UTC | Add generic support for Tanh. | 08 November 2016, 21:01:05 UTC |
69491a1 | Gregory Chanan | 07 October 2016, 01:54:56 UTC | Add generic support for SoftPlus. Adds the ability to "genericize" cunn modules that can exist simultaneously with non-generic modules (i.e. modules can be genericized one at a time). Allowing both generic and non-generic modules simultaneously requires some extra code that can be removed once every module is genericized. Also genericizes SoftPlus in this way. | 08 November 2016, 21:01:05 UTC |
aa256bc | Soumith Chintala | 02 November 2016, 23:44:58 UTC | Merge pull request #364 from SYSTRAN/master test on inputGpu emptiness symmetric in backward and forward | 02 November 2016, 23:44:58 UTC |
98b777b | Jean A. Senellart | 02 November 2016, 08:33:21 UTC | test on inputGpu emptiness symmetric in backward and forward | 02 November 2016, 08:33:21 UTC |
64224a6 | Sam Gross | 25 October 2016, 19:19:03 UTC | Add sameGPU checks to BatchNormalization (#361) | 25 October 2016, 19:19:03 UTC |
b612429 | Soumith Chintala | 17 October 2016, 16:46:05 UTC | gcc 5 + cuda < 8 workaround improved | 17 October 2016, 16:46:21 UTC |
ae64752 | Soumith Chintala | 17 October 2016, 04:46:26 UTC | Merge pull request #353 from torch/upsamplingbilinearfix fixes to upsampling bilinear API | 17 October 2016, 04:46:26 UTC |
5870d13 | Soumith Chintala | 17 October 2016, 04:30:25 UTC | fixes to upsampling bilinear API | 17 October 2016, 04:30:25 UTC |
9cabae0 | Soumith Chintala | 13 October 2016, 22:09:43 UTC | Merge pull request #351 from torch/revert-350-master Revert "change to work on windows && replace long with ptrdiff_t" | 13 October 2016, 22:09:43 UTC |
14c591d | Soumith Chintala | 13 October 2016, 22:09:34 UTC | Revert "change to work on windows && replace long with ptrdiff_t" | 13 October 2016, 22:09:34 UTC |
60a753a | Soumith Chintala | 13 October 2016, 16:25:59 UTC | Merge pull request #350 from BTNC/master change to work on windows && replace long with ptrdiff_t | 13 October 2016, 16:25:59 UTC |
4247d64 | Rui Guo | 13 October 2016, 15:44:28 UTC | change to work on windows && replace long with ptrdiff_t | 13 October 2016, 15:44:28 UTC |
e388ee3 | Soumith Chintala | 07 October 2016, 14:36:40 UTC | Merge pull request #338 from nitsky/spatial_logsoftmax SpatialLogSoftMax | 07 October 2016, 14:36:40 UTC |
612dda4 | Soumith Chintala | 30 September 2016, 16:12:32 UTC | Merge pull request #343 from colesbury/master Fixes for https://github.com/torch/cutorch/pull/519 | 30 September 2016, 16:12:32 UTC |
52f7419 | Sam Gross | 29 September 2016, 23:19:41 UTC | Fixes for https://github.com/torch/cutorch/pull/519 | 29 September 2016, 23:19:41 UTC |
9a3ffab | David Yamnitsky | 27 September 2016, 15:16:31 UTC | Fix SpatialLogSoftMax memory leak and code cleanup | 27 September 2016, 15:16:31 UTC |
5aa68bb | Soumith Chintala | 27 September 2016, 00:50:42 UTC | Merge pull request #339 from torch/classnllfix making ClassNLLCriterion targets consistent between cpu and cuda | 27 September 2016, 00:50:42 UTC |
9b3be1f | soumith | 27 September 2016, 00:48:17 UTC | making ClassNLLCriterion targets consistent between cpu and cuda | 27 September 2016, 00:48:17 UTC |
8e6bfc5 | David Yamnitsky | 26 September 2016, 16:39:56 UTC | Update SpatialLogSoftMax kernel to use cuda dimensions | 26 September 2016, 16:39:56 UTC |
d85eeca | fsuzanomassa | 14 January 2016, 15:08:31 UTC | Update LogSoftMax to work in spatial domain | 21 September 2016, 15:11:59 UTC |
10e9b68 | Soumith Chintala | 14 September 2016, 14:53:29 UTC | Merge pull request #334 from apaszke/header_fix Mark BCECriterion weights as optional in THCUNN.h | 14 September 2016, 14:53:29 UTC |
a8a1c77 | Adam Paszke | 14 September 2016, 14:36:59 UTC | Mark BCECriterion weights as optional in THCUNN.h | 14 September 2016, 14:36:59 UTC |
ccc2309 | soumith | 13 September 2016, 04:09:25 UTC | moving arch detection into THCUNN | 13 September 2016, 04:09:25 UTC |
4a04229 | David Yamnitsky | 08 September 2016, 20:33:53 UTC | BCECriterion THCUNN + Weights (#331) BCE Criterion CUDA implementation | 08 September 2016, 20:33:53 UTC |
ecc4141 | Soumith Chintala | 06 September 2016, 17:52:21 UTC | Merge pull request #332 from vivekn/dpt_fix Fix issue with flatten parameters in DataParallelTable | 06 September 2016, 17:52:21 UTC |
e1d9cbc | Vivek Narayanan | 06 September 2016, 17:42:33 UTC | Fix issue with flatten parameters in DataParallelTable | 06 September 2016, 17:42:33 UTC |
cdac0c3 | Soumith Chintala | 01 September 2016, 02:36:26 UTC | Merge pull request #329 from gchanan/hardtanh inplace is reversed in HardTanh:backward. | 01 September 2016, 02:36:26 UTC |
dbbb218 | Gregory Chanan | 01 September 2016, 02:17:23 UTC | inplace is reversed in HardTanh:backward. Fixes torch7 issue #734, "Inconsistent behavior of nn.Clamp in CPU and GPU modes". Adds a simple test that gradOutput equals gradInput after backward when inplace is set. It is possible to construct a test with inplace HardTanh where forward+backward yields different results for nn vs cunn, but this appears to be due to the inclusive vs exclusive bounds used for inplace vs non-inplace, respectively, so a more direct test is preferred. | 01 September 2016, 02:31:41 UTC |
d652966 | Soumith Chintala | 27 August 2016, 18:02:44 UTC | Update README.md | 27 August 2016, 18:02:44 UTC |
b6e4a61 | Soumith Chintala | 27 August 2016, 14:42:05 UTC | Merge pull request #327 from kmul00/voldilmaxpool VolumetricDilatedMaxPooling | 27 August 2016, 14:42:05 UTC |
ec16b36 | kmul00 | 28 August 2016, 13:31:12 UTC | VolumetricDilatedMaxPooling modified: lib/THCUNN/THCUNN.h copied: lib/THCUNN/VolumetricMaxPooling.cu -> lib/THCUNN/VolumetricDilatedMaxPooling.cu modified: lib/THCUNN/VolumetricMaxPooling.cu modified: test.lua | 28 August 2016, 13:31:12 UTC |
a3ccbeb | Soumith Chintala | 26 August 2016, 23:17:15 UTC | Merge pull request #326 from kmul00/consistentapimaxpool Consistent Max Pool API | 26 August 2016, 23:17:15 UTC |
f8c82e5 | kmul00 | 26 August 2016, 20:09:14 UTC | Consistent Max Pool API renamed: lib/THCUNN/SpatialMaxPooling.cu -> lib/THCUNN/SpatialDilatedMaxPooling.cu modified: lib/THCUNN/SpatialMaxPooling.cu modified: lib/THCUNN/THCUNN.h | 26 August 2016, 20:09:14 UTC |
3c4a48a | Soumith Chintala | 24 August 2016, 20:27:10 UTC | fix Lua 5.2 compat | 24 August 2016, 20:27:10 UTC |
ef2c953 | soumith | 22 August 2016, 18:41:46 UTC | fix critical bug in SpatialConvolution | 22 August 2016, 18:41:46 UTC |
50c63ac | Soumith Chintala | 19 August 2016, 21:11:28 UTC | Merge pull request #323 from apaszke/threshold Make Threshold THCUNN functions more consistent | 19 August 2016, 21:11:28 UTC |
2298e03 | soumith | 18 August 2016, 20:06:57 UTC | updating cmake | 18 August 2016, 20:06:57 UTC |
d3f22ef | Adam Paszke | 18 August 2016, 17:24:04 UTC | Make Threshold THCUNN functions more consistent | 18 August 2016, 18:14:25 UTC |
d0e27d2 | Soumith Chintala | 18 August 2016, 14:25:20 UTC | Merge pull request #322 from apaszke/spatial_conv Accept both 2D and 4D weights in SpatialConvolutionMM | 18 August 2016, 14:25:20 UTC |
f075592 | Adam Paszke | 18 August 2016, 13:58:43 UTC | Accept both 2D and 4D weights in SpatialConvolutionMM | 18 August 2016, 13:58:43 UTC |
25498b7 | kmul00 | 11 August 2016, 07:19:25 UTC | CUDA version of Spatial Dilated Max Pooling modified: lib/THCUNN/SpatialMaxPooling.cu modified: lib/THCUNN/THCUNN.h modified: test.lua | 12 August 2016, 15:00:02 UTC |
3040562 | Soumith Chintala | 12 August 2016, 03:47:49 UTC | Merge pull request #319 from torch/typesfix fixes for multiple cuda types | 12 August 2016, 03:47:50 UTC |
2618f79 | soumith | 12 August 2016, 01:06:05 UTC | fixes for multiple cuda types | 12 August 2016, 01:57:28 UTC |