8b64c92 | Gregory Chanan | 13 October 2016, 20:56:38 UTC | Use THCIndexTensors more generally. | 08 November 2016, 21:01:06 UTC |
be236c8 | Gregory Chanan | 13 October 2016, 15:46:10 UTC | Use indices for SpatialAdaptiveMaxPooling indices. | 08 November 2016, 21:01:06 UTC |
6994ae7 | Gregory Chanan | 13 October 2016, 15:16:03 UTC | Add generic support for SpatialMaxUnpooling. | 08 November 2016, 21:01:05 UTC |
30ccba8 | Gregory Chanan | 13 October 2016, 14:53:02 UTC | Fix tests | 08 November 2016, 21:01:05 UTC |
0c0e3d8 | Gregory Chanan | 12 October 2016, 22:52:52 UTC | Add generic support for SpatialMaxPooling. Also fix tests for SpatialDilatedMaxPooling. | 08 November 2016, 21:01:05 UTC |
3141686 | Gregory Chanan | 11 October 2016, 16:43:09 UTC | Get SpatialDilatedMaxPooling generic working with long tensors as index. Does as much math as possible in accreal to try to suss out why CudaHalfTensor fails. | 08 November 2016, 21:01:05 UTC |
e3d7d12 | Gregory Chanan | 10 October 2016, 20:06:59 UTC | Add generic support for SpatialDilatedMaxPooling. | 08 November 2016, 21:01:05 UTC |
3c89443 | Gregory Chanan | 07 October 2016, 22:16:47 UTC | Add generic support for SpatialClassNLLCriterion. | 08 November 2016, 21:01:05 UTC |
c08d781 | Gregory Chanan | 10 October 2016, 14:18:02 UTC | Remove fastExpIfAvail and benchmarking from functional tests. Also fix broken IFNDEF and test whitespace. | 08 November 2016, 21:01:05 UTC |
5d0c877 | Gregory Chanan | 06 October 2016, 23:07:53 UTC | Reorganize THCHalfAutoNumerics. | 08 November 2016, 21:01:05 UTC |
d892d1f | Gregory Chanan | 06 October 2016, 22:49:33 UTC | Iterate pointwise tests over all supported tensor types. | 08 November 2016, 21:01:05 UTC |
09acb86 | Gregory Chanan | 05 October 2016, 17:04:01 UTC | Add generic support for RReLU. | 08 November 2016, 21:01:05 UTC |
c6f67e1 | Gregory Chanan | 06 October 2016, 22:40:02 UTC | Add generic support for PReLU. This is the first instance of functions that take a lua number but are not reals in C. So, instead of automatically converting lua numbers in the half case, we parse the function definitions to find the argument positions to convert. | 08 November 2016, 21:01:05 UTC |
76232cb | Gregory Chanan | 10 October 2016, 16:32:27 UTC | fix logsoftmax | 08 November 2016, 21:01:05 UTC |
e65e1cf | Gregory Chanan | 10 October 2016, 16:07:08 UTC | Add generic support for LogSoftMax. | 08 November 2016, 21:01:05 UTC |
63b9beb | Gregory Chanan | 04 October 2016, 16:33:17 UTC | Add generic support for SoftMax. Math is done at accreal precision (e.g. for half, math is done at float precision). Originally code called __expf, which doesn't have a double equivalent; we call exp instead of converting down. | 08 November 2016, 21:01:05 UTC |
e30f1b4 | Gregory Chanan | 04 October 2016, 15:07:03 UTC | Add generic support for ELU. | 08 November 2016, 21:01:05 UTC |
c55e4a9 | Gregory Chanan | 03 October 2016, 22:06:29 UTC | Add generic support for SoftShrink. | 08 November 2016, 21:01:05 UTC |
788ee5a | Gregory Chanan | 03 October 2016, 21:57:57 UTC | Add generic support for Square. Math is (arbitrarily?) done at double precision to keep the intent of existing code. | 08 November 2016, 21:01:05 UTC |
cffe53a | Gregory Chanan | 03 October 2016, 21:51:30 UTC | Add generic support for Sqrt. | 08 November 2016, 21:01:05 UTC |
384da3a | Gregory Chanan | 03 October 2016, 20:50:25 UTC | Add generic support for LeakyReLU. | 08 November 2016, 21:01:05 UTC |
fb7e5af | Gregory Chanan | 03 October 2016, 20:37:58 UTC | Add generic support for Threshold. | 08 November 2016, 21:01:05 UTC |
c7c91f4 | Gregory Chanan | 03 October 2016, 20:23:41 UTC | Add generic support for LogSigmoid. This has the same logic as Sigmoid; i.e. math is done at double precision and then stored back at desired precision. | 08 November 2016, 21:01:05 UTC |
0324b96 | Gregory Chanan | 03 October 2016, 20:09:31 UTC | Add generic support for Sigmoid. This maintains the existing logic of doing the math in double precision and converting back to the intended type (previously: just float). We do the same for half here, although perhaps we should do the math at float in that case. There is some question about what to do with conversions; Sigmoid did math in double before converting back to float; we keep this intent, although there is some question on whether this was intentional and for half -- should we just go up to float or up to double? | 08 November 2016, 21:01:05 UTC |
82a3664 | Gregory Chanan | 03 October 2016, 19:11:22 UTC | Add generic support for Abs. | 08 November 2016, 21:01:05 UTC |
ff17570 | Gregory Chanan | 03 October 2016, 18:59:59 UTC | Add generic support for HardTanh. | 08 November 2016, 21:01:05 UTC |
b8fe31f | Gregory Chanan | 03 October 2016, 18:17:26 UTC | Add generic support for Tanh. | 08 November 2016, 21:01:05 UTC |
69491a1 | Gregory Chanan | 07 October 2016, 01:54:56 UTC | Add generic support for SoftPlus. Adds the ability to "genericize" cunn modules that can exist simultaneously with non-generic modules (i.e. modules can be genericized one at a time). Allowing both generic and non-generic modules simultaneously requires some extra code that can be removed once every module is genericized. Also genericizes SoftPlus in this way. | 08 November 2016, 21:01:05 UTC |
aa256bc | Soumith Chintala | 02 November 2016, 23:44:58 UTC | Merge pull request #364 from SYSTRAN/master test on inputGpu emptiness symmetric in backward and forward | 02 November 2016, 23:44:58 UTC |
98b777b | Jean A. Senellart | 02 November 2016, 08:33:21 UTC | test on inputGpu emptiness symmetric in backward and forward | 02 November 2016, 08:33:21 UTC |
64224a6 | Sam Gross | 25 October 2016, 19:19:03 UTC | Add sameGPU checks to BatchNormalization (#361) | 25 October 2016, 19:19:03 UTC |
b612429 | Soumith Chintala | 17 October 2016, 16:46:05 UTC | gcc 5 + cuda < 8 workaround improved | 17 October 2016, 16:46:21 UTC |
ae64752 | Soumith Chintala | 17 October 2016, 04:46:26 UTC | Merge pull request #353 from torch/upsamplingbilinearfix fixes to upsampling bilinear API | 17 October 2016, 04:46:26 UTC |
5870d13 | Soumith Chintala | 17 October 2016, 04:30:25 UTC | fixes to upsampling bilinear API | 17 October 2016, 04:30:25 UTC |
9cabae0 | Soumith Chintala | 13 October 2016, 22:09:43 UTC | Merge pull request #351 from torch/revert-350-master Revert "change to work on windows && replace long with ptrdiff_t" | 13 October 2016, 22:09:43 UTC |
14c591d | Soumith Chintala | 13 October 2016, 22:09:34 UTC | Revert "change to work on windows && replace long with ptrdiff_t" | 13 October 2016, 22:09:34 UTC |
60a753a | Soumith Chintala | 13 October 2016, 16:25:59 UTC | Merge pull request #350 from BTNC/master change to work on windows && replace long with ptrdiff_t | 13 October 2016, 16:25:59 UTC |
4247d64 | Rui Guo | 13 October 2016, 15:44:28 UTC | change to work on windows && replace long with ptrdiff_t | 13 October 2016, 15:44:28 UTC |
e388ee3 | Soumith Chintala | 07 October 2016, 14:36:40 UTC | Merge pull request #338 from nitsky/spatial_logsoftmax SpatialLogSoftMax | 07 October 2016, 14:36:40 UTC |
612dda4 | Soumith Chintala | 30 September 2016, 16:12:32 UTC | Merge pull request #343 from colesbury/master Fixes for https://github.com/torch/cutorch/pull/519 | 30 September 2016, 16:12:32 UTC |
52f7419 | Sam Gross | 29 September 2016, 23:19:41 UTC | Fixes for https://github.com/torch/cutorch/pull/519 | 29 September 2016, 23:19:41 UTC |
9a3ffab | David Yamnitsky | 27 September 2016, 15:16:31 UTC | Fix SpatialLogSoftMax memory leak and code cleanup | 27 September 2016, 15:16:31 UTC |
5aa68bb | Soumith Chintala | 27 September 2016, 00:50:42 UTC | Merge pull request #339 from torch/classnllfix making ClassNLLCriterion targets consistent between cpu and cuda | 27 September 2016, 00:50:42 UTC |
9b3be1f | soumith | 27 September 2016, 00:48:17 UTC | making ClassNLLCriterion targets consistent between cpu and cuda | 27 September 2016, 00:48:17 UTC |
8e6bfc5 | David Yamnitsky | 26 September 2016, 16:39:56 UTC | Update SpatialLogSoftMax kernel to use cuda dimensions | 26 September 2016, 16:39:56 UTC |
d85eeca | fsuzanomassa | 14 January 2016, 15:08:31 UTC | Update LogSoftMax to work in spatial domain | 21 September 2016, 15:11:59 UTC |
10e9b68 | Soumith Chintala | 14 September 2016, 14:53:29 UTC | Merge pull request #334 from apaszke/header_fix Mark BCECriterion weights as optional in THCUNN.h | 14 September 2016, 14:53:29 UTC |
a8a1c77 | Adam Paszke | 14 September 2016, 14:36:59 UTC | Mark BCECriterion weights as optional in THCUNN.h | 14 September 2016, 14:36:59 UTC |
ccc2309 | soumith | 13 September 2016, 04:09:25 UTC | moving arch detection into THCUNN | 13 September 2016, 04:09:25 UTC |
4a04229 | David Yamnitsky | 08 September 2016, 20:33:53 UTC | BCECriterion THCUNN + Weights (#331) BCE Criterion CUDA implementation | 08 September 2016, 20:33:53 UTC |
ecc4141 | Soumith Chintala | 06 September 2016, 17:52:21 UTC | Merge pull request #332 from vivekn/dpt_fix Fix issue with flatten parameters in DataParallelTable | 06 September 2016, 17:52:21 UTC |
e1d9cbc | Vivek Narayanan | 06 September 2016, 17:42:33 UTC | Fix issue with flatten parameters in DataParallelTable | 06 September 2016, 17:42:33 UTC |
cdac0c3 | Soumith Chintala | 01 September 2016, 02:36:26 UTC | Merge pull request #329 from gchanan/hardtanh inplace is reversed in HardTanh:backward. | 01 September 2016, 02:36:26 UTC |
dbbb218 | Gregory Chanan | 01 September 2016, 02:17:23 UTC | inplace is reversed in HardTanh:backward. Fixes torch7 issue #734, "Inconsistent behavior of nn.Clamp in CPU and GPU modes". Adds a simple test that gradOutput equals gradInput after backward when inplace is set. It is possible to construct a test with inplace HardTanh where forward+backward yields different results for nn vs cunn, but this appears to be due to the inclusive vs exclusive bounds used for inplace vs non-inplace, respectively, so a more direct test is preferred. | 01 September 2016, 02:31:41 UTC |
d652966 | Soumith Chintala | 27 August 2016, 18:02:44 UTC | Update README.md | 27 August 2016, 18:02:44 UTC |
b6e4a61 | Soumith Chintala | 27 August 2016, 14:42:05 UTC | Merge pull request #327 from kmul00/voldilmaxpool VolumetricDilatedMaxPooling | 27 August 2016, 14:42:05 UTC |
ec16b36 | kmul00 | 28 August 2016, 13:31:12 UTC | VolumetricDilatedMaxPooling modified: lib/THCUNN/THCUNN.h copied: lib/THCUNN/VolumetricMaxPooling.cu -> lib/THCUNN/VolumetricDilatedMaxPooling.cu modified: lib/THCUNN/VolumetricMaxPooling.cu modified: test.lua | 28 August 2016, 13:31:12 UTC |
a3ccbeb | Soumith Chintala | 26 August 2016, 23:17:15 UTC | Merge pull request #326 from kmul00/consistentapimaxpool Consistent Max Pool API | 26 August 2016, 23:17:15 UTC |
f8c82e5 | kmul00 | 26 August 2016, 20:09:14 UTC | Consistent Max Pool API renamed: lib/THCUNN/SpatialMaxPooling.cu -> lib/THCUNN/SpatialDilatedMaxPooling.cu modified: lib/THCUNN/SpatialMaxPooling.cu modified: lib/THCUNN/THCUNN.h | 26 August 2016, 20:09:14 UTC |
3c4a48a | Soumith Chintala | 24 August 2016, 20:27:10 UTC | fix Lua 5.2 compat | 24 August 2016, 20:27:10 UTC |
ef2c953 | soumith | 22 August 2016, 18:41:46 UTC | fix critical bug in SpatialConvolution | 22 August 2016, 18:41:46 UTC |
50c63ac | Soumith Chintala | 19 August 2016, 21:11:28 UTC | Merge pull request #323 from apaszke/threshold Make Threshold THCUNN functions more consistent | 19 August 2016, 21:11:28 UTC |
2298e03 | soumith | 18 August 2016, 20:06:57 UTC | updating cmake | 18 August 2016, 20:06:57 UTC |
d3f22ef | Adam Paszke | 18 August 2016, 17:24:04 UTC | Make Threshold THCUNN functions more consistent | 18 August 2016, 18:14:25 UTC |
d0e27d2 | Soumith Chintala | 18 August 2016, 14:25:20 UTC | Merge pull request #322 from apaszke/spatial_conv Accept both 2D and 4D weights in SpatialConvolutionMM | 18 August 2016, 14:25:20 UTC |
f075592 | Adam Paszke | 18 August 2016, 13:58:43 UTC | Accept both 2D and 4D weights in SpatialConvolutionMM | 18 August 2016, 13:58:43 UTC |
25498b7 | kmul00 | 11 August 2016, 07:19:25 UTC | CUDA version of Spatial Dilated Max Pooling modified: lib/THCUNN/SpatialMaxPooling.cu modified: lib/THCUNN/THCUNN.h modified: test.lua | 12 August 2016, 15:00:02 UTC |
3040562 | Soumith Chintala | 12 August 2016, 03:47:49 UTC | Merge pull request #319 from torch/typesfix fixes for multiple cuda types | 12 August 2016, 03:47:50 UTC |
2618f79 | soumith | 12 August 2016, 01:06:05 UTC | fixes for multiple cuda types | 12 August 2016, 01:57:28 UTC |
65568cf | Soumith Chintala | 11 August 2016, 19:40:15 UTC | Merge pull request #318 from apaszke/master Improvements/fixes in THCUNN | 11 August 2016, 19:40:15 UTC |
fdf6fe4 | Adam Paszke | 11 August 2016, 19:37:11 UTC | Fix DistKLDivCriterion gradInput formula | 11 August 2016, 19:37:11 UTC |
8ae08a2 | Adam Paszke | 11 August 2016, 19:29:57 UTC | Use TH_INDEX_BASE in THCUNN | 11 August 2016, 19:31:31 UTC |
1b76671 | Adam Paszke | 11 August 2016, 19:29:41 UTC | Mark optional arguments in THCUNN.h | 11 August 2016, 19:31:14 UTC |
ccba310 | Soumith Chintala | 10 August 2016, 16:51:55 UTC | Merge pull request #316 from apaszke/master Fix THCUNN.h formatting | 10 August 2016, 16:51:55 UTC |
75e4228 | Adam Paszke | 10 August 2016, 16:33:40 UTC | Fix THCUNN.h formatting | 10 August 2016, 16:33:40 UTC |
6409c15 | Soumith Chintala | 05 August 2016, 21:07:07 UTC | Merge pull request #314 from colesbury/bn Fix "invalid configuration" when using very large batch sizes in evaluate mode | 05 August 2016, 21:07:07 UTC |
49fdc1c | Sam Gross | 05 August 2016, 20:44:26 UTC | Fix "invalid configuration" when using very large batch sizes in evaluate mode. Example: ``` bn = nn.BatchNormalization(100):cuda() bn:evaluate() bn:forward(torch.CudaTensor(147000, 100):zero()) cutorch.synchronize() ``` Fixes https://github.com/torch/nn/issues/907 | 05 August 2016, 20:44:26 UTC |
7f27d6a | Soumith Chintala | 04 August 2016, 04:41:05 UTC | Merge pull request #313 from torch/voldilcol Volumetric Dilated Convolution | 04 August 2016, 04:41:05 UTC |
d7ee9c6 | soumith | 04 August 2016, 04:04:52 UTC | Volumetric Dilated Convolution | 04 August 2016, 04:29:30 UTC |
b592695 | Soumith Chintala | 02 August 2016, 15:50:26 UTC | Merge pull request #312 from torch/softfix fix SpatialSoftMax bug and add unit tests | 02 August 2016, 15:50:26 UTC |
0bd7346 | soumith | 02 August 2016, 15:48:48 UTC | fix SpatialSoftMax bug and add unit tests | 02 August 2016, 15:48:48 UTC |
2b31650 | Soumith Chintala | 29 July 2016, 23:39:50 UTC | adding new arch selector | 29 July 2016, 23:39:50 UTC |
8c4e9d5 | Soumith Chintala | 29 July 2016, 05:47:01 UTC | Merge pull request #310 from torch/cutorch-Sgemm-compat gemm -> Sgemm | 29 July 2016, 05:47:01 UTC |
84d3746 | Sergey Zagoruyko | 29 June 2016, 16:06:10 UTC | gemm -> Sgemm | 29 July 2016, 05:45:08 UTC |
208e1fa | Soumith Chintala | 29 July 2016, 05:08:40 UTC | add versioning script | 29 July 2016, 05:08:40 UTC |
e856c86 | Soumith Chintala | 29 July 2016, 05:04:49 UTC | Cutting version 1.0-0 | 29 July 2016, 05:04:49 UTC |
387ff4f | Soumith Chintala | 27 July 2016, 18:25:39 UTC | Merge pull request #309 from paulineluc14/master Adding SpatialUpSamplingBilinear | 27 July 2016, 18:25:39 UTC |
a6bb463 | Pauline Luc | 27 July 2016, 09:12:05 UTC | Adding SpatialUpSamplingBilinear | 27 July 2016, 09:12:05 UTC |
9634a07 | Soumith Chintala | 25 July 2016, 14:00:56 UTC | Merge pull request #308 from mys007/classnllbounds NLL Criteria: weight bound checking | 25 July 2016, 14:00:56 UTC |
3f14ce2 | Martin Simonovsky | 25 July 2016, 10:03:04 UTC | added bound checks for weights | 25 July 2016, 10:03:04 UTC |
2cd59e1 | Soumith Chintala | 07 July 2016, 23:19:19 UTC | Merge pull request #305 from colesbury/batchnorm Fix BatchNormalization warpSum for pre-Kepler cards | 07 July 2016, 23:19:19 UTC |
136b547 | Sam Gross | 07 July 2016, 23:12:58 UTC | Fix BatchNormalization warpSum for pre-Kepler cards Fixes #298 | 07 July 2016, 23:13:12 UTC |
86d9f56 | Soumith Chintala | 04 July 2016, 22:26:51 UTC | Merge pull request #303 from torch/powfix fix for std::pow ambiguousness | 04 July 2016, 22:26:51 UTC |
0732014 | soumith | 04 July 2016, 22:27:05 UTC | fix for std::pow ambiguousness | 04 July 2016, 22:27:05 UTC |
d02176d | Soumith Chintala | 03 July 2016, 12:39:23 UTC | Merge pull request #282 from nicholas-leonard/GPU nn.GPU | 03 July 2016, 12:39:23 UTC |
d642e59 | nicholas-leonard | 02 July 2016, 20:30:58 UTC | nn.GPU unit test | 02 July 2016, 20:31:06 UTC |
0e7f438 | Soumith Chintala | 23 June 2016, 18:57:27 UTC | Merge pull request #301 from PraveerSINGH/SpatialFullConvolution-noBias Add noBias for nn.SpatialFullConvolution | 23 June 2016, 18:57:27 UTC |
b80facc | PraveerSINGH | 23 June 2016, 13:22:39 UTC | nobias in spatial full conv | 23 June 2016, 13:22:39 UTC |
67c87ef | Soumith Chintala | 18 June 2016, 15:44:16 UTC | Merge pull request #300 from jonathantompson/volpad Added VolumetricReplicationPadding. | 18 June 2016, 15:44:16 UTC |
e5181dc | Soumith Chintala | 18 June 2016, 15:42:51 UTC | Merge pull request #299 from szagoruyko/inplace-hardtanh inplace hardtanh, remove relu6 | 18 June 2016, 15:42:51 UTC |