https://github.com/torch/cunn

8b64c92 Use THCIndexTensors more generally. 08 November 2016, 21:01:06 UTC
be236c8 Use index tensors for SpatialAdaptiveMaxPooling indices. 08 November 2016, 21:01:06 UTC
6994ae7 Add generic support for SpatialMaxUnpooling. 08 November 2016, 21:01:05 UTC
30ccba8 Fix tests 08 November 2016, 21:01:05 UTC
0c0e3d8 Add generic support for SpatialMaxPooling. Also fix tests for SpatialDilatedMaxPooling. 08 November 2016, 21:01:05 UTC
3141686 Get SpatialDilatedMaxPooling generic working with long tensors as index. Does as much math as possible in accreal to try to suss out why CudaHalfTensor fails. 08 November 2016, 21:01:05 UTC
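For context, a hedged sketch of what the generic and index-tensor work above enables at the Lua level; it assumes a cutorch build with half support and that the pooling module exposes its argmax tensor as `pool.indices` (a detail to verify against the module source):
```
require 'cunn'
-- run max pooling in half precision; per the commits above, the kernel does
-- its math in accreal (float, for half) and stores argmax positions in an
-- index (long) tensor type rather than in the data type
local pool = nn.SpatialDilatedMaxPooling(2, 2, 2, 2, 0, 0, 2, 2)
pool = pool:type('torch.CudaHalfTensor')
local input = torch.randn(1, 4, 16, 16):type('torch.CudaHalfTensor')
local output = pool:forward(input)
print(pool.indices and pool.indices:type())
```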
e3d7d12 Add generic support for SpatialDilatedMaxPooling. 08 November 2016, 21:01:05 UTC
3c89443 Add generic support for SpatialClassNLLCriterion. 08 November 2016, 21:01:05 UTC
c08d781 Remove fastExpIfAvail and benchmarking from functional tests. Also fix broken IFNDEF and test whitespace. 08 November 2016, 21:01:05 UTC
5d0c877 Reorganize THCHalfAutoNumerics. 08 November 2016, 21:01:05 UTC
d892d1f Iterate pointwise tests over all supported tensor types. 08 November 2016, 21:01:05 UTC
09acb86 Add generic support for RReLU. 08 November 2016, 21:01:05 UTC
c6f67e1 Add generic support for PReLU. This is the first instance of functions that take a lua number but are not reals in C. So, instead of automatically converting lua numbers in the half case, we parse the function definitions to find the argument positions to convert. 08 November 2016, 21:01:05 UTC
76232cb fix logsoftmax 08 November 2016, 21:01:05 UTC
e65e1cf Add generic support for LogSoftMax. 08 November 2016, 21:01:05 UTC
63b9beb Add generic support for SoftMax. Math is done at accreal precision (e.g. for half, math is done at float precision). Originally code called __expf, which doesn't have a double equivalent; we call exp instead of converting down. 08 November 2016, 21:01:05 UTC
e30f1b4 Add generic support for ELU. 08 November 2016, 21:01:05 UTC
c55e4a9 Add generic support for SoftShrink. 08 November 2016, 21:01:05 UTC
788ee5a Add generic support for Square. Math is (arbitrarily?) done at double precision to keep the intent of existing code. 08 November 2016, 21:01:05 UTC
cffe53a Add generic support for Sqrt. 08 November 2016, 21:01:05 UTC
384da3a Add generic support for LeakyReLU. 08 November 2016, 21:01:05 UTC
fb7e5af Add generic support for Threshold. 08 November 2016, 21:01:05 UTC
c7c91f4 Add generic support for LogSigmoid. This has the same logic as Sigmoid; i.e. math is done at double precision and then stored back at desired precision. 08 November 2016, 21:01:05 UTC
0324b96 Add generic support for Sigmoid. This maintains the existing logic of doing the math in double precision and converting back to the intended type (previously: just float). We do the same for half here, although it is unclear whether the original double-precision math was intentional; for half, an open question is whether to go up only to float or all the way to double. 08 November 2016, 21:01:05 UTC
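As a quick way to see the precision contract described above, a sketch comparing CUDA Sigmoid output against a double-precision CPU reference (assumes a working cunn install; any residual difference should be float-level rounding from the final store):
```
require 'cunn'
local x = torch.randn(1000)                        -- DoubleTensor by default
local ref = x:clone():mul(-1):exp():add(1):pow(-1) -- 1 / (1 + exp(-x)) in double
local out = nn.Sigmoid():cuda():forward(x:cuda()):double()
print((out - ref):abs():max())                     -- expect ~1e-7, not ~1e-16
```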
82a3664 Add generic support for Abs. 08 November 2016, 21:01:05 UTC
ff17570 Add generic support for HardTanh. 08 November 2016, 21:01:05 UTC
b8fe31f Add generic support for Tanh. 08 November 2016, 21:01:05 UTC
69491a1 Add generic support for SoftPlus. Adds the ability to "genericize" cunn modules that can exist simultaneously with non-generic modules (i.e. modules can be genericized one at a time). Allowing both generic and non-generic modules simultaneously requires some extra code that can be removed once every module is genericized. Also genericizes SoftPlus in this way. 08 November 2016, 21:01:05 UTC
aa256bc Merge pull request #364 from SYSTRAN/master make the inputGpu emptiness test symmetric between forward and backward 02 November 2016, 23:44:58 UTC
98b777b make the inputGpu emptiness test symmetric between forward and backward 02 November 2016, 08:33:21 UTC
64224a6 Add sameGPU checks to BatchNormalization (#361) 25 October 2016, 19:19:03 UTC
b612429 gcc 5 + cuda < 8 workaround improved 17 October 2016, 16:46:21 UTC
ae64752 Merge pull request #353 from torch/upsamplingbilinearfix fixes to upsampling bilinear API 17 October 2016, 04:46:26 UTC
5870d13 fixes to upsampling bilinear API 17 October 2016, 04:30:25 UTC
9cabae0 Merge pull request #351 from torch/revert-350-master Revert "change to work on windows && replace long with ptrdiff_t" 13 October 2016, 22:09:43 UTC
14c591d Revert "change to work on windows && replace long with ptrdiff_t" 13 October 2016, 22:09:34 UTC
60a753a Merge pull request #350 from BTNC/master change to work on windows && replace long with ptrdiff_t 13 October 2016, 16:25:59 UTC
4247d64 change to work on windows && replace long with ptrdiff_t 13 October 2016, 15:44:28 UTC
e388ee3 Merge pull request #338 from nitsky/spatial_logsoftmax SpatialLogSoftMax 07 October 2016, 14:36:40 UTC
612dda4 Merge pull request #343 from colesbury/master Fixes for https://github.com/torch/cutorch/pull/519 30 September 2016, 16:12:32 UTC
52f7419 Fixes for https://github.com/torch/cutorch/pull/519 29 September 2016, 23:19:41 UTC
9a3ffab Fix SpatialLogSoftMax memory leak and code cleanup 27 September 2016, 15:16:31 UTC
5aa68bb Merge pull request #339 from torch/classnllfix making ClassNLLCriterion targets consistent between cpu and cuda 27 September 2016, 00:50:42 UTC
9b3be1f making ClassNLLCriterion targets consistent between cpu and cuda 27 September 2016, 00:48:17 UTC
8e6bfc5 Update SpatialLogSoftMax kernel to use cuda dimensions 26 September 2016, 16:39:56 UTC
d85eeca Update LogSoftMax to work in spatial domain 21 September 2016, 15:11:59 UTC
10e9b68 Merge pull request #334 from apaszke/header_fix Mark BCECriterion weights as optional in THCUNN.h 14 September 2016, 14:53:29 UTC
a8a1c77 Mark BCECriterion weights as optional in THCUNN.h 14 September 2016, 14:36:59 UTC
ccc2309 moving arch detection into THCUNN 13 September 2016, 04:09:25 UTC
4a04229 BCECriterion THCUNN + Weights (#331) BCE Criterion CUDA implementation 08 September 2016, 20:33:53 UTC
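A small usage sketch for the weighted criterion added here; the elementwise-weights shape and batch broadcasting follow nn.BCECriterion and should be treated as an assumption:
```
require 'cunn'
local w = torch.CudaTensor(10):fill(0.5)                   -- per-element weights
local crit = nn.BCECriterion(w):cuda()
local input  = torch.CudaTensor(4, 10):uniform(0.01, 0.99) -- probabilities
local target = torch.CudaTensor(4, 10):bernoulli()
print(crit:forward(input, target))
```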
ecc4141 Merge pull request #332 from vivekn/dpt_fix Fix issue with flatten parameters in DataParallelTable 06 September 2016, 17:52:21 UTC
e1d9cbc Fix issue with flatten parameters in DataParallelTable 06 September 2016, 17:42:33 UTC
cdac0c3 Merge pull request #329 from gchanan/hardtanh inplace is reversed in HardTanh:backward. 01 September 2016, 02:36:26 UTC
dbbb218 inplace is reversed in HardTanh:backward. Fixes torch7 issue #734, "Inconsistent behavior of nn.Clamp in CPU and GPU modes". Adds a simple test that gradOutput equals gradInput after backward when inplace is set. It is possible to construct a test with inplace HardTanh where forward+backward yields different results for nn vs cunn, but this appears to be due to the inclusive vs exclusive bounds used for inplace vs non-inplace, respectively, so a more direct test is preferred. 01 September 2016, 02:31:41 UTC
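A sketch of the kind of test described above, assuming nn.HardTanh's optional third constructor argument enables the in-place path:
```
require 'cunn'
local m = nn.HardTanh(-1, 1, true):cuda()        -- in-place variant
local input = torch.CudaTensor(100):uniform(-2, 2)
local output = m:forward(input:clone())
local gradOutput = torch.CudaTensor(100):fill(1)
local gradInput = m:backward(input, gradOutput)
-- in-place backward writes the result into gradOutput's storage,
-- so the two tensors should now be identical
print((gradInput - gradOutput):abs():max())      -- expect 0
```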
d652966 Update README.md 27 August 2016, 18:02:44 UTC
b6e4a61 Merge pull request #327 from kmul00/voldilmaxpool VolumetricDilatedMaxPooling 27 August 2016, 14:42:05 UTC
ec16b36 VolumetricDilatedMaxPooling (modified: lib/THCUNN/THCUNN.h; copied: lib/THCUNN/VolumetricMaxPooling.cu -> lib/THCUNN/VolumetricDilatedMaxPooling.cu; modified: lib/THCUNN/VolumetricMaxPooling.cu; modified: test.lua) 28 August 2016, 13:31:12 UTC
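For reference, a minimal usage sketch of the new module; the argument order is assumed to mirror nn.VolumetricMaxPooling with trailing dilation values:
```
require 'cunn'
-- kT,kW,kH, dT,dW,dH, padT,padW,padH, dilationT,dilationW,dilationH
local pool = nn.VolumetricDilatedMaxPooling(2,2,2, 2,2,2, 0,0,0, 2,2,2):cuda()
local out = pool:forward(torch.CudaTensor(1, 1, 16, 16, 16):uniform())
print(out:size())  -- 1x1x7x7x7 with these settings
```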
a3ccbeb Merge pull request #326 from kmul00/consistentapimaxpool Consistent Max Pool API 26 August 2016, 23:17:15 UTC
f8c82e5 Consistent Max Pool API (renamed: lib/THCUNN/SpatialMaxPooling.cu -> lib/THCUNN/SpatialDilatedMaxPooling.cu; modified: lib/THCUNN/SpatialMaxPooling.cu; modified: lib/THCUNN/THCUNN.h) 26 August 2016, 20:09:14 UTC
3c4a48a fix Lua 5.2 compat 24 August 2016, 20:27:10 UTC
ef2c953 fix critical bug in SpatialConvolution 22 August 2016, 18:41:46 UTC
50c63ac Merge pull request #323 from apaszke/threshold Make Threshold THCUNN functions more consistent 19 August 2016, 21:11:28 UTC
2298e03 updating cmake 18 August 2016, 20:06:57 UTC
d3f22ef Make Threshold THCUNN functions more consistent 18 August 2016, 18:14:25 UTC
d0e27d2 Merge pull request #322 from apaszke/spatial_conv Accept both 2D and 4D weights in SpatialConvolutionMM 18 August 2016, 14:25:20 UTC
f075592 Accept both 2D and 4D weights in SpatialConvolutionMM 18 August 2016, 13:58:43 UTC
25498b7 CUDA version of Spatial Dilated Max Pooling (modified: lib/THCUNN/SpatialMaxPooling.cu; modified: lib/THCUNN/THCUNN.h; modified: test.lua) 12 August 2016, 15:00:02 UTC
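A minimal sketch of the module this commit introduces, assuming the nn-style constructor (kW, kH, dW, dH, padW, padH, dilationW, dilationH):
```
require 'cunn'
-- dilation = 1 reduces to ordinary max pooling; dilation > 1 samples a
-- spread-out window without enlarging the kernel itself
local pool = nn.SpatialDilatedMaxPooling(3, 3, 1, 1, 0, 0, 2, 2):cuda()
local out = pool:forward(torch.CudaTensor(1, 1, 9, 9):uniform())
print(out:size())  -- 1x1x5x5: effective kernel extent is 2*(3-1)+1 = 5
```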
3040562 Merge pull request #319 from torch/typesfix fixes for multiple cuda types 12 August 2016, 03:47:50 UTC
2618f79 fixes for multiple cuda types 12 August 2016, 01:57:28 UTC
65568cf Merge pull request #318 from apaszke/master Improvements/fixes in THCUNN 11 August 2016, 19:40:15 UTC
fdf6fe4 Fix DistKLDivCriterion gradInput formula 11 August 2016, 19:37:11 UTC
8ae08a2 Use TH_INDEX_BASE in THCUNN 11 August 2016, 19:31:31 UTC
1b76671 Mark optional arguments in THCUNN.h 11 August 2016, 19:31:14 UTC
ccba310 Merge pull request #316 from apaszke/master Fix THCUNN.h formatting 10 August 2016, 16:51:55 UTC
75e4228 Fix THCUNN.h formatting 10 August 2016, 16:33:40 UTC
6409c15 Merge pull request #314 from colesbury/bn Fix "invalid configuration" when using very large batch sizes in evaluate mode 05 August 2016, 21:07:07 UTC
49fdc1c Fix "invalid configuration" when using very large batch sizes in evaluate mode. Example:
```
bn = nn.BatchNormalization(100):cuda()
bn:evaluate()
bn:forward(torch.CudaTensor(147000, 100):zero())
cutorch.synchronize()
```
Fixes https://github.com/torch/nn/issues/907 05 August 2016, 20:44:26 UTC
7f27d6a Merge pull request #313 from torch/voldilcol Volumetric Dilated Convolution 04 August 2016, 04:41:05 UTC
d7ee9c6 Volumetric Dilated Convolution 04 August 2016, 04:29:30 UTC
b592695 Merge pull request #312 from torch/softfix fix SpatialSoftMax bug and add unit tests 02 August 2016, 15:50:26 UTC
0bd7346 fix SpatialSoftMax bug and add unit tests 02 August 2016, 15:48:48 UTC
2b31650 adding new arch selector 29 July 2016, 23:39:50 UTC
8c4e9d5 Merge pull request #310 from torch/cutorch-Sgemm-compat gemm -> Sgemm 29 July 2016, 05:47:01 UTC
84d3746 gemm -> Sgemm 29 July 2016, 05:45:08 UTC
208e1fa add versioning script 29 July 2016, 05:08:40 UTC
e856c86 Cutting version 1.0-0 29 July 2016, 05:04:49 UTC
387ff4f Merge pull request #309 from paulineluc14/master Adding SpatialUpSamplingBilinear 27 July 2016, 18:25:39 UTC
a6bb463 Adding SpatialUpSamplingBilinear 27 July 2016, 09:12:05 UTC
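A minimal usage sketch, assuming the integer scale-factor constructor:
```
require 'cunn'
local up = nn.SpatialUpSamplingBilinear(2):cuda()  -- upscale by 2x
local y = up:forward(torch.CudaTensor(1, 3, 8, 8):uniform())
print(y:size())  -- 1x3x16x16, bilinearly interpolated
```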
9634a07 Merge pull request #308 from mys007/classnllbounds NLL Criteria: weight bound checking 25 July 2016, 14:00:56 UTC
3f14ce2 added bound checks for weights 25 July 2016, 10:03:04 UTC
2cd59e1 Merge pull request #305 from colesbury/batchnorm Fix BatchNormalization warpSum for pre-Kepler cards 07 July 2016, 23:19:19 UTC
136b547 Fix BatchNormalization warpSum for pre-Kepler cards Fixes #298 07 July 2016, 23:13:12 UTC
86d9f56 Merge pull request #303 from torch/powfix fix for std::pow ambiguity 04 July 2016, 22:26:51 UTC
0732014 fix for std::pow ambiguity 04 July 2016, 22:27:05 UTC
d02176d Merge pull request #282 from nicholas-leonard/GPU nn.GPU 03 July 2016, 12:39:23 UTC
d642e59 nn.GPU unit test 02 July 2016, 20:31:06 UTC
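A sketch of what nn.GPU enables: pinning sub-modules to specific devices within one network. It assumes two visible GPUs and that :cuda() places each wrapped module on its assigned device:
```
require 'cunn'
-- each wrapped module runs on its own device; nn.GPU moves activations
-- (and gradients) between devices as data flows through the network
local net = nn.Sequential()
   :add(nn.GPU(nn.Linear(100, 50), 1))
   :add(nn.GPU(nn.Linear(50, 10), 2))
   :cuda()
local out = net:forward(torch.CudaTensor(8, 100):uniform())
```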
0e7f438 Merge pull request #301 from PraveerSINGH/SpatialFullConvolution-noBias Add noBias for nn.SpatialFullConvolution 23 June 2016, 18:57:27 UTC
b80facc nobias in spatial full conv 23 June 2016, 13:22:39 UTC
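A sketch of the noBias() usage this adds for the full (transposed) convolution, mirroring noBias() on the other convolution modules:
```
require 'cunn'
local deconv = nn.SpatialFullConvolution(16, 3, 4, 4, 2, 2, 1, 1):noBias():cuda()
assert(deconv.bias == nil)  -- bias and gradBias are removed entirely
```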
67c87ef Merge pull request #300 from jonathantompson/volpad Added VolumetricReplicationPadding. 18 June 2016, 15:44:16 UTC
e5181dc Merge pull request #299 from szagoruyko/inplace-hardtanh inplace hardtanh, remove relu6 18 June 2016, 15:42:51 UTC