https://github.com/BVLC/caffe

737ea5e Merge pull request #1112 from BVLC/next Next: release candidate 19 September 2014, 05:22:02 UTC
89fd7da relax precision of gradient-based solver tests 19 September 2014, 05:08:11 UTC
403b56b [example] groom siamese notebook 19 September 2014, 04:38:43 UTC
7c3c089 Merge pull request #959 from nickcarlevaris/contrastive_loss Add contrastive loss layer, tests, and a siamese network example 19 September 2014, 04:37:31 UTC
e146423 [docs] order ipython notebooks 19 September 2014, 04:25:10 UTC
a920a14 [example] resurrect imagenet training scripts 19 September 2014, 03:24:19 UTC
7a507d6 [model zoo] ignore models -- only for reference or zoo 19 September 2014, 03:15:50 UTC
4e02e06 [model zoo] download from gist grooming - invoke by shell - default download dir to models/ - save to flat dir of owner-gist instead of nested owner/gist 19 September 2014, 03:06:24 UTC
58dce0e Merge pull request #1110 from sergeyk/dev [model zoo] download gist script 18 September 2014, 23:27:52 UTC
08d7f8c [model zoo] download gist script 18 September 2014, 23:27:44 UTC
8008533 Merge pull request #594 from longjon/layer-reshaping On-the-fly net resizing, without reallocation (where possible) 18 September 2014, 20:32:35 UTC
d833ab3 check that LRN's local_size is odd as the current implementation requires 18 September 2014, 20:17:43 UTC
0b5e11d [docs] clarify the use of Blob::Reshape a bit 18 September 2014, 20:17:43 UTC
fdf2de1 [pycaffe] expose Net::Reshape 18 September 2014, 20:17:43 UTC
490077e add Net::Reshape for only reshaping Note that it is not normally necessary to call this function when using reshapable nets, but sometimes it can be useful to compute the sizes of intermediate layers without waiting for the forward pass. 18 September 2014, 20:17:43 UTC
24350a6 include Reshape in caffe time Since we are now calling Reshape in the Forward pass, it's only fair to include it when timing. Reshape calls should normally be four or so orders of magnitude faster than Forward calls; this change also makes it easy to notice a mistake that causes something slow to happen in Reshape. 18 September 2014, 20:17:43 UTC
db5bb15 test net reshaping 18 September 2014, 20:17:43 UTC
4f1b668 default LayerSetUp to no-op instead of NOT_IMPLEMENTED Now that top blobs are set up in Layer::Reshape, it's Reshape that is mandatory, and simple layers often don't need to implement LayerSetUp. Reshape is (already) declared abstract, so not implementing it is a compile-time error. 18 September 2014, 20:17:43 UTC
d2de2ee call Reshape in Layer::SetUp Strictly speaking, Reshape doesn't need to be called until the first Forward call; however, much existing code (especially tests) assumes that top blobs will be set up in SetUp, so we may as well do it there. 18 September 2014, 20:17:43 UTC
6c63b8c split off Reshape for vision layers Note that we are dropping some checks from LRN layer. However, these checks are fairly redundant; something is very wrong if these layers are producing top blobs that are different sizes than their inputs, and tests are the right place to catch that. The thing that really should be checked (that isn't) is that local_size needs to be odd; this will be added in a future commit. 18 September 2014, 20:17:43 UTC
07d6246 split off Reshape for common layers 18 September 2014, 19:41:46 UTC
256209d split off Reshape for neuron layers 18 September 2014, 19:41:46 UTC
62bc0a8 split off Reshape for loss layers 18 September 2014, 19:41:46 UTC
4b34c72 split off Reshape for data layers 18 September 2014, 19:41:46 UTC
d7e8f2a separate setConvolutionDesc from createConvolutionDesc 18 September 2014, 19:41:46 UTC
5ce519c separate setTensor4dDesc from createTensor4dDesc This will make it possible to add reshaping to cuDNN layers. 18 September 2014, 19:41:46 UTC
87de5ed enable reshaping in the forward pass Note that calling Reshape when no reshape is necessary should be effectively a no-op, so this is not a performance regression. 18 September 2014, 19:41:45 UTC
4fff966 don't reallocate blobs when shrinking memory use This allows nets to be reshaped very quickly (essentially for free) as long as sufficient memory has been allocated. Calling Blob::Reshape in order to free up memory becomes impossible; however, this is not a normal use case (and deleting blobs does free memory). 18 September 2014, 19:41:45 UTC
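Commit 4fff966 describes the allocation strategy that makes reshaping nearly free: a blob keeps its largest-ever allocation and only reallocates when a reshape grows past that capacity, so shrinking reuses the existing buffer. A self-contained sketch of that idea (illustrative Python, not Caffe's actual Blob code):

```python
from functools import reduce
from operator import mul

class CapacityBlob:
    """Sketch: reallocate only when the requested size exceeds capacity."""
    def __init__(self):
        self.shape = ()
        self.capacity = 0   # number of elements currently allocated
        self.data = []

    def reshape(self, shape):
        count = reduce(mul, shape, 1)
        if count > self.capacity:
            # Growing past capacity is the only case that allocates.
            self.data = [0.0] * count
            self.capacity = count
        # Shrinking (or staying the same size) keeps the existing buffer,
        # so repeated reshapes within capacity are essentially free.
        self.shape = tuple(shape)
```

As the commit notes, the trade-off is that reshaping smaller no longer releases memory; freeing requires deleting the blob itself.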
3194bb1 add abstract Layer::Reshape, and document the new method protocol 18 September 2014, 19:41:45 UTC
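The commits above establish the new layer method protocol: SetUp runs once and ends by calling Reshape; Reshape sizes the top blobs from the bottom blobs and may be called again whenever input shapes change; LayerSetUp defaults to a no-op while Reshape is abstract and mandatory. A minimal sketch of that protocol (the class and method names here are illustrative, not pycaffe's API):

```python
class Blob:
    """Illustrative blob holding only a shape."""
    def __init__(self, shape=()):
        self.shape = tuple(shape)

    def reshape(self, shape):
        self.shape = tuple(shape)

class Layer:
    """Sketch of the SetUp/Reshape protocol described in these commits."""
    def setup(self, bottom, top):
        # One-time setup; per the protocol, SetUp ends by calling Reshape
        # so top blobs are already sized when setup returns.
        self.layer_setup(bottom, top)
        self.reshape(bottom, top)

    def layer_setup(self, bottom, top):
        pass  # default no-op, like LayerSetUp after 4f1b668

    def reshape(self, bottom, top):
        raise NotImplementedError  # mandatory, like the abstract Reshape

class Elementwise(Layer):
    def reshape(self, bottom, top):
        # Elementwise layers shape each top like its bottom.
        for b, t in zip(bottom, top):
            t.reshape(b.shape)
```

Calling `reshape` again with a differently shaped bottom is exactly the on-the-fly resizing the forward pass now performs.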
69bf6b5 use Blob directly instead of shared_ptr for EltwiseLayer::max_idx_ This is in keeping with #742. 18 September 2014, 19:41:45 UTC
8dac339 Merge pull request #1104 from shelhamer/conv-comments-tests Document and Test Convolution 18 September 2014, 16:45:31 UTC
c3a69b7 Merge pull request #1100 from cNikolaou/issue1099 Polish mnist + cifar10 examples. 18 September 2014, 16:41:34 UTC
9a7f0a0 [docs] lenet grooming 18 September 2014, 16:41:19 UTC
18ca362 [docs] comment ConvolutionLayer 18 September 2014, 16:20:45 UTC
355af16 test convolution by random weights for robustness 18 September 2014, 16:20:42 UTC
e4d48c5 test convolution against explicit reference implementation To thoroughly check convolution, the output is compared against a reference implementation by explicit looping. Simple and group convolution by the Caffe and cuDNN engines are checked against the reference. 18 September 2014, 16:19:31 UTC
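The testing strategy in e4d48c5, comparing a fast convolution against a naive explicit-loop reference, can be sketched as follows (pure Python, valid-mode 1-D correlation for brevity; Caffe's actual test covers 2-D, stride, padding, and groups):

```python
def conv1d_reference(signal, kernel):
    """Naive explicit-loop 'valid' correlation used as ground truth."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def conv1d_fast(signal, kernel):
    # Stand-in for the implementation under test; here just a second
    # formulation (dot products over slices) so the check has a subject.
    k = len(kernel)
    return [sum(a * b for a, b in zip(signal[i:i + k], kernel))
            for i in range(len(signal) - k + 1)]

def check_against_reference(signal, kernel, tol=1e-12):
    """Compare the implementation under test to the loop reference."""
    ref = conv1d_reference(signal, kernel)
    out = conv1d_fast(signal, kernel)
    return len(ref) == len(out) and all(
        abs(r - o) <= tol for r, o in zip(ref, out))
```

Random inputs (as in 355af16, which tests with random weights) make such a check far more robust than a few hand-picked cases.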
1096dde Updated mnist/readme.md file with additional information. 18 September 2014, 09:49:17 UTC
3fc22b3 Update readme.md files of cifar10 and mnist examples. Fixed broken links. 17 September 2014, 18:36:06 UTC
a77ca76 Merge pull request #1093 from CellScope/io-cant-load-error-msg [fix] Move file reading error checking closer to actual file read command 16 September 2014, 23:01:13 UTC
aecab61 [Bugfix] Move error checking closer to file read Previously, if (height > 0 && width > 0) was true, the cv::resize() function would be called before cv_img_origin was confirmed valid; if the image file/filename was not valid, this caused an opencv assert error like this: terminate called after throwing an instance of 'cv::Exception' what(): /tmp/A3p0_4200_32550/batserve/A3p0/glnxa64/OpenCV/modules/imgproc/src/imgwarp.cpp:1725: error: (-215) ssize.area() > 0 in function resize 16 September 2014, 22:48:26 UTC
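The fix in aecab61 is about ordering: validate the decoded image immediately after the read, before resizing, so a bad path fails with a clear message instead of an assert inside cv::resize. The pattern, sketched without OpenCV (`reader` is a stand-in for cv::imread, and the list-of-lists "image" and nearest-neighbour resize are purely illustrative):

```python
def load_and_resize(path, reader, height=0, width=0):
    """Read an image via `reader`, check validity immediately, then resize."""
    img = reader(path)
    # The fix: check right after the read, before any use of the data.
    # Previously the resize could run first and crash on an invalid image.
    if img is None:
        raise IOError("Could not open or find file " + path)
    if height > 0 and width > 0:
        # Crude nearest-neighbour resize; img is known valid at this point.
        img = [[img[r * len(img) // height][c * len(img[0]) // width]
                for c in range(width)] for r in range(height)]
    return img
```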
0fb2faf Merge pull request #1088 from shelhamer/fix-solverstate-filename Fix snapshot filename for solver states 16 September 2014, 22:38:05 UTC
4b1f53c Merge pull request #1091 from ronghanghu/fix_window_data_layer set up datum size for WindowDataLayer 16 September 2014, 17:33:51 UTC
06d7310 set up datum size for WindowDataLayer 16 September 2014, 16:56:11 UTC
4e6d977 [fix] snapshot model weights as .caffemodel, solver state as .solverstate 16 September 2014, 15:19:02 UTC
0120476 [example] update paths in net surgery 16 September 2014, 15:13:17 UTC
1f4e039 Merge pull request #1083 from longjon/fix-solver-gpu-init Fix solver GPU initialization order (e.g., training with cuDNN on non-default device) 15 September 2014, 22:59:17 UTC
bbd166e fix caffe train GPU initialization Previously, the solver constructed nets before the caffe train tool read the --gpu flag, which can cause errors due to LayerSetUp executing on the wrong device (breaking cuDNN, for example). 15 September 2014, 21:15:58 UTC
2da6bc9 Merge pull request #1077 from bhack/glog_ppa Add ppa for gflags and glog 14 September 2014, 22:14:46 UTC
aa10e72 Merge pull request #1076 from kloudkl/cuda-6.5 Update CUDA to version 6.5 in the Travis install script 14 September 2014, 21:18:52 UTC
8de9ab0 Fix a little typo 14 September 2014, 20:19:51 UTC
503ac0b Fix comments 14 September 2014, 19:16:38 UTC
e294f6a fix spelling error in caffe.proto 14 September 2014, 01:30:52 UTC
d54846c fix out-of-date next ID comment for SolverParameter 14 September 2014, 01:30:32 UTC
431a516 Update CUDA to version 6.5 in the Travis install script 13 September 2014, 05:11:27 UTC
3a69e22 Add ppa for gflags and glog 12 September 2014, 18:05:44 UTC
c69b3b4 Merge pull request #1051 from jeffdonahue/travis-red-errors restore "red X" build failures in Travis 11 September 2014, 15:45:09 UTC
f036ef4 add -fPIC flag to CMake build 11 September 2014, 15:27:46 UTC
4ce6e43 restore "red X" build failures in Travis 11 September 2014, 15:13:28 UTC
15538f8 Merge pull request #1067 from bhack/lmdb Get lmdb from openldap 11 September 2014, 05:00:33 UTC
be9c5bd Fix lmdb travis with openldap 10 September 2014, 22:59:24 UTC
133b4db Merge pull request #1053 from jeffdonahue/to3i-elem_max_layer rebase and fixup #688 from @to3i: elementwise max 10 September 2014, 13:49:45 UTC
d149c9a Added contrastive loss layer, associated tests, and a siamese network example using shared weights and the contrastive loss. 08 September 2014, 20:14:58 UTC
6bda406 lint & reduce gradient check stepsize to pass checks 08 September 2014, 16:05:17 UTC
761c815 Implemented elementwise max layer 08 September 2014, 15:41:27 UTC
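The elementwise max from 761c815 (and the max_idx_ blob touched in 69bf6b5) takes a per-element maximum over its bottom blobs and records which bottom won each position, so the backward pass can route each gradient only to the winner. An illustrative sketch on flat Python lists, not Caffe's actual EltwiseLayer code:

```python
def eltwise_max_forward(bottoms):
    """Per-element max over equal-length inputs, plus winner indices."""
    top, max_idx = [], []
    for vals in zip(*bottoms):
        best = max(range(len(vals)), key=lambda i: vals[i])
        top.append(vals[best])
        max_idx.append(best)   # analogous to EltwiseLayer::max_idx_
    return top, max_idx

def eltwise_max_backward(top_diff, max_idx, num_bottoms):
    """Route each top gradient to the bottom that produced the max;
    the other bottoms receive zero gradient at that position."""
    diffs = [[0.0] * len(top_diff) for _ in range(num_bottoms)]
    for j, (d, i) in enumerate(zip(top_diff, max_idx)):
        diffs[i][j] = d
    return diffs
```

The kink at ties is also why 6bda406 reduces the gradient-check stepsize for this layer.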
fc921bf Back-merge to dev for slides 08 September 2014, 10:47:47 UTC
7353e3d Merge pull request #1052 from shelhamer/caffe-presentation Caffe tutorial slides 08 September 2014, 10:46:21 UTC
64c8dcb [docs] replace intro slides with caffe tutorial 08 September 2014, 10:44:21 UTC
63bad31 Revert "call __signbit for CUDA >= 6.5 implementation" -- doesn't compile on OSX w/ CUDA 6.5 This reverts commit 8819f5953b903ec8b48e541271737e89a2cd24e6. 08 September 2014, 09:02:31 UTC
8cfd587 Merge pull request #1050 from jeffdonahue/linecount-more linecount counts more dirs than just src/ 08 September 2014, 08:49:06 UTC
e855bb9 Merge pull request #1044 from jeffdonahue/no-tmpnam change uses of tmpnam to mkstemp/mkdtemp 08 September 2014, 08:48:49 UTC
2d88103 linecount counts more dirs than just src/ 08 September 2014, 08:42:55 UTC
99c4ed5 [lint] cuDNN conv declaration 08 September 2014, 08:03:55 UTC
3bafe2f Merge pull request #1046 from shelhamer/cudnn cuDNN acceleration 08 September 2014, 07:57:44 UTC
ae85996 Merge pull request #1049 from niuzhiheng/dev Fixed CMake script of FindOpenBLAS. 08 September 2014, 07:28:49 UTC
68e2657 Fixed CMake script of FindOpenBLAS. 08 September 2014, 06:46:43 UTC
adaad52 Merge pull request #1045 from akosiorek/origin/dev Fixed CMake building test objects multiple times 08 September 2014, 06:41:38 UTC
5ab3d97 Merge pull request #1048 from jyegerlehner/conv_layer-init-weight Conv layer: fix crash by setting weight pointer 08 September 2014, 05:34:46 UTC
a739cda Fix more lint. 08 September 2014, 04:10:33 UTC
396da71 Repair crash in conv_layer due to weight pointer being NULL. 08 September 2014, 02:52:11 UTC
359197b [docs] include cuDNN in installation and performance reference 07 September 2014, 17:56:45 UTC
c65d5a0 report cuDNN error string 07 September 2014, 17:56:45 UTC
9e3d86f CUDNN_CHECK 07 September 2014, 17:56:42 UTC
84bd1f5 strategize cuDNN softmax 07 September 2014, 17:56:15 UTC
14a9198 strategize cuDNN activations: ReLU, Sigmoid, TanH 07 September 2014, 17:25:23 UTC
00f5fa6 strategize cuDNN pooling 07 September 2014, 17:25:23 UTC
d1b38ee strategize cuDNN convolution 07 September 2014, 17:25:23 UTC
8819f59 call __signbit for CUDA >= 6.5 implementation 07 September 2014, 17:25:23 UTC
77d9124 add cuDNN to build 07 September 2014, 17:25:23 UTC
9086df9 added common.cpp explicitly to tests 07 September 2014, 17:22:43 UTC
37e55fa cpp and cu files processed separately in test build 07 September 2014, 17:22:43 UTC
1cb7040 enabled object file reusing in test builds 07 September 2014, 17:22:43 UTC
3182b1c add <cuda>/lib64 only if exists to suppress linker warnings 07 September 2014, 09:50:58 UTC
fb0a3d0 remove uses of tmpnam 07 September 2014, 09:14:51 UTC
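Commit fb0a3d0 (from #1044) replaces tmpnam, which only returns a name and leaves a race window before the file is created, with mkstemp/mkdtemp, which create the file or directory atomically. The same pattern via Python's standard tempfile module, as a hedged illustration of the C-level change:

```python
import os
import tempfile

def make_temp_file(prefix="caffe_test_"):
    """mkstemp atomically creates and opens the file, so there is no
    race between choosing the name and creating it, unlike tmpnam."""
    fd, path = tempfile.mkstemp(prefix=prefix)
    os.close(fd)   # caller reopens the path as needed
    return path

def make_temp_dir(prefix="caffe_test_"):
    """mkdtemp likewise creates the directory atomically, readable and
    writable only by the creating user."""
    return tempfile.mkdtemp(prefix=prefix)
```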
3cf3df8 fix transform_param in mnist_autoencoder.prototxt 07 September 2014, 07:44:58 UTC
b37f4f9 [docs] tutorial/layers: fix inner product sample 07 September 2014, 05:10:29 UTC
cbc50e1 [docs] tutorial/layers: describe some more data layers 07 September 2014, 05:09:13 UTC
bd13f32 [docs] tutorial/layers: clean up sample markdown 07 September 2014, 04:42:35 UTC
1545628 [docs] tutorial/layers: brief descriptions of some loss layers 07 September 2014, 04:39:56 UTC
40fa5be [docs] in tutorial/layers, Options -> Parameters It sounds funny to have optional options, and "parameters" is more in line with the internal usage. 07 September 2014, 04:22:23 UTC
853d65a [docs] split layer params in required/optional Also, make the parameter name come first. This makes it much easier to find/scan parameters. 07 September 2014, 04:20:36 UTC