https://github.com/karenyyng/george

Revision  Message  Commit Date
0e3c512 fixed version info 27 September 2015, 23:10:58 UTC
e2e2775 Clean up the kernels' C++ classes to avoid ambiguities in subclass method and member resolution 16 August 2015, 06:19:50 UTC
81f62bd Use bounds checking in accessing the Metric class parameter vector 16 August 2015, 06:18:58 UTC
3979483 C++ level implementation of l_sq as 1 / beta should be fixed 10 June 2015, 09:54:01 UTC
5789efb found version setup.py that generates correct debugging symbols 09 May 2015, 04:51:51 UTC
0e82b9f adding debugging symbols 09 May 2015, 04:47:47 UTC
fdcfe79 deleted const keyword for parent class gradient methods 08 May 2015, 05:50:21 UTC
8513960 first draft of gradient function inheritance 08 May 2015, 05:32:27 UTC
b512bc1 commented out print statements 01 May 2015, 23:07:22 UTC
d1ccf80 version that passed all tests 01 May 2015, 18:16:09 UTC
e428cf2 gives approximately correct results - due to numerical precision loss one must use approximate comparison for tests to pass 27 April 2015, 22:05:28 UTC
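The approximate comparison mentioned here is the usual pattern after floating-point refactors: exact equality fails under round-off, so tests assert agreement within a tolerance instead. A minimal illustration with made-up values (not code from this repository):

    import numpy as np

    expected = np.array([1.0, 2.0, 3.0])
    computed = expected + 1e-12  # tiny numerical drift from a refactor
    np.testing.assert_allclose(computed, expected, rtol=1e-8)  # passes
    assert not np.array_equal(computed, expected)  # exact comparison fails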
70bf0d6 changed value function signature by removing const - inheritance works properly now 27 April 2015, 20:47:44 UTC
8662218 confirmed metric bug is due to improper inheritance of the value method (ExpSquaredKernel value method not overridden) 25 April 2015, 01:28:24 UTC
1796309 bug fix removed const from function defs 24 April 2015, 01:30:06 UTC
5c07019 compiles again after implementing modified value function 24 April 2015, 01:05:49 UTC
d7e5671 bug fixes and adding to implementation of an alternative product operator 24 April 2015, 00:20:49 UTC
aa2c964 almost finished adding methods to .h file 23 April 2015, 23:54:29 UTC
e239a3a almost finished adding methods to .h file 23 April 2015, 23:12:34 UTC
150a904 too tired to continue 23 April 2015, 07:04:18 UTC
9191d0a adding more methods and compiles 23 April 2015, 04:51:58 UTC
b5c9609 adding more methods 23 April 2015, 04:50:44 UTC
d4706b8 adding more methods 23 April 2015, 03:59:47 UTC
5acd475 implemented more of the C++ version of the kernel 23 April 2015, 03:29:31 UTC
7fcd336 package compiles after adding new methods 22 April 2015, 23:55:56 UTC
f9f5110 trying different cpp implementation of the kernel after taking derivatives 22 April 2015, 23:43:34 UTC
d74434b debugging 22 April 2015, 19:33:49 UTC
13e64d7 slight changes 17 April 2015, 06:57:55 UTC
4d45fa4 important bug fix in Cython code, same bug as 38159cdfd7dbc6d3d135a5c42cb370ad14fc89cc 30 March 2015, 16:39:59 UTC
da840ae KappaKappaExpSquaredKernel now can be computed ... 25 March 2015, 00:13:47 UTC
69f0888 bug fixes 24 March 2015, 22:20:24 UTC
8ff481a added variable types to _kernels.pyx, to check consistencies of arguments of methods 24 March 2015, 21:48:03 UTC
92c58d1 cleaning up unnecessary definitions, George now compiles but more testing is needed 23 March 2015, 22:32:46 UTC
d719ad1 testing Cython code method by method 21 March 2015, 16:30:58 UTC
2c2e569 commented out python and C++ header to get Cython code working first 21 March 2015, 16:05:10 UTC
88dbef7 added python code to cython file _kernel.pyx - need testing 16 March 2015, 15:28:39 UTC
139d858 added more header info, imports without errors 10 March 2015, 01:33:25 UTC
2b08729 added skeleton of the classes 10 March 2015, 00:06:19 UTC
fc5dd1c reverting a few docs changes 06 January 2015, 17:38:36 UTC
c90d483 Merge branch 'master' of https://github.com/dfm/george 06 January 2015, 17:37:32 UTC
2fe6d9e Merge pull request #26 from mindriot101/correct-model-fitting-example-docs: Correct model fitting example docs 06 January 2015, 17:37:25 UTC
2baf7ed A few words for the paper 06 January 2015, 17:36:57 UTC
40800ce Use correct params array for call to `model`. In the final example the `model` function is in the source code, and only takes the Gaussian parameters. 06 January 2015, 09:57:57 UTC
95b671b Run the second burn-in and save lnp. This step was not included in the documentation, and must occur to get the maximum lnp before running the second burn-in. 06 January 2015, 09:52:33 UTC
29f4818 Set the number of walkers 06 January 2015, 09:48:38 UTC
10aeb35 Set the seed in the documentation. This ensures the user following the example gets the same results as shown. 06 January 2015, 09:37:53 UTC
6f2a4a5 optimization method 18 November 2014, 21:47:58 UTC
3cb331b Merge branch 'master' of https://github.com/dfm/george 18 November 2014, 21:24:58 UTC
81056ea typo in hyperparameter docs 18 November 2014, 21:24:52 UTC
978781d some documentation about kernel parameterization 18 November 2014, 02:51:21 UTC
bba942c Nicer badges 29 October 2014, 01:15:03 UTC
830b32e typo 13 October 2014, 21:41:25 UTC
1d3405f fixing #23 13 October 2014, 21:25:00 UTC
9c460b5 include Cython files in source distribution on PyPI 13 October 2014, 21:13:38 UTC
3be361f adding documentation for GP 13 October 2014, 17:47:04 UTC
378b72c removing rebuild from HODLRSolver pickle support 13 October 2014, 16:27:00 UTC
a93d0f4 documentation for the HODLR solver 13 October 2014, 16:20:15 UTC
407d815 some progress on solver docs 12 October 2014, 22:54:38 UTC
7a1adfb removing comment about periodic kernel dimension 11 October 2014, 18:46:49 UTC
a556175 Merge branch 'custom-kernels' 11 October 2014, 18:39:24 UTC
365612a kernel documentation 11 October 2014, 18:39:18 UTC
a93d3cd updating alpha when kernel parameters change 10 October 2014, 21:25:58 UTC
18c90e1 a few words about Python kernels 10 October 2014, 21:09:26 UTC
7cd5b35 adding custom kernels implemented in Python 10 October 2014, 20:59:52 UTC
f7b85f8 Update GP test list. 10 October 2014, 16:39:14 UTC
80fb447 Add test for GP alpha cache. 10 October 2014, 16:22:59 UTC
f4eae9e Compare arrays using native numpy function. 10 October 2014, 15:50:55 UTC
0d3ab7d PEP8 fix. 10 October 2014, 15:47:46 UTC
ac06bea Accelerate repeated predictions. 09 October 2014, 19:57:43 UTC
d1c05e0 Forgot to add new test to __all__. 09 October 2014, 19:53:40 UTC
8b8ded6 Add DOI to README 07 October 2014, 19:09:29 UTC
b2e4150 bumping version number 07 October 2014, 19:06:09 UTC
c41088c HODLR in_place bug 07 October 2014, 19:01:58 UTC
514ecab fixing transpose issue in prediction 07 October 2014, 18:52:06 UTC
8979de9 Merge branch 'fix_covariance' of https://github.com/jbernhard/george 07 October 2014, 18:47:09 UTC
fc46c5c Fix bug in computing the posterior covariance. 07 October 2014, 16:08:23 UTC
945e1d0 Add GP prediction / regression test. 07 October 2014, 16:07:40 UTC
45d424e save some memory 05 October 2014, 16:13:12 UTC
adbc09f a few formal words 23 September 2014, 15:17:53 UTC
928132d a few words in the paper 18 September 2014, 16:58:48 UTC
aa619df remove extra assert 07 September 2014, 17:54:11 UTC
f83d333 slight refactor of grad computation 07 September 2014, 17:53:55 UTC
430def1 Merge branch 'speedup-grad' of https://github.com/shoyer/george 07 September 2014, 17:37:33 UTC
81eddc3 temp 07 September 2014, 17:37:29 UTC
e14cef0 Remove loop for grad_lnlikelihood calculation. The result (using einsum) is cleaner and perhaps slightly faster. 03 September 2014, 22:11:13 UTC
9c920c4 Precompute the matrix inverse for grad_lnlikelihood

    This reduces the number of required operations for grad_lnlikelihood by a
    factor equal to the number of kernel parameters. e.g., consider this example:

        import numpy as np
        import george
        from george.kernels import Matern32Kernel, ConstantKernel, WhiteKernel

        x = 10 * np.random.RandomState(12356).rand(2000, 2)
        yerr = 0.2 * np.ones_like(x[:, 0])
        y = np.sin(x[:, 0] + x[:, 1]) + yerr * np.random.randn(len(x))

        kernel = ConstantKernel(0.5, ndim=2) * Matern32Kernel(0.5, ndim=2) + WhiteKernel(0.1, ndim=2)
        gp = george.GP(kernel, solver=george.HODLRSolver)
        gp.compute(x)
        %time gp.grad_lnlikelihood(y)

    Before this change, I get:

        CPU times: user 8.89 s, sys: 479 ms, total: 9.37 s
        Wall time: 9.37 s
        array([-182.28430864, 208.82577854, -463.12445191])

    After this change:

        CPU times: user 3.37 s, sys: 273 ms, total: 3.64 s
        Wall time: 3.64 s
        array([-182.27545648, 209.89776176, -479.34696447])

    So it's a factor of three speedup, corresponding to the three parameters.
    There is a similar speedup for the basic solver (which for this problem size
    is actually faster, at least on my laptop, perhaps because scipy's cholesky
    factorization can use multi-processing).

    03 September 2014, 21:58:48 UTC
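The precomputation this message describes follows from the standard GP gradient identity dL/dtheta_i = 0.5 * tr((alpha alpha^T - K^{-1}) dK/dtheta_i), with alpha = K^{-1} y: since K^{-1} does not depend on the parameter, computing it once replaces one linear solve per parameter. A minimal sketch of that idea; the function name and the K_grads argument are illustrative, not george's actual API:

    import numpy as np

    def grad_lnlikelihood_sketch(K, y, K_grads):
        # K: (n, n) kernel matrix; y: (n,) targets;
        # K_grads: list of dK/dtheta_i matrices, one per kernel parameter.
        alpha = np.linalg.solve(K, y)        # alpha = K^{-1} y
        K_inv = np.linalg.inv(K)             # precomputed once, reused below
        A = np.outer(alpha, alpha) - K_inv   # alpha alpha^T - K^{-1}
        # dL/dtheta_i = 0.5 * tr(A @ dK_i), written as an einsum trace
        return np.array([0.5 * np.einsum("ij,ji", A, dK) for dK in K_grads])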
592498f Use np.einsum in grad_lnlikelihood instead of a complex sum/map/dot mix

    It makes for a 3x constant speedup for that line of code on my machine:

        In [1]: import numpy as np
        In [2]: alpha = np.random.randn(5000)
        In [3]: k = np.random.randn(5000, 5000)
        In [4]: %timeit sum(map(lambda r: np.dot(alpha, r), alpha[:, None] * k))
        10 loops, best of 3: 148 ms per loop
        In [5]: %timeit np.einsum('i,j,ji', alpha, alpha, k)
        10 loops, best of 3: 43.9 ms per loop

    More importantly, I think it is also much more readable.

    03 September 2014, 20:41:54 UTC
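A quick sanity check, not from the repository, that the two expressions agree: np.einsum('i,j,ji', alpha, alpha, k) sums alpha_i * alpha_j * k[j, i], while the sum/map/dot mix sums alpha_i * alpha_j * k[i, j], so both evaluate alpha^T k alpha for a symmetric kernel matrix k:

    import numpy as np

    rng = np.random.RandomState(0)
    alpha = rng.randn(500)
    k = rng.randn(500, 500)
    k = 0.5 * (k + k.T)  # kernel matrices are symmetric

    slow = sum(map(lambda r: np.dot(alpha, r), alpha[:, None] * k))
    fast = np.einsum("i,j,ji", alpha, alpha, k)
    assert np.allclose(slow, fast)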
0809981 c++ compiler for travis 02 September 2014, 18:07:05 UTC
13dae85 making kernels and solvers picklable 02 September 2014, 17:33:52 UTC
4cdbdfa silencing some harmless compile warnings 30 August 2014, 19:36:43 UTC
297bfca more tol 30 August 2014, 19:29:59 UTC
f8a53be more info 30 August 2014, 19:27:43 UTC
7508c86 Fixing gradient bug (#13) 30 August 2014, 19:21:12 UTC
b48b2a5 fixing kernel dtype bug (#13) 30 August 2014, 18:56:02 UTC
d758bbe bug when re-computing product kernels 30 August 2014, 16:09:06 UTC
9037511 fixing hodlr apply_inverse behaviour 30 August 2014, 15:39:41 UTC
3eb371a allowing 1D kernels 30 August 2014, 14:03:01 UTC
5ef0787 ensure contiguous arrays for Cython 30 August 2014, 13:13:25 UTC
9304e87 making HODLR solver recompute on kernel changes properly 30 August 2014, 12:59:28 UTC
82b7dfe the lnlikelihood should be a scalar 30 August 2014, 03:49:48 UTC
faf6ea0 import the HODLR solver 30 August 2014, 03:40:12 UTC