https://github.com/GPflow/GPflow
Revision b41d4f38436e4a090c940dbd3bc7e2afd39a283e authored by st-- on 23 April 2020, 18:17:42 UTC, committed by GitHub on 23 April 2020, 18:17:42 UTC
Previously, GPflow's NaturalGradient optimizer would evaluate the loss function once for each (q_mu, q_sqrt) pair in the var_list. This is a light refactor that separates applying the natural-gradient step (`_natgrad_apply_gradients`) from computing the gradients, and changes `_natgrad_steps` to evaluate the loss function only once, computing the gradients for all (q_mu, q_sqrt) tuples passed in the var_list.

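For context, a minimal usage sketch of the code path this touches. The data, model construction, and gamma value below are illustrative assumptions, not part of the diff:

```python
import numpy as np
import gpflow
from gpflow.optimizers import NaturalGradient
from gpflow.utilities import set_trainable

# Toy data and model (illustrative only).
X = np.random.rand(100, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=X[:20].copy(),
)

# The variational parameters are updated by NaturalGradient, so exclude them
# from any ordinary gradient-based optimizer used for the other parameters.
set_trainable(model.q_mu, False)
set_trainable(model.q_sqrt, False)

data = (X, Y)

def loss():
    # Loss closure handed to the optimizer: negative ELBO.
    return -model.elbo(data)

natgrad = NaturalGradient(gamma=0.1)

# With this change, a single minimize() call evaluates the loss closure once and
# applies a natural-gradient step to every (q_mu, q_sqrt) tuple in var_list,
# instead of re-evaluating the loss once per tuple.
natgrad.minimize(loss, var_list=[(model.q_mu, model.q_sqrt)])
```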
Other changes:
- The no-longer-used `_natgrad_step` method has been removed.
- NaturalGradient now takes a `xi_transform` argument that is used for all parameter sets without an explicitly specified xi transform (i.e. tuples rather than triplets); see the sketch after this list.
- XiTransform's methods have been changed to staticmethods.
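Continuing the sketch above (reusing `model` and `loss` from there), this is roughly how the optimizer-level `xi_transform` argument and the existing triplet form would be used; the keyword usage and precedence are inferred from the description above rather than taken from the diff:

```python
from gpflow.optimizers import NaturalGradient
from gpflow.optimizers.natgrad import XiSqrtMeanVar

# Optimizer-level default: applied to every plain (q_mu, q_sqrt) tuple in var_list.
natgrad = NaturalGradient(gamma=0.1, xi_transform=XiSqrtMeanVar())
natgrad.minimize(loss, var_list=[(model.q_mu, model.q_sqrt)])

# Per-parameter-set transform: a triplet still carries its own xi transform,
# which is used for that parameter set instead of the optimizer-level default.
natgrad.minimize(loss, var_list=[(model.q_mu, model.q_sqrt, XiSqrtMeanVar())])
```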

None of this should affect any downstream code; this PR is backwards-compatible.
1 parent c7550ce
Tip revision b41d4f3: refactor natgrads to be more efficient (#1443)