https://github.com/GPflow/GPflow
Revision b41d4f38436e4a090c940dbd3bc7e2afd39a283e authored by st-- on 23 April 2020, 18:17:42 UTC, committed by GitHub on 23 April 2020, 18:17:42 UTC
Previously, GPflow's NaturalGradient optimizer would call the loss_function once for each (q_mu, q_sqrt) pair in the var_list. This is a light refactor that separates applying the natural gradient step from computing the gradients (`_natgrad_apply_gradients`), and changes `_natgrad_steps` to evaluate the loss function only once, computing the gradients for all (q_mu, q_sqrt) tuples passed in the var_list.
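
To illustrate the shape of the new control flow, here is a minimal, hypothetical sketch (not GPflow's actual implementation): the loss closure is evaluated once under a single `tf.GradientTape`, and the resulting gradients are then handed pair-by-pair to a natural-gradient update, mirroring the split between `_natgrad_steps` and `_natgrad_apply_gradients`. The `apply_natgrad_step` callback and the assumption that var_list contains plain (q_mu, q_sqrt) tuples are illustrative only.

```python
import tensorflow as tf

def natgrad_steps(loss_fn, var_list, apply_natgrad_step):
    """Hypothetical sketch of the new flow: evaluate the loss once,
    then apply the natural-gradient update to each pair separately."""
    q_mus, q_sqrts = zip(*var_list)
    variables = list(q_mus) + list(q_sqrts)
    with tf.GradientTape(watch_accessed_variables=False) as tape:
        tape.watch(variables)
        loss = loss_fn()  # single evaluation, however many pairs are in var_list
    grads = tape.gradient(loss, variables)
    q_mu_grads, q_sqrt_grads = grads[: len(q_mus)], grads[len(q_mus):]
    for q_mu, q_sqrt, dq_mu, dq_sqrt in zip(q_mus, q_sqrts, q_mu_grads, q_sqrt_grads):
        # roughly the role of the separated-out `_natgrad_apply_gradients`
        apply_natgrad_step(q_mu, q_sqrt, dq_mu, dq_sqrt)
```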

Other changes:
- The no-longer-used `_natgrad_step` method has been removed.
- NaturalGradient now takes an `xi_transform` argument that is used for all parameter sets without an explicitly specified xi transform (i.e. tuples rather than triplets); see the usage sketch below.
- XiTransform has been changed to have staticmethods.

None of this should affect any downstream code; this PR is backwards-compatible.
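
As a user-facing illustration of the `xi_transform` default, here is a minimal usage sketch (not part of this diff), assuming a standard GPflow 2 SVGP setup; the toy data, kernel choice, gamma value, and the choice of XiSqrtMeanVar are made up for illustration, and the `xi_transform` keyword is the new default-transform argument described above.

```python
import numpy as np
import gpflow
from gpflow.optimizers.natgrad import XiSqrtMeanVar

# Toy regression data and inducing points, made up for illustration.
X = np.random.rand(100, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)
Z = np.linspace(0.0, 1.0, 20)[:, None]

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
)

def loss():
    # Negative ELBO as the training loss closure.
    return -model.elbo((X, Y))

# New: a default xi transform used for every plain (q_mu, q_sqrt) tuple.
natgrad = gpflow.optimizers.NaturalGradient(gamma=0.1, xi_transform=XiSqrtMeanVar())
natgrad.minimize(loss, var_list=[(model.q_mu, model.q_sqrt)])

# A triplet still overrides the default transform for that parameter set.
natgrad.minimize(loss, var_list=[(model.q_mu, model.q_sqrt, XiSqrtMeanVar())])
```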
1 parent c7550ce
refactor natgrads to be more efficient (#1443)
.coveragerc
[report]
omit = *tests*, setup.py
exclude_lines =
    pragma: no cover
    def __repr__
    def __str__
    def _repr_html_
    def _repr_pretty_
    if self.debug:
    if settings.DEBUG
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
    print