utilities/misc.py
# Copyright 2017-2021 The GPflow Contributors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Callable, Iterable, List, Optional, Union

import tensorflow as tf
import tensorflow_probability as tfp

from ..base import TensorData
from ..config import default_float, default_int
from .ops import cast

__all__ = [
    "is_variable",
    "set_trainable",
    "to_default_float",
    "to_default_int",
    "training_loop",
]


def to_default_int(x: TensorData) -> tf.Tensor:
    return cast(x, dtype=default_int())


def to_default_float(x: TensorData) -> tf.Tensor:
    return cast(x, dtype=default_float())
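

# Example (a minimal sketch; GPflow's defaults are float64 and int32 unless
# reconfigured via gpflow.config):
#
#     to_default_float([1, 2, 3])   # tf.Tensor([1. 2. 3.], dtype=float64)
#     to_default_int([1.0, 2.0])    # tf.Tensor([1 2],    dtype=int32)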


def set_trainable(model: Union[tf.Module, Iterable[tf.Module]], flag: bool) -> None:
    """
    Set trainable flag for all :class:`tf.Variable`\ s and :class:`gpflow.Parameter`\ s in a
    :class:`tf.Module` or collection of :class:`tf.Module`\ s.
    """
    modules = [model] if isinstance(model, tf.Module) else model

    for mod in modules:
        for variable in mod.variables:
            # tf.Variable.trainable is a read-only property, so write to the
            # private attribute directly instead of re-creating the variable.
            variable._trainable = flag
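
# Example (a sketch; `model` stands in for any tf.Module, e.g. a GPflow model):
#
#     set_trainable(model, False)        # freeze every variable in the model
#     set_trainable([mod1, mod2], True)  # or unfreeze a collection of modules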


def is_variable(t: TensorData) -> bool:
    """
    Returns whether `t` is a TensorFlow variable, including a TensorFlow
    Probability `TransformedVariable`.
    """
    return isinstance(t, (tf.Variable, tfp.util.TransformedVariable))


def training_loop(
    closure: Callable[[], tf.Tensor],
    optimizer: Optional[tf.optimizers.Optimizer] = None,
    var_list: Optional[List[tf.Variable]] = None,
    maxiter: int = 1_000,
    compile: bool = False,
) -> None:
    """
    Simple generic training loop. At each iteration uses a GradientTape to compute
    the gradients of a loss function with respect to a set of variables.

    :param closure: Callable that constructs a loss function based on data and model being trained
    :param optimizer: tf.optimizers or tf.keras.optimizers that updates variables by applying the
        corresponding loss gradients. Adam is a default optimizer with default settings.
    :param var_list: List of model variables to be learnt during training
    :param maxiter: Maximum number of
    :return:
    """

    safe_optimizer = tf.optimizers.Adam() if optimizer is None else optimizer
    safe_var_list = [] if var_list is None else var_list

    def optimization_step() -> None:
        # Watch only the requested variables: watch_accessed_variables=False
        # stops the tape from recording every variable the closure touches.
        with tf.GradientTape(watch_accessed_variables=False) as tape:
            tape.watch(safe_var_list)
            loss = closure()
        grads = tape.gradient(loss, safe_var_list)
        safe_optimizer.apply_gradients(zip(grads, safe_var_list))

    if compile:
        # Trace the step once with tf.function and reuse the resulting graph
        # across iterations.
        optimization_step = tf.function(optimization_step)

    for _ in range(maxiter):
        optimization_step()
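

# A minimal self-check sketch (not part of the GPflow API): fits a single
# tf.Variable by minimising a quadratic loss with the loop above. The learning
# rate of 0.1 is an arbitrary choice for this toy problem.
if __name__ == "__main__":
    w = tf.Variable(0.0)
    training_loop(
        lambda: (w - 3.0) ** 2,
        optimizer=tf.optimizers.Adam(learning_rate=0.1),
        var_list=[w],
        maxiter=500,
        compile=True,
    )
    print(w.numpy())  # should be close to 3.0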
