{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Cookbook: Searches\n",
        "==================\n",
        "\n",
        "This cookbook provides an overview of the non-linear searches available in **PyAutoFit**, and how to use them.\n",
        "\n",
        "__Contents__\n",
        "\n",
        "It first covers standard options available for all non-linear searches:\n",
        "\n",
        " - Example Fit: A simple example of a non-linear search to remind us how it works.\n",
        " - Output To Hard-Disk: Output results to hard-disk so they can be inspected and used to restart a crashed search.\n",
        " - Unique Identifier: Ensure results are output in unique folders, so tthey do not overwrite each other.\n",
        " - Iterations Per Update: Control how often non-linear searches output results to hard-disk.\n",
        " - Parallelization: Use parallel processing to speed up the sampling of parameter space.\n",
        " - Plots: Perform non-linear search specific visualization using their in-built visualization tools.\n",
        " - Start Point: Manually specify the start point of a non-linear search, or sample a specific region of parameter space.\n",
        "\n",
        "It then provides example code for using every search:\n",
        "\n",
        " - Emcee (MCMC): The Emcee ensemble sampler MCMC.\n",
        " - Zeus (MCMC): The Zeus ensemble sampler MCMC.\n",
        " - DynestyDynamic (Nested Sampling): The Dynesty dynamic nested sampler.\n",
        " - DynestyStatic (Nested Sampling): The Dynesty static nested sampler.\n",
        " - UltraNest (Nested Sampling): The UltraNest nested sampler.\n",
        " - PySwarmsGlobal (Particle Swarm Optimization): The global PySwarms particle swarm optimization\n",
        " - PySwarmsLocal (Particle Swarm Optimization): The local PySwarms particle swarm optimization.\n",
        " - LBFGS: The L-BFGS scipy optimization."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "%matplotlib inline\n",
        "from pyprojroot import here\n",
        "workspace_path = str(here())\n",
        "%cd $workspace_path\n",
        "print(f\"Working Directory has been set to `{workspace_path}`\")\n",
        "\n",
        "import numpy as np\n",
        "from os import path\n",
        "\n",
        "import autofit as af\n",
        "import autofit.plot as aplt"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Example Fit__\n",
        "\n",
        "An example of how to use a `search` to fit a model to data is given in other example scripts, but is shown below\n",
        "for completeness."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n",
        "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n",
        "noise_map = af.util.numpy_array_from_json(\n",
        "    file_path=path.join(dataset_path, \"noise_map.json\")\n",
        ")\n",
        "\n",
        "model = af.Model(af.ex.Gaussian)\n",
        "\n",
        "analysis = af.ex.Analysis(data=data, noise_map=noise_map)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "It is this line, where the command `af.Emcee()` can be swapped out for the examples provided throughout this\n",
        "cookbook to use different non-linear searches."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Emcee()\n",
        "\n",
        "result = search.fit(model=model, analysis=analysis)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Output To Hard-Disk__\n",
        "\n",
        "By default, a non-linear search does not output its results to hard-disk and its results can only be inspected\n",
        "in Python via the `result` object. \n",
        "\n",
        "However, the results of any non-linear search can be output to hard-disk by passing the `name` and / or `path_prefix`\n",
        "attributes, which are used to name files and output the results to a folder on your hard-disk.\n",
        "\n",
        "The benefits of doing this include:\n",
        "\n",
        "- Inspecting results via folders on your computer can be more efficient than using a Jupyter Notebook.\n",
        "- Results are output on-the-fly, making it possible to check that a fit i progressing as expected mid way through.\n",
        "- Additional information about a fit (e.g. visualization) is output.\n",
        "- Unfinished runs can be resumed from where they left off if they are terminated.\n",
        "- On high performance super computers which use a batch system, results must be output in this way.\n",
        "\n",
        "These outputs are fully described in the scientific workflow example."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Emcee(path_prefix=path.join(\"folder_0\", \"folder_1\"), name=\"example_mcmc\")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Unique Identifier__\n",
        "\n",
        "Results are output to a folder which is a collection of random characters, which is the 'unique_identifier' of\n",
        "the model-fit. This identifier is generated based on the model fitted and search used, such that an identical\n",
        "combination of model and search generates the same identifier.\n",
        "\n",
        "This ensures that rerunning an identical fit will use the existing results to resume the model-fit. In contrast, if\n",
        "you change the model or search, a new unique identifier will be generated, ensuring that the model-fit results are\n",
        "output into a separate folder.\n",
        "\n",
        "A `unique_tag` can be input into a search, which customizes the unique identifier based on the string you provide.\n",
        "For example, if you are performing many fits to different datasets, using an identical model and search, you may\n",
        "wish to provide a unique tag for each dataset such that the model-fit results are output into a different folder."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Emcee(unique_tag=\"example_tag\")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Iterations Per Update__\n",
        "\n",
        "If results are output to hard-disk, this occurs every `iterations_per_update` number of iterations. \n",
        "\n",
        "For certain problems, you may want this value to be low, to inspect the results of the model-fit on a regular basis.\n",
        "This is especially true if the time it takes for your non-linear search to perform an iteration by evaluating the \n",
        "log likelihood is long (e.g. > 1s) and your model-fit often goes to incorrect solutions that you want to monitor.\n",
        "\n",
        "For other problems, you may want to increase this value, to avoid spending lots of time outputting the results to\n",
        "hard-disk. This is especially true if the time it takes for your non-linear search to perform an iteration by\n",
        "evaluating the log likelihood is fast (e.g. < 0.1s) and you are confident your model-fit will find the global\n",
        "maximum solution given enough iterations."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Emcee(iterations_per_update=1000)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Parallelization__\n",
        "\n",
        "Many searches support parallelization using the Python ``multiprocessing`` module. \n",
        "\n",
        "This distributes the non-linear search analysis over multiple CPU's, speeding up the run-time roughly by the number \n",
        "of CPUs used.\n",
        "\n",
        "To enable parallelization, input a `number_of_cores` greater than 1. You should aim not to exceed the number of\n",
        "physical cores in your computer, as using more cores than exist may actually slow down the non-linear search."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Emcee(number_of_cores=4)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Plots__\n",
        "\n",
        "Every non-linear search supported by **PyAutoFit** has a dedicated `plotter` class that allows the results of the\n",
        "model-fit to be plotted and inspected.\n",
        "\n",
        "This uses that search's in-built visualization libraries, which are fully described in the `plot` package of the\n",
        "workspace.\n",
        "\n",
        "For example, `Emcee` has a corresponding `EmceePlotter`, which is used as follows.\n",
        "\n",
        "Checkout the `plot` package for a complete description of the plots that can be made for a given search."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "samples = result.samples\n",
        "\n",
        "search_plotter = aplt.EmceePlotter(samples=samples)\n",
        "\n",
        "search_plotter.corner(\n",
        "    bins=20,\n",
        "    range=None,\n",
        "    color=\"k\",\n",
        "    hist_bin_factor=1,\n",
        "    smooth=None,\n",
        "    smooth1d=None,\n",
        "    label_kwargs=None,\n",
        "    titles=None,\n",
        "    show_titles=False,\n",
        "    title_fmt=\".2f\",\n",
        "    title_kwargs=None,\n",
        "    truths=None,\n",
        "    truth_color=\"#4682b4\",\n",
        "    scale_hist=False,\n",
        "    quantiles=None,\n",
        "    verbose=False,\n",
        "    fig=None,\n",
        "    max_n_ticks=5,\n",
        "    top_ticks=False,\n",
        "    use_math_text=False,\n",
        "    reverse=False,\n",
        "    labelpad=0.0,\n",
        "    hist_kwargs=None,\n",
        "    group=\"posterior\",\n",
        "    var_names=None,\n",
        "    filter_vars=None,\n",
        "    coords=None,\n",
        "    divergences=False,\n",
        "    divergences_kwargs=None,\n",
        "    labeller=None,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The Python library `GetDist <https://getdist.readthedocs.io/en/latest/>`_ can also be used to create plots of the\n",
        "results. \n",
        "\n",
        "This is described in the `plot` package of the workspace.\n",
        "\n",
        "__Start Point__\n",
        "\n",
        "For maximum likelihood estimator (MLE) and Markov Chain Monte Carlo (MCMC) non-linear searches, parameter space\n",
        "sampling is built around having a \"location\" in parameter space.\n",
        "\n",
        "This could simply be the parameters of the current maximum likelihood model in an MLE fit, or the locations of many\n",
        "walkers in parameter space (e.g. MCMC).\n",
        "\n",
        "For many model-fitting problems, we may have an expectation of where correct solutions lie in parameter space and\n",
        "therefore want our non-linear search to start near that location of parameter space. Alternatively, we may want to\n",
        "sample a specific region of parameter space, to determine what solutions look like there.\n",
        "\n",
        "The start-point API allows us to do this, by manually specifying the start-point of an MLE fit or the start-point of\n",
        "the walkers in an MCMC fit. Because nested sampling draws from priors, it cannot use the start-point API.\n",
        "\n",
        "We now define the start point of certain parameters in the model as follows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "initializer = af.SpecificRangeInitializer(\n",
        "    {\n",
        "        model.centre: (49.0, 51.0),\n",
        "        model.normalization: (4.0, 6.0),\n",
        "        model.sigma: (1.0, 2.0),\n",
        "    }\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
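    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The initializer is then passed to the search, as in the sketch below; the `initializer` input and its other\n",
        "options are shown in full in the Emcee example later in this cookbook."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# The MCMC walkers begin sampling from points drawn inside the ranges specified above.\n",
        "search = af.Emcee(nwalkers=30, nsteps=1000, initializer=initializer)"
      ],
      "outputs": [],
      "execution_count": null
    },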
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Similar behaviour can be achieved by customizing the priors of a model-fit. We could place `GaussianPrior`'s\n",
        "centred on the regions of parameter space we want to sample, or we could place tight `UniformPrior`'s on regions\n",
        "of parameter space we believe the correct answer lies.\n",
        "\n",
        "The downside of using priors is that our priors have a direct influence on the parameters we infer and the size\n",
        "of the inferred parameter errors. By using priors to control the location of our model-fit, we therefore risk\n",
        "inferring a non-representative model.\n",
        "\n",
        "For users more familiar with statistical inference, adjusting ones priors in the way described above leads to\n",
        "changes in the posterior, which therefore impacts the model inferred.\n",
        "\n",
        "__Emcee (MCMC)__\n",
        "\n",
        "The Emcee sampler is a Markov Chain Monte Carlo (MCMC) Ensemble sampler. It is a Python implementation of the\n",
        "`Goodman & Weare <https://msp.org/camcos/2010/5-1/p04.xhtml>`_ affine-invariant ensemble MCMC sampler.\n",
        "\n",
        "Information about Emcee can be found at the following links:\n",
        "\n",
        " - https://github.com/dfm/emcee\n",
        " - https://emcee.readthedocs.io/en/stable/\n",
        "\n",
        "The following workspace example shows examples of fitting data with Emcee and plotting the results.\n",
        "\n",
        "- `autofit_workspace/notebooks/searches/mcmc/Emcee.ipynb`\n",
        "- `autofit_workspace/notebooks/plot/EmceePlotter.ipynb`\n",
        "\n",
        "The following code shows how to use Emcee with all available options."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Emcee(\n",
        "    nwalkers=30,\n",
        "    nsteps=1000,\n",
        "    initializer=af.InitializerBall(lower_limit=0.49, upper_limit=0.51),\n",
        "    auto_correlations_settings=af.AutoCorrelationsSettings(\n",
        "        check_for_convergence=True,\n",
        "        check_size=100,\n",
        "        required_length=50,\n",
        "        change_threshold=0.01,\n",
        "    ),\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__Zeus (MCMC)__\n",
        "\n",
        "The Zeus sampler is a Markov Chain Monte Carlo (MCMC) Ensemble sampler. \n",
        "\n",
        "Information about Zeus can be found at the following links:\n",
        "\n",
        " - https://github.com/minaskar/zeus\n",
        " - https://zeus-mcmc.readthedocs.io/en/latest/"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.Zeus(\n",
        "    nwalkers=30,\n",
        "    nsteps=1001,\n",
        "    initializer=af.InitializerBall(lower_limit=0.49, upper_limit=0.51),\n",
        "    auto_correlations_settings=af.AutoCorrelationsSettings(\n",
        "        check_for_convergence=True,\n",
        "        check_size=100,\n",
        "        required_length=50,\n",
        "        change_threshold=0.01,\n",
        "    ),\n",
        "    tune=False,\n",
        "    tolerance=0.05,\n",
        "    patience=5,\n",
        "    maxsteps=10000,\n",
        "    mu=1.0,\n",
        "    maxiter=10000,\n",
        "    vectorize=False,\n",
        "    check_walkers=True,\n",
        "    shuffle_ensemble=True,\n",
        "    light_mode=False,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__DynestyDynamic (Nested Sampling)__\n",
        "\n",
        "The DynestyDynamic sampler is a Dynamic Nested Sampling algorithm. It is a Python implementation of the\n",
        "`Speagle <https://arxiv.org/abs/1904.02180>`_ algorithm.\n",
        "\n",
        "Information about Dynesty can be found at the following links:\n",
        "\n",
        " - https://github.com/joshspeagle/dynesty\n",
        " - https://dynesty.readthedocs.io/en/latest/"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.DynestyDynamic(\n",
        "    nlive=50,\n",
        "    bound=\"multi\",\n",
        "    sample=\"auto\",\n",
        "    bootstrap=None,\n",
        "    enlarge=None,\n",
        "    update_interval=None,\n",
        "    walks=25,\n",
        "    facc=0.5,\n",
        "    slices=5,\n",
        "    fmove=0.9,\n",
        "    max_move=100,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__DynestyStatic (Nested Sampling)__\n",
        "\n",
        "The DynestyStatic sampler is a Static Nested Sampling algorithm. It is a Python implementation of the\n",
        "`Speagle <https://arxiv.org/abs/1904.02180>`_ algorithm.\n",
        "\n",
        "Information about Dynesty can be found at the following links:\n",
        "\n",
        " - https://github.com/joshspeagle/dynesty\n",
        " - https://dynesty.readthedocs.io/en/latest/"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.DynestyStatic(\n",
        "    nlive=50,\n",
        "    bound=\"multi\",\n",
        "    sample=\"auto\",\n",
        "    bootstrap=None,\n",
        "    enlarge=None,\n",
        "    update_interval=None,\n",
        "    walks=25,\n",
        "    facc=0.5,\n",
        "    slices=5,\n",
        "    fmove=0.9,\n",
        "    max_move=100,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__UltraNest (Nested Sampling)__\n",
        "\n",
        "The UltraNest sampler is a Nested Sampling algorithm. It is a Python implementation of the\n",
        "`Buchner <https://arxiv.org/abs/1904.02180>`_ algorithm.\n",
        "\n",
        "UltraNest is an optional requirement and must be installed manually via the command `pip install ultranest`.\n",
        "It is optional as it has certain dependencies which are generally straight forward to install (e.g. Cython).\n",
        "\n",
        "Information about UltraNest can be found at the following links:\n",
        "\n",
        " - https://github.com/JohannesBuchner/UltraNest\n",
        " - https://johannesbuchner.github.io/UltraNest/readme.html"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.UltraNest(\n",
        "    resume=True,\n",
        "    run_num=None,\n",
        "    num_test_samples=2,\n",
        "    draw_multiple=True,\n",
        "    num_bootstraps=30,\n",
        "    vectorized=False,\n",
        "    ndraw_min=128,\n",
        "    ndraw_max=65536,\n",
        "    storage_backend=\"hdf5\",\n",
        "    warmstart_max_tau=-1,\n",
        "    update_interval_volume_fraction=0.8,\n",
        "    update_interval_ncall=None,\n",
        "    log_interval=None,\n",
        "    show_status=True,\n",
        "    viz_callback=\"auto\",\n",
        "    dlogz=0.5,\n",
        "    dKL=0.5,\n",
        "    frac_remain=0.01,\n",
        "    Lepsilon=0.001,\n",
        "    min_ess=400,\n",
        "    max_iters=None,\n",
        "    max_ncalls=None,\n",
        "    max_num_improvement_loops=-1,\n",
        "    min_num_live_points=50,\n",
        "    cluster_num_live_points=40,\n",
        "    insertion_test_window=10,\n",
        "    insertion_test_zscore_threshold=2,\n",
        "    stepsampler_cls=\"RegionMHSampler\",\n",
        "    nsteps=11,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__PySwarmsGlobal__\n",
        "\n",
        "The PySwarmsGlobal sampler is a Global Optimization algorithm. It is a Python implementation of the\n",
        "`Bratley <https://arxiv.org/abs/1904.02180>`_ algorithm.\n",
        "\n",
        "Information about PySwarms can be found at the following links:\n",
        "\n",
        " - https://github.com/ljvmiranda921/pyswarms\n",
        " - https://pyswarms.readthedocs.io/en/latest/index.html\n",
        " - https://pyswarms.readthedocs.io/en/latest/api/pyswarms.single.html#module-pyswarms.single.global_best"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.PySwarmsGlobal(\n",
        "    n_particles=50,\n",
        "    iters=1000,\n",
        "    cognitive=0.5,\n",
        "    social=0.3,\n",
        "    inertia=0.9,\n",
        "    ftol=-np.inf,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__PySwarmsLocal__\n",
        "\n",
        "The PySwarmsLocal sampler is a Local Optimization algorithm. It is a Python implementation of the\n",
        "`Bratley <https://arxiv.org/abs/1904.02180>`_ algorithm.\n",
        "\n",
        "Information about PySwarms can be found at the following links:\n",
        "\n",
        " - https://github.com/ljvmiranda921/pyswarms\n",
        " - https://pyswarms.readthedocs.io/en/latest/index.html\n",
        " - https://pyswarms.readthedocs.io/en/latest/api/pyswarms.single.html#module-pyswarms.single.global_best"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.PySwarmsLocal(\n",
        "    n_particles=50,\n",
        "    iters=1000,\n",
        "    cognitive=0.5,\n",
        "    social=0.3,\n",
        "    inertia=0.9,\n",
        "    number_of_k_neighbors=3,\n",
        "    minkowski_p_norm=2,\n",
        "    ftol=-np.inf,\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "__LBFGS__\n",
        "\n",
        "The LBFGS sampler is a Local Optimization algorithm. It is a Python implementation of the scipy.optimize.lbfgs\n",
        "algorithm.\n",
        "\n",
        "Information about the L-BFGS method can be found at the following links:\n",
        "\n",
        " - https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "search = af.LBFGS(\n",
        "    tol=None,\n",
        "    disp=None,\n",
        "    maxcor=10,\n",
        "    ftol=2.220446049250313e-09,\n",
        "    gtol=1e-05,\n",
        "    eps=1e-08,\n",
        "    maxfun=15000,\n",
        "    maxiter=15000,\n",
        "    iprint=-1,\n",
        "    maxls=20,\n",
        ")\n"
      ],
      "outputs": [],
      "execution_count": null
    }
  ],
  "metadata": {
    "anaconda-cloud": {},
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.6.1"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 4
}
