result.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cookbook: Results\n",
"=================\n",
"\n",
"After a non-linear search has completed, it returns a `Result` object that contains information on fit, such as\n",
"the maximum likelihood model instance, the errors on each parameter and the Bayesian evidence.\n",
"\n",
"This cookbook provides an overview of using the results.\n",
"\n",
"__Contents__\n",
"\n",
" - Model Fit: Perform a simple model-fit to create a `Result` object.\n",
" - Info: Print the `info` attribute of the `Result` object to display a summary of the model-fit.\n",
" - Samples: The `Samples` object contained in the `Result`, containing all non-linear samples (e.g. parameters,\n",
" log likelihoods, etc.).\n",
" - Maximum Likelihood: The maximum likelihood model instance.\n",
" - Posterior / PDF: The median PDF model instance and PDF vectors of all model parameters via 1D marginalization.\n",
" - Errors: The errors on every parameter estimated from the PDF, computed via marginalized 1D PDFs at an input sigma.\n",
" - Sample Instance: The model instance of any accepted sample.\n",
" - Search Plots: Plots of the non-linear search, for example a corner plot or 1D PDF of every parameter.\n",
" - Bayesian Evidence: The log evidence estimated via a nested sampling algorithm.\n",
" - Collection: Results created from models defined via a `Collection` object.\n",
" - Lists: Extracting results as Python lists instead of instances.\n",
" - Latex: Producing latex tables of results (e.g. for a paper).\n",
"\n",
"The following sections outline how to use advanced features of the results, which you may skip on a first read:\n",
"\n",
" - Derived Quantities: Computing quantities and errors for quantities and parameters not included directly in the model.\n",
" - Result Extension: Extend the `Result` object with new attributes and methods (e.g. `max_log_likelihood_model_data`).\n",
" - Samples Filtering: Filter the `Samples` object to only contain samples fulfilling certain criteria."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"%matplotlib inline\n",
"from pyprojroot import here\n",
"workspace_path = str(here())\n",
"%cd $workspace_path\n",
"print(f\"Working Directory has been set to `{workspace_path}`\")\n",
"\n",
"import autofit as af\n",
"import autofit.plot as aplt\n",
"\n",
"from os import path\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Model Fit__\n",
"\n",
"To illustrate results, we need to perform a model-fit in order to create a `Result` object.\n",
"\n",
"We do this below using the standard API and noisy 1D signal example, which you should be familiar with from other \n",
"example scripts.\n",
"\n",
"Note that the `Gaussian` and `Analysis` classes come via the `af.ex` module, which contains example model components\n",
"that are identical to those found throughout the examples."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n",
"data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n",
"noise_map = af.util.numpy_array_from_json(\n",
" file_path=path.join(dataset_path, \"noise_map.json\")\n",
")\n",
"\n",
"model = af.Model(af.ex.Gaussian)\n",
"\n",
"analysis = af.ex.Analysis(data=data, noise_map=noise_map)\n",
"\n",
"search = af.Emcee(\n",
" nwalkers=30,\n",
" nsteps=1000,\n",
" number_of_cores=1,\n",
")\n",
"\n",
"result = search.fit(model=model, analysis=analysis)"
],
"outputs": [],
"execution_count": null
},
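{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before inspecting the result, it is worth plotting the dataset we fitted, to put the numbers below in context.\n",
"\n",
"(This is a minimal matplotlib sketch using the `data` and `noise_map` arrays loaded above.)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"xvalues = np.arange(data.shape[0])\n",
"\n",
"# Plot the data with error bars given by the noise-map.\n",
"plt.errorbar(xvalues, data, yerr=noise_map, linestyle=\"\", marker=\".\", color=\"k\")\n",
"plt.title(\"1D `Gaussian` dataset fitted in this cookbook.\")\n",
"plt.xlabel(\"x values of profile\")\n",
"plt.ylabel(\"Profile normalization\")\n",
"plt.show()\n",
"plt.close()"
],
"outputs": [],
"execution_count": null
},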
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Info__\n",
"\n",
"Printing the `info` attribute shows the overall result of the model-fit in a human readable format."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"print(result.info)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Samples__\n",
"\n",
"The result contains a `Samples` object, which contains all samples of the non-linear search.\n",
"\n",
"Each sample corresponds to a set of model parameters that were evaluated and accepted by the non linear search, \n",
"in this example emcee. \n",
"\n",
"This includes their log likelihoods, which are used for computing additional information about the model-fit,\n",
"for example the error on every parameter. \n",
"\n",
"Our model-fit used the MCMC algorithm Emcee, so the `Samples` object returned is a `SamplesMCMC` object."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"samples = result.samples\n",
"\n",
"print(\"MCMC Samples: \\n\")\n",
"print(samples)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The parameters are stored as a list of lists, where:\n",
"\n",
" - The outer list is the size of the total number of samples.\n",
" - The inner list is the size of the number of free parameters in the fit."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"samples = result.samples\n",
"\n",
"print(\"Sample 5's second parameter value (Gaussian -> normalization):\")\n",
"print(samples.parameter_lists[4][1])\n",
"print(\"Sample 10`s third parameter value (Gaussian -> sigma)\")\n",
"print(samples.parameter_lists[9][2], \"\\n\")"
],
"outputs": [],
"execution_count": null
},
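{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `parameter_lists` is a list of lists of floats, it converts naturally to a 2D numpy array, which is\n",
"convenient for vectorized analysis (a quick sketch):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Convert the samples to an array of shape (total samples, free parameters).\n",
"parameters = np.asarray(samples.parameter_lists)\n",
"\n",
"print(\"Shape of the parameter array (samples, parameters):\")\n",
"print(parameters.shape)"
],
"outputs": [],
"execution_count": null
},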
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Samples class contains the log likelihood, log prior, log posterior and weight_list of every accepted sample, where:\n",
"\n",
"- The `log_likelihood` is the value evaluated in the `log_likelihood_function`.\n",
"\n",
"- The `log_prior` encodes information on how parameter priors map log likelihood values to log posterior values.\n",
"\n",
"- The `log_posterior` is `log_likelihood + log_prior`.\n",
"\n",
"- The `weight` gives information on how samples are combined to estimate the posterior, which depends on type of search\n",
" used (for `Emcee` they are all 1's meaning they are weighted equally).\n",
"\n",
"Lets inspect the last 10 values of each for the analysis. "
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"print(\"log(likelihood), log(prior), log(posterior) and weight of the tenth sample.\")\n",
"print(samples.log_likelihood_list[9])\n",
"print(samples.log_prior_list[9])\n",
"print(samples.log_posterior_list[9])\n",
"print(samples.weight_list[9])"
],
"outputs": [],
"execution_count": null
},
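{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick consistency check (a numpy sketch), the relation `log_posterior = log_likelihood + log_prior` stated\n",
"above should hold for every sample, to within floating point precision:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Verify log_posterior = log_likelihood + log_prior across all samples.\n",
"log_likelihood = np.asarray(samples.log_likelihood_list)\n",
"log_prior = np.asarray(samples.log_prior_list)\n",
"log_posterior = np.asarray(samples.log_posterior_list)\n",
"\n",
"print(np.allclose(log_posterior, log_likelihood + log_prior))"
],
"outputs": [],
"execution_count": null
},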
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Maximum Likelihood__\n",
"\n",
"Using the `Samples` object many results can be returned as an instance of the model, using the Python class structure\n",
"of the model composition.\n",
"\n",
"For example, we can return the model parameters corresponding to the maximum log likelihood sample."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"instance = samples.max_log_likelihood()\n",
"\n",
"print(\"Max Log Likelihood `Gaussian` Instance:\")\n",
"print(\"Centre = \", instance.centre)\n",
"print(\"Normalization = \", instance.normalization)\n",
"print(\"Sigma = \", instance.sigma, \"\\n\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This makes it straight forward to plot the median PDF model:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"model_data = instance.model_data_1d_via_xvalues_from(xvalues=np.arange(data.shape[0]))\n",
"\n",
"plt.plot(range(data.shape[0]), data)\n",
"plt.plot(range(data.shape[0]), model_data)\n",
"plt.title(\"Illustrative model fit to 1D `Gaussian` profile data.\")\n",
"plt.xlabel(\"x values of profile\")\n",
"plt.ylabel(\"Profile normalization\")\n",
"plt.show()\n",
"plt.close()"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Posterior / PDF__\n",
"\n",
"The result contains the full posterior information of our non-linear search, which can be used for parameter \n",
"estimation. \n",
"\n",
"The median pdf vector is available, which estimates every parameter via 1D marginalization of their PDFs."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"instance = samples.median_pdf()\n",
"\n",
"print(\"Median PDF `Gaussian` Instance:\")\n",
"print(\"Centre = \", instance.centre)\n",
"print(\"Normalization = \", instance.normalization)\n",
"print(\"Sigma = \", instance.sigma, \"\\n\")"
],
"outputs": [],
"execution_count": null
},
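{
"cell_type": "markdown",
"metadata": {},
"source": [
"The median PDF instance can be used to plot its model fit, in exactly the same way as the maximum likelihood\n",
"instance above:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"model_data = instance.model_data_1d_via_xvalues_from(xvalues=np.arange(data.shape[0]))\n",
"\n",
"plt.plot(range(data.shape[0]), data)\n",
"plt.plot(range(data.shape[0]), model_data)\n",
"plt.title(\"Median PDF model fit to 1D `Gaussian` profile data.\")\n",
"plt.xlabel(\"x values of profile\")\n",
"plt.ylabel(\"Profile normalization\")\n",
"plt.show()\n",
"plt.close()"
],
"outputs": [],
"execution_count": null
},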
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Errors__\n",
"\n",
"Methods for computing error estimates on all parameters are provided. \n",
"\n",
"This again uses 1D marginalization, now at an input sigma confidence limit. "
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"instance_upper_sigma = samples.errors_at_upper_sigma(sigma=3.0)\n",
"instance_lower_sigma = samples.errors_at_lower_sigma(sigma=3.0)\n",
"\n",
"print(\"Upper Error values (at 3.0 sigma confidence):\")\n",
"print(\"Centre = \", instance_upper_sigma.centre)\n",
"print(\"Normalization = \", instance_upper_sigma.normalization)\n",
"print(\"Sigma = \", instance_upper_sigma.sigma, \"\\n\")\n",
"\n",
"print(\"lower Error values (at 3.0 sigma confidence):\")\n",
"print(\"Centre = \", instance_lower_sigma.centre)\n",
"print(\"Normalization = \", instance_lower_sigma.normalization)\n",
"print(\"Sigma = \", instance_lower_sigma.sigma, \"\\n\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"They can also be returned at the values of the parameters at their error values."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"instance_upper_values = samples.values_at_upper_sigma(sigma=3.0)\n",
"instance_lower_values = samples.values_at_lower_sigma(sigma=3.0)\n",
"\n",
"print(\"Upper Parameter values w/ error (at 3.0 sigma confidence):\")\n",
"print(\"Centre = \", instance_upper_values.centre)\n",
"print(\"Normalization = \", instance_upper_values.normalization)\n",
"print(\"Sigma = \", instance_upper_values.sigma, \"\\n\")\n",
"\n",
"print(\"lower Parameter values w/ errors (at 3.0 sigma confidence):\")\n",
"print(\"Centre = \", instance_lower_values.centre)\n",
"print(\"Normalization = \", instance_lower_values.normalization)\n",
"print(\"Sigma = \", instance_lower_values.sigma, \"\\n\")"
],
"outputs": [],
"execution_count": null
},
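{
"cell_type": "markdown",
"metadata": {},
"source": [
"Combining the median PDF values with the values at sigma above gives a 3.0 sigma confidence interval for each\n",
"parameter (a sketch reusing the instances computed above):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"median = samples.median_pdf()\n",
"\n",
"print(\"Centre: median value and 3.0 sigma confidence interval:\")\n",
"print(\n",
"    f\"{median.centre} \"\n",
"    f\"[{instance_lower_values.centre}, {instance_upper_values.centre}]\"\n",
")"
],
"outputs": [],
"execution_count": null
},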
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Sample Instance__\n",
"\n",
"A non-linear search retains every model that is accepted during the model-fit.\n",
"\n",
"We can create an instance of any model -- below we create an instance of the last accepted model."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"instance = samples.from_sample_index(sample_index=-1)\n",
"\n",
"print(\"Gaussian Instance of last sample\")\n",
"print(\"Centre = \", instance.centre)\n",
"print(\"Normalization = \", instance.normalization)\n",
"print(\"Sigma = \", instance.sigma, \"\\n\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Search Plots__\n",
"\n",
"The Probability Density Functions (PDF's) of the results can be plotted using the Emcee's visualization \n",
"tool `corner.py`, which is wrapped via the `EmceePlotter` object."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"search_plotter = aplt.EmceePlotter(samples=result.samples)\n",
"search_plotter.corner()"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Bayesian Evidence__\n",
"\n",
"If a nested sampling non-linear search is used, the evidence of the model is also available which enables Bayesian\n",
"model comparison to be performed (given we are using Emcee, which is not a nested sampling algorithm, the log evidence \n",
"is None).:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"log_evidence = samples.log_evidence\n",
"print(f\"Log Evidence: {log_evidence}\")"
],
"outputs": [],
"execution_count": null
},
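{
"cell_type": "markdown",
"metadata": {},
"source": [
"For searches which do estimate the evidence (e.g. nested samplers), the difference in `log_evidence` between two\n",
"model-fits gives the log Bayes factor used for model comparison. A sketch of this is commented out below, assuming\n",
"two hypothetical nested sampling results `result_model_1` and `result_model_2`:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Hypothetical sketch: log Bayes factor from two nested sampling model-fits.\n",
"# log_bayes_factor = (\n",
"#     result_model_1.samples.log_evidence - result_model_2.samples.log_evidence\n",
"# )"
],
"outputs": [],
"execution_count": null
},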
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Collection__\n",
"\n",
"The examples correspond to a model where `af.Model(Gaussian)` was used to compose the model.\n",
"\n",
"Below, we illustrate how the results API slightly changes if we compose our model using a `Collection`:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"model = af.Collection(gaussian=af.ex.Gaussian, exponential=af.ex.Exponential)\n",
"\n",
"analysis = af.ex.Analysis(data=data, noise_map=noise_map)\n",
"\n",
"search = af.Emcee(\n",
" nwalkers=50,\n",
" nsteps=1000,\n",
" number_of_cores=1,\n",
")\n",
"\n",
"result = search.fit(model=model, analysis=analysis)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `result.info` shows the result for the model with both a `Gaussian` and `Exponential` profile."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"print(result.info)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Result instances again use the Python classes used to compose the model. \n",
"\n",
"However, because our fit uses a `Collection` the `instance` has attribues named according to the names given to the\n",
"`Collection`, which above were `gaussian` and `exponential`.\n",
"\n",
"For complex models, with a large number of model components and parameters, this offers a readable API to interpret\n",
"the results."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"samples = result.samples\n",
"\n",
"instance = samples.max_log_likelihood()\n",
"\n",
"print(\"Max Log Likelihood `Gaussian` Instance:\")\n",
"print(\"Centre = \", instance.gaussian.centre)\n",
"print(\"Normalization = \", instance.gaussian.normalization)\n",
"print(\"Sigma = \", instance.gaussian.sigma, \"\\n\")\n",
"\n",
"print(\"Max Log Likelihood Exponential Instance:\")\n",
"print(\"Centre = \", instance.exponential.centre)\n",
"print(\"Normalization = \", instance.exponential.normalization)\n",
"print(\"Sigma = \", instance.exponential.rate, \"\\n\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Lists__\n",
"\n",
"All results can alternatively be returned as a 1D list of values, by passing `as_instance=False`:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"max_lh_list = samples.max_log_likelihood(as_instance=False)\n",
"print(\"Max Log Likelihood Model Parameters: \\n\")\n",
"print(max_lh_list, \"\\n\\n\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The list above does not tell us which values correspond to which parameters.\n",
"\n",
"The following quantities are available in the `Model`, where the order of their entries correspond to the parameters \n",
"in the `ml_vector` above:\n",
"\n",
" - `paths`: a list of tuples which give the path of every parameter in the `Model`.\n",
" - `parameter_names`: a list of shorthand parameter names derived from the `paths`.\n",
" - `parameter_labels`: a list of parameter labels used when visualizing non-linear search results (see below).\n",
"\n",
"For simple models like the one fitted in this tutorial, the quantities below are somewhat redundant. For the\n",
"more complex models they are important for tracking the parameters of the model."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"model = samples.model\n",
"\n",
"print(model.paths)\n",
"print(model.parameter_names)\n",
"print(model.parameter_labels)\n",
"print(model.model_component_and_parameter_names)\n",
"print(\"\\n\")"
],
"outputs": [],
"execution_count": null
},
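{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pairing these names with a list result makes the output self-describing. A minimal sketch, zipping the model's\n",
"parameter names with the `max_lh_list` computed above:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Pair each parameter name with its maximum likelihood value.\n",
"for name, value in zip(model.model_component_and_parameter_names, max_lh_list):\n",
"    print(f\"{name} = {value}\")"
],
"outputs": [],
"execution_count": null
},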
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All the methods above are available as lists."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"instance = samples.median_pdf(as_instance=False)\n",
"values_at_upper_sigma = samples.values_at_upper_sigma(sigma=3.0, as_instance=False)\n",
"values_at_lower_sigma = samples.values_at_lower_sigma(sigma=3.0, as_instance=False)\n",
"errors_at_upper_sigma = samples.errors_at_upper_sigma(sigma=3.0, as_instance=False)\n",
"errors_at_lower_sigma = samples.errors_at_lower_sigma(sigma=3.0, as_instance=False)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Latex__\n",
"\n",
"If you are writing modeling results up in a paper, you can use PyAutoFit's inbuilt latex tools to create latex table \n",
"code which you can copy to your .tex document.\n",
"\n",
"By combining this with the filtering tools below, specific parameters can be included or removed from the latex.\n",
"\n",
"Remember that the superscripts of a parameter are loaded from the config file `notation/label.yaml`, providing high\n",
"levels of customization for how the parameter names appear in the latex table. This is especially useful if your model\n",
"uses the same model components with the same parameter, which therefore need to be distinguished via superscripts."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"latex = af.text.Samples.latex(\n",
" samples=result.samples,\n",
" median_pdf_model=True,\n",
" sigma=3.0,\n",
" name_to_label=True,\n",
" include_name=True,\n",
" include_quickmath=True,\n",
" prefix=\"Example Prefix \",\n",
" suffix=\" \\\\[-2pt]\",\n",
")\n",
"\n",
"print(latex)"
],
"outputs": [],
"execution_count": null
},
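{
"cell_type": "markdown",
"metadata": {},
"source": [
"The table code can then be written to a `.tex` file and included in a document via `\\\\input{}`. A minimal sketch,\n",
"assuming the `latex` object above converts cleanly to a string (the file name `table.tex` is illustrative):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Write the latex table code to a file for inclusion in a .tex document.\n",
"with open(\"table.tex\", \"w\") as f:\n",
"    f.write(str(latex))"
],
"outputs": [],
"execution_count": null
},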
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Derived Errors (Advanced)__\n",
"\n",
"Computing the errors of a quantity like the `sigma` of the Gaussian is simple, because it is sampled by the non-linear \n",
"search. Thus, to get their errors above we used the `Samples` object to simply marginalize over all over parameters \n",
"via the 1D Probability Density Function (PDF).\n",
"\n",
"Computing errors on derived quantitys is more tricky, because it is not sampled directly by the non-linear search. \n",
"For example, what if we want the error on the full width half maximum (FWHM) of the Gaussian? In order to do this\n",
"we need to create the PDF of that derived quantity, which we can then marginalize over using the same function we\n",
"use to marginalize model parameters.\n",
"\n",
"Below, we compute the FWHM of every accepted model sampled by the non-linear search and use this determine the PDF \n",
"of the FWHM. When combining the FWHM's we weight each value by its `weight`. For Emcee, an MCMC algorithm, the\n",
"weight of every sample is 1, but weights may take different values for other non-linear searches.\n",
"\n",
"In order to pass these samples to the function `marginalize`, which marginalizes over the PDF of the FWHM to compute \n",
"its error, we also pass the weight list of the samples.\n",
"\n",
"(Computing the error on the FWHM could be done in much simpler ways than creating its PDF from the list of every\n",
"sample. We chose this example for simplicity, in order to show this functionality, which can easily be extended to more\n",
"complicated derived quantities.)"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"fwhm_list = []\n",
"\n",
"for sample in samples.sample_list:\n",
" instance = sample.instance_for_model(model=samples.model)\n",
"\n",
" sigma = instance.gaussian.sigma\n",
"\n",
" fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma\n",
"\n",
" fwhm_list.append(fwhm)\n",
"\n",
"median_fwhm, upper_fwhm, lower_fwhm = af.marginalize(\n",
" parameter_list=fwhm_list, sigma=3.0, weight_list=samples.weight_list\n",
")\n",
"\n",
"print(f\"FWHM = {median_fwhm} ({upper_fwhm} {lower_fwhm}\")"
],
"outputs": [],
"execution_count": null
},
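{
"cell_type": "markdown",
"metadata": {},
"source": [
"The PDF of the FWHM itself can be visualized by histogramming `fwhm_list`, weighting each value by its sample\n",
"weight (a matplotlib sketch):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"plt.hist(fwhm_list, weights=samples.weight_list, bins=50)\n",
"plt.title(\"PDF of the `Gaussian` FWHM derived from the samples.\")\n",
"plt.xlabel(\"FWHM\")\n",
"plt.ylabel(\"Weighted number of samples\")\n",
"plt.show()\n",
"plt.close()"
],
"outputs": [],
"execution_count": null
},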
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Result Extensions (Advanced)__\n",
"\n",
"You might be wondering what else the results contains, as nearly everything we discussed above was a part of its \n",
"`samples` property! The answer is, not much, however the result can be extended to include model-specific results for \n",
"your project. \n",
"\n",
"We detail how to do this in the **HowToFit** lectures, but for the example of fitting a 1D Gaussian we could extend\n",
"the result to include the maximum log likelihood profile:\n",
"\n",
"(The commented out functions below are llustrative of the API we can create by extending a result)."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# max_log_likelihood_profile = results.max_log_likelihood_profile"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Samples Filtering (Advanced)__\n",
"\n",
"Our samples object has the results for all three parameters in our model. However, we might only be interested in the\n",
"results of a specific parameter.\n",
"\n",
"The basic form of filtering specifies parameters via their path, which was printed above via the model and is printed \n",
"again below."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"samples = result.samples\n",
"\n",
"print(\"Parameter paths in the model which are used for filtering:\")\n",
"print(samples.model.paths)\n",
"\n",
"print(\"All parameters of the very first sample\")\n",
"print(samples.parameter_lists[0])\n",
"\n",
"samples = samples.with_paths([(\"gaussian\", \"centre\")])\n",
"\n",
"print(\"All parameters of the very first sample (containing only the Gaussian centre.\")\n",
"print(samples.parameter_lists[0])\n",
"\n",
"print(\"Maximum Log Likelihood Model Instances (containing only the Gaussian centre):\\n\")\n",
"print(samples.max_log_likelihood(as_instance=False))"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Above, we specified each path as a list of tuples of strings. \n",
"\n",
"This is how the PyAutoFit source code stores the path to different components of the model, but it is not \n",
"in-profile_1d with the PyAutoFIT API used to compose a model.\n",
"\n",
"We can alternatively use the following API:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"samples = result.samples\n",
"\n",
"samples = samples.with_paths([\"gaussian.centre\"])\n",
"\n",
"print(\"All parameters of the very first sample (containing only the Gaussian centre).\")\n",
"print(samples.parameter_lists[0])"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Above, we filtered the `Samples` but asking for all parameters which included the path (\"gaussian\", \"centre\").\n",
"\n",
"We can alternatively filter the `Samples` object by removing all parameters with a certain path. Below, we remove\n",
"the Gaussian's `centre` to be left with 2 parameters; the `normalization` and `sigma`."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"samples = result.samples\n",
"\n",
"print(\"Parameter paths in the model which are used for filtering:\")\n",
"print(samples.model.paths)\n",
"\n",
"print(\"All parameters of the very first sample\")\n",
"print(samples.parameter_lists[0])\n",
"\n",
"samples = samples.without_paths([\"gaussian.centre\"])\n",
"\n",
"print(\n",
" \"All parameters of the very first sample (containing only the Gaussian normalization and sigma).\"\n",
")\n",
"print(samples.parameter_lists[0])"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Wrap Up__\n",
"\n",
"Adding model complexity does not change the behaviour of the Result object, other than the switch\n",
"to Collections meaning that our instances now have named entries.\n",
"\n",
"When you name your model components, you should make sure to give them descriptive and information names that make \n",
"the use of a result object clear and intuitive!\n",
"\n",
"__Database__\n",
"\n",
"For large-scaling model-fitting problems to large datasets, the results of the many model-fits performed can be output\n",
"and stored in a queryable sqlite3 database. The `Result` and `Samples` objects have been designed to streamline the \n",
"analysis and interpretation of model-fits to large datasets using the database.\n",
"\n",
"Checkout the database cookbook for more details on how to use the database."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [],
"outputs": [],
"execution_count": null
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}