{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Differentiation\n",
    "\n",
    "To derive a tensor network one just needs to derive each core along its spatial dimension (if it has one)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3D TT tensor:\n",
       "\n",
       " 32  32  32\n",
       "  |   |   |\n",
       " (0) (1) (2)\n",
       " / \\ / \\ / \\\n",
       "1   3   3   1"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "torch.set_default_dtype(torch.float64)\n",
    "import tntorch as tn\n",
    "\n",
    "t = tn.rand([32]*3, ranks_tt=3, requires_grad=True)\n",
    "t"
   ]
  },
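  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the core-wise idea (a minimal sketch, assuming unit grid spacing and a forward-difference stencil, which need not match the scheme tntorch uses internally), differentiating along the first variable only touches core 0:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: finite-difference core 0 along its spatial (middle) axis only\n",
    "d0 = t.clone()\n",
    "core = d0.cores[0]  # first TT core, shape (1, 32, 3)\n",
    "d0.cores[0] = torch.cat([core[:, 1:, :] - core[:, :-1, :],\n",
    "                         torch.zeros_like(core[:, :1, :])], dim=1)  # zero-pad to keep the shape\n",
    "d0"
   ]
  },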
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Basic Derivatives\n",
    "\n",
    "To derive w.r.t. one or several variables, use `partial()`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3D TT tensor:\n",
       "\n",
       " 32  32  32\n",
       "  |   |   |\n",
       " (0) (1) (2)\n",
       " / \\ / \\ / \\\n",
       "1   3   3   1"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.partial(t, dim=[0, 1], order=2)"
   ]
  },
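  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since derivatives are again TT tensors, `partial()` composes with ordinary tensor arithmetic. For example (a usage sketch, with tntorch's default bounds), a discrete Laplacian is the sum of the unmixed second derivatives:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sum of second derivatives along each dimension (a discrete Laplacian)\n",
    "laplacian = (tn.partial(t, dim=0, order=2)\n",
    "             + tn.partial(t, dim=1, order=2)\n",
    "             + tn.partial(t, dim=2, order=2))\n",
    "laplacian"
   ]
  },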
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Many Derivatives at Once\n",
    "\n",
    "Thanks to [mask tensors](logic.ipynb) we can specify and consider groups of many derivatives at once using the function `partialset()`. For example, the following tensor encodes *all* 2nd-order derivatives that contain $x$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3D TT tensor:\n",
      "\n",
      " 96  96  96\n",
      "  |   |   |\n",
      " (0) (1) (2)\n",
      " / \\ / \\ / \\\n",
      "1   9   9   1\n",
      "\n"
     ]
    }
   ],
   "source": [
    "x, y, z = tn.symbols(t.dim())\n",
    "d = tn.partialset(t, order=2, mask=x)\n",
    "print(d)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can check by summing squared norms:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(48342.2888, grad_fn=<SumBackward0>)\n",
      "tensor(48342.2888, grad_fn=<ThAddBackward>)\n"
     ]
    }
   ],
   "source": [
    "print(tn.normsq(d))\n",
    "print(tn.normsq(tn.partial(t, 0, order=2)) + tn.normsq(tn.partial(t, [0, 1], order=1)) + tn.normsq(tn.partial(t, [0, 2], order=1)))"
   ]
  },
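  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `t` was built with `requires_grad=True`, these norms are differentiable w.r.t. the TT cores, so they can serve as a smoothness penalty in gradient-based optimization (a sketch of the pattern; see the applications below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: backpropagate a derivative-based penalty to the TT cores of t\n",
    "loss = tn.normsq(d)\n",
    "loss.backward()\n",
    "print(t.cores[0].grad.shape)  # gradients have landed on the cores"
   ]
  },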
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The method with masks is attractive because its cost scales linearly with dimensionality $N$. Computing all order-$O$ derivatives costs $O(N O^3 R^2)$ with `partialset()` vs. $O(N^{(O+1)} R^2)$ with the naive `partial()`.\n",
    "\n",
    "### Applications\n",
    "\n",
    "See [this notebook](completion.ipynb) for an example of tensor optimization that tries to maximize an interpolator's smoothness. Tensor derivatives are also used for some [vector field](vector_fields.ipynb) computations and in the [active subspace method](active_subspaces.ipynb)."
   ]
  }
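  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, the numeric sketch promised above: plugging example values into the two cost formulas (illustrative numbers only, not a benchmark):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Back-of-the-envelope comparison of the two cost formulas above\n",
    "N, O, R = 10, 2, 5  # dimensionality, derivative order, TT rank (example values)\n",
    "print('partialset():', N * O**3 * R**2)     # O(N O^3 R^2)\n",
    "print('naive partial():', N**(O+1) * R**2)  # O(N^(O+1) R^2)"
   ]
  }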
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}