{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Main Tensor Formats\n",
    "\n",
    "Three of the most popular tensor decompositions are supported in *tntorch*:\n",
    "\n",
    "- [CANDECOMP/PARAFAC (CP)](https://epubs.siam.org/doi/pdf/10.1137/07070111X)\n",
    "- [Tucker](https://epubs.siam.org/doi/pdf/10.1137/S0895479898346995)\n",
    "- [Tensor Train (TT)](https://epubs.siam.org/doi/abs/10.1137/090752286?journalCode=sjoce3)\n",
    "\n",
    "Those formats are all represented using $N$ *tensor cores* (one per tensor dimension, used for CP/TT) and, optionally, up to $N$ *factor matrices* (needed for Tucker).\n",
    "\n",
    "In an $N$-D tensor of shape $I_1 \\times \\dots \\times I_N$, each $n$-th core can come in four flavors:\n",
    "\n",
    "- $R^{\\mathrm{TT}}_n \\times I_n \\times R^{\\mathrm{TT}}_{n+1}$: a TT core.\n",
    "- $R^{\\mathrm{TT}}_n \\times S_n^{\\mathrm{Tucker}} \\times R^{\\mathrm{TT}}_{n+1}$: a TT-Tucker core, accompanied by an $I_n \\times S_n^{\\mathrm{Tucker}}$ factor matrix.\n",
    "- $I_n \\times R^{\\mathrm{CP}}_n$: a CP core. Conceptually, it works as if it were a 3D TT core of shape $R^{\\mathrm{CP}}_n \\times I_n \\times R^{\\mathrm{CP}}_n$ whose slices along the 2nd mode are all diagonal matrices.\n",
    "- $S_n^{\\mathrm{Tucker}} \\times R^{\\mathrm{CP}}_n$: a CP-Tucker core, accompanied by an $I_n \\times S_n^{\\mathrm{Tucker}}$ factor matrix. Conceptually, it works as a 3D TT-Tucker core.\n",
    "\n",
    "One tensor network can combine cores of different kinds. So all in all one may have TT, TT-Tucker, Tucker, CP, TT-CP, CP-Tucker, and TT-CP-Tucker tensors. We will show examples of all.\n",
    "\n",
    "*(see [this notebook](decompositions.ipynb) to decompose full tensors into those main formats)*\n",
    "\n",
    "*(see [this notebook](other_formats.ipynb) for other structured and custom decompositions)*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## TT\n",
    "\n",
    "Tensor train cores are represented in parentheses `( )`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       " (0) (1) (2) (3) (4)\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "1   5   5   5   5   1"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "torch.set_default_dtype(torch.float64)\n",
    "import tntorch as tn\n",
    "\n",
    "tn.rand([32]*5, ranks_tt=5)"
   ]
  },
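  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can peek inside this network directly. The following is a minimal sketch; it assumes that a tntorch `Tensor` keeps its cores in the list attribute `cores` and can be decompressed into a regular PyTorch tensor with `.torch()` (both are assumptions about the library's interface):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = tn.rand([32]*5, ranks_tt=5)\n",
    "print([tuple(c.shape) for c in t.cores])  # 3D TT cores: (1, 32, 5), (5, 32, 5), ..., (5, 32, 1)\n",
    "print(t.torch().shape)  # decompress back to a full 32 x 32 x 32 x 32 x 32 tensor"
   ]
  },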
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## TT-Tucker\n",
    "\n",
    "In this format, TT cores are compressed along their spatial dimension (2nd mode) using an accompanying Tucker factor. This was considered e.g. in the original [TT paper](https://epubs.siam.org/doi/abs/10.1137/090752286?journalCode=sjoce3)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT-Tucker tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       "  6   6   6   6   6\n",
       " (0) (1) (2) (3) (4)\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "1   5   5   5   5   1"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_tt=5, ranks_tucker=6)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is an example where only *some* cores have Tucker factors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT-Tucker tensor:\n",
       "\n",
       "     32          32\n",
       " 32   |  32  32   |\n",
       "  |   6   |   |   7\n",
       " (0) (1) (2) (3) (4)\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "1   5   5   5   5   1"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_tt=5, ranks_tucker=[None, 6, None, None, 7])"
   ]
  },
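  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see which cores carry a Tucker factor, we can list the factor matrices. A minimal sketch, assuming they are stored in the list attribute `Us`, with `None` marking cores that have no factor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = tn.rand([32]*5, ranks_tt=5, ranks_tucker=[None, 6, None, None, 7])\n",
    "print([None if U is None else tuple(U.shape) for U in t.Us])\n",
    "# expected: [None, (32, 6), None, None, (32, 7)]"
   ]
  },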
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: if you want to leave some factors fixed during gradient descent, simply set them to some PyTorch tensor that has requires_grad=False."
   ]
  },
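  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, the following sketch (under the same assumption that the factors live in `t.Us`, and that `tn.rand` accepts a `requires_grad` flag) replaces one factor by a plain tensor, which does not track gradients by default:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = tn.rand([32]*5, ranks_tt=5, ranks_tucker=6, requires_grad=True)\n",
    "t.Us[0] = torch.randn(32, 6)  # requires_grad=False by default: this factor will stay fixed"
   ]
  },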
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tucker\n",
    "\n",
    "\"Pure\" Tucker is technically speaking not supported, but is equivalent to a TT-Tucker tensor with full TT-ranks. The minimal necessary ranks are automatically computed and set up for you:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT-Tucker tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       "  3   3   3   3   3\n",
       " (0) (1) (2) (3) (4)\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "1   3   9   9   3   1"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_tucker=3)  # Actually a TT-Tucker network, but just as expressive as a pure Tucker decomposition"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In other words, all $32^5$ tensors of Tucker rank $3$ can be represented by a tensor that has the shape shown above, and vice versa."
   ]
  },
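  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can sanity-check the automatically chosen TT-ranks: across the $n$-th edge, a Tucker-rank-$3$ tensor needs a TT-rank of at most $\\min(3^n, 3^{N-n})$ (the product of the Tucker ranks on the smaller side of the cut), which for $N = 5$ gives $1, 3, 9, 9, 3, 1$. A minimal sketch, assuming the TT-ranks are exposed as the attribute `ranks_tt`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = tn.rand([32]*5, ranks_tucker=3)\n",
    "print(t.ranks_tt)  # expected: 1, 3, 9, 9, 3, 1\n",
    "print([min(3**n, 3**(5 - n)) for n in range(6)])  # the same cap, computed by hand"
   ]
  },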
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## CP\n",
    "\n",
    "CP factors are shown as cores in brackets `< >`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D CP tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       " <0> <1> <2> <3> <4>\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "4   4   4   4   4   4"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_cp=4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Even though those factors work conceptually as 3D cores in a tensor train (every CP tensor [is a particular case](https://epubs.siam.org/doi/abs/10.1137/090752286?journalCode=sjoce3) of the TT format), they are stored in 2D as in a standard CP decomposition. In this case all cores have shape $32 \\times 4$.\n",
    "\n",
    "## TT-CP\n",
    "\n",
    "TT and CP cores can be combined by specifying lists of ranks for each format. You should provide $N-1$ TT ranks and $N$ CP ranks and use `None` so that they do not collide anywhere. Also note that consecutive CP ranks must coincide. Here is a tensor with 3 TT cores and 2 CP cores:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT-CP tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       " (0) (1) (2) <3> <4>\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "1   2   3   4   4   4"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_tt=[2, 3, None, None], ranks_cp=[None, None, None, 4, 4])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is another example with 2 TT cores and 3 CP cores:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT-CP tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       " <0> (1) (2) <3> <4>\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "4   4   2   5   5   5"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_tt=[None, 2, None, None], ranks_cp=[4, None, None, 5, 5])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## CP-Tucker\n",
    "\n",
    "Similarly to TT-Tucker, this model restricts the columns along CP factors to live in a low-dimensional subspace. It is also known as a [canonical decomposition with linear constraints (*CANDELINC*)](https://link.springer.com/article/10.1007/BF02293596). [Compressing a Tucker core via CP](https://www.degruyter.com/downloadpdf/j/math.2007.5.issue-3/s11533-007-0018-0/s11533-007-0018-0.pdf) leads to an equivalent format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D CP-Tucker tensor:\n",
       "\n",
       " 32  32  32  32  32\n",
       "  |   |   |   |   |\n",
       "  4   4   4   4   4\n",
       " <0> <1> <2> <3> <4>\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "2   2   2   2   2   2"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_cp=2, ranks_tucker=4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## TT-CP-Tucker\n",
    "\n",
    "Finally, we can combine all sorts of cores to get a hybrid of all 3 models:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "5D TT-CP-Tucker tensor:\n",
       "\n",
       " 32      32  32    \n",
       "  |  32   |   |  32\n",
       "  5   |   5   5   |\n",
       " (0) (1) (2) <3> <4>\n",
       " / \\ / \\ / \\ / \\ / \\\n",
       "1   2   3   10  10  10"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tn.rand([32]*5, ranks_tt=[2, 3, None, None], ranks_cp=[None, None, None, 10, 10], ranks_tucker=[5, None, 5, 5, None])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
