TensorAlgebra{V} design and code generation

Mathematical foundations and definitions specific to the Grassmann.jl implementation provide an extensible platform for computing with a universal language for finite element methods based on a discrete manifold bundle. Tools built on these foundations enable computations based on multi-linear algebra and spin groups using the geometric algebra known as Grassmann algebra or Clifford algebra. This foundation is built on a DirectSum.jl parametric type system for tangent bundles and vector spaces generating the algorithms for local tangent algebras in a global context. With this unifying mathematical foundation, it is possible to improve efficiency of multi-disciplinary research using geometric tensor calculus by relying on universal mathematical principles.

  • AbstractTensors.jl: Tensor algebra abstract type interoperability setup
  • DirectSum.jl: Tangent bundle, vector space and Submanifold definition
  • Grassmann.jl: ⟨Grassmann-Clifford-Hodge⟩ multilinear differential geometric algebra

Direct sum parametric type polymorphism

The DirectSum.jl package is a work in progress providing the necessary tools to work with an arbitrary Manifold specified by an encoding. Due to the parametric type system for the generating TensorBundle, the Julia compiler can fully preallocate and often cache values efficiently ahead of run-time. Although intended for use with the Grassmann.jl package, DirectSum can be used independently.

Let $M = T^\mu V$ be a $\mathbb{K}$-module of rank $n$; then an instance of $T^\mu V$ can be specified by the tuple $(n,\mathbb{P},g,\nu,\mu)$, where $\mathbb{P}\subseteq \langle v_\infty,v_\emptyset\rangle$ specifies the presence of the projective basis and $g:V\times V\rightarrow\mathbb{K}$ is a metric tensor specification. The type TensorBundle{n,$\mathbb{P}$,g,$\nu$,$\mu$} encodes this information as byte-encoded data available at pre-compilation, where $\mu$ is an integer specifying the order of the tangent bundle (i.e. the multiplicity limit of the Leibniz-Taylor monomials) and $\nu$ is the number of tangent variables.

\[\langle v_1,\dots,v_{n-\nu},\partial_1,\dots,\partial_\nu\rangle=M\leftrightarrow M' = \langle w_1,\dots,w_{n-\nu},\epsilon_1,\dots,\epsilon_\nu\rangle\]

where $v_i$ and $w_i$ are bases for the vectors and covectors, while $\partial_i$ and $\epsilon_j$ are bases for differential operators and scalar functions. The purpose of the TensorBundle type is to specify the $\mathbb{K}$-module basis at compile time. When assigned in a workspace, V = Submanifold(::TensorBundle) is used.
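
As a small sketch of this (the constructors used here are detailed below), a TensorBundle can be assigned to a workspace Submanifold and its byte-encoded parameters inspected with dump:

using Grassmann   # re-exports the DirectSum constructors

M = S"+++"          # a TensorBundle instance (constructors detailed below)
V = Submanifold(M)  # assign the Submanifold used as the 𝕂-module basis
dump(V)             # displays the byte-encoded parameters fixed at compile time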

The metric signature of the Submanifold{V,1} elements of a vector space $V$ can be specified with the V"..." constructor by using $+$ or $-$ to specify whether the Submanifold{V,1} element of the corresponding index squares to $+1$ or $-1$. For example, S"+++" constructs a positive definite 3-dimensional TensorBundle, so constructors such as S"..." and D"..." are convenient.

julia> ℝ^3 == V"+++" == Manifold(3)
true

It is also possible to change the diagonal scaling, such as with D"1,1,1,0" or D"0.3,2.4,1", although the Signature format has a more compact representation when limited to $+1$ and $-1$. A fully general MetricTensor type with non-diagonal components requires a matrix, e.g. MetricTensor([1 2; 2 3]).
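
For reference, a brief sketch collecting the metric constructors just mentioned:

using Grassmann

D"1,1,1,0"               # DiagonalForm with a degenerate (zero) direction
D"0.3,2.4,1"             # arbitrary diagonal scaling
MetricTensor([1 2; 2 3]) # fully general metric specified by a matrix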

Declaring an additional point at infinity is done by specifying it in the string constructor with $\infty$ at the first index (i.e. Riemann sphere S"∞+++"). The hyperbolic geometry can be declared by $\emptyset$ subsequently (i.e. hyperbolic projection S"∅+++"). Additionally, the null-basis based on the projective split for conformal geometric algebra would be specified with S"∞∅+++". These two declared basis elements are interpreted in the type system. The tangent(V,μ,ν) map can be used to specify $\mu$ and $\nu$.
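
These declarations can be combined as needed; a brief sketch of the constructors just described (the particular values of μ and ν are arbitrary):

using Grassmann

S"∞+++"               # Riemann sphere: point at infinity prepended
S"∅+++"               # hyperbolic projection basis
S"∞∅+++"              # conformal null basis from the projective split
tangent(S"+++", 2, 2) # attach ν = 2 tangent variables of order μ = 2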

To assign V = Submanifold(::TensorBundle) along with associated basis elements of the DirectSum.Basis to the local Julia session workspace, it is typical to use Submanifold elements created by the @basis macro,

julia> using Grassmann; @basis S"-++" # macro or basis"-++"
(⟨-++⟩, v, v₁, v₂, v₃, v₁₂, v₁₃, v₂₃, v₁₂₃)

The macro @basis V declares a local basis in Julia. As a result of this macro, all Submanifold{V,G} elements generated with M::TensorBundle become available in the local workspace with the specified naming arguments. The first argument provides the signature specification, the second argument is the variable name for $V$ the $\mathbb{K}$-module, and the third and fourth arguments are prefixes for the Submanifold vector names (and covector names). By default, $V$ is assigned Submanifold{M} and $v$ is the prefix for the Submanifold{V} elements.
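
For instance, following the argument order just described, a second basis can be bound to alternative names (a brief sketch; the string signature form and the names S and b are illustrative choices):

using Grassmann

@basis "-++" S b # bind the space to S and the basis-vector prefix to b
b1*b1            # squares to -1 under the ⟨-++⟩ signature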

It is entirely possible to assign multiple different bases having different signatures without any problems. The @basis macro arguments are used to assign the vector space name to $V$ and the basis elements to $v_i$, but other names can be chosen so that the local assignments don't interfere. If it is undesirable to assign these variables to a local workspace, the versatile constructs of DirectSum.Basis{V} can be used to contain or access them, which is exported to the user as the method DirectSum.Basis(V).

julia> DirectSum.Basis(V)
DirectSum.Basis{⟨-++⟩,8}(v, v₁, v₂, v₃, v₁₂, v₁₃, v₂₃, v₁₂₃)

V(::Int...) provides a convenient way to define a Submanifold by using integer indices to reference specific direct sums within the ambient space $V$.

julia> (ℝ^5)(3,5)
⟨__+_+⟩
julia> dump(ans)
Submanifold{⟨+++++⟩, 2, 0x0000000000000014} ⟨__+_+⟩

Here, calling a Manifold with a set of indices produces a Submanifold representation.

The direct sum operator $\oplus$ can be used to join spaces (alternatively $+$), and the dual space functor $'$ is an involution which toggles a dual vector space with inverted signature.

julia> V = ℝ'⊕ℝ^3
⟨-+++⟩
julia> V'
⟨+---⟩'
julia> W = V⊕V'
⟨-++++---⟩*

The direct sum of a TensorBundle and its dual $V\oplus V'$ represents the full mother space $V*$.

julia> collect(V) # all Submanifold vector basis elements
DirectSum.Basis{⟨-+++⟩,16}(⟨____⟩, ⟨-___⟩, ⟨_+__⟩, ⟨__+_⟩, ⟨___+⟩, ⟨-+__⟩, ⟨-_+_⟩, ⟨-__+⟩, ⟨_++_⟩, ⟨_+_+⟩, ⟨__++⟩, ⟨-++_⟩, ⟨-+_+⟩, ⟨-_++⟩, ⟨_+++⟩, ⟨-+++⟩)
julia> collect(Submanifold(V')) # all covector basis elements
DirectSum.Basis{⟨+---⟩',16}(w, w¹, w², w³, w⁴, w¹², w¹³, w¹⁴, w²³, w²⁴, w³⁴, w¹²³, w¹²⁴, w¹³⁴, w²³⁴, w¹²³⁴)
julia> collect(Submanifold(W)) # all mixed basis elements
DirectSum.Basis{⟨-++++---⟩*,256}(v, v₁, v₂, v₃, v₄, w¹, w², w³, w⁴, v₁₂, v₁₃, v₁₄, v₁w¹, v₁w², v₁w³, v₁w⁴, v₂₃, v₂₄, v₂w¹, v₂w², v₂w³, v₂w⁴, v₃₄, v₃w¹, v₃w², v₃w³, v₃w⁴, v₄w¹, v₄w², v₄w³, v₄w⁴, w¹², w¹³, w¹⁴, w²³, w²⁴, w³⁴, v₁₂₃, v₁₂₄, v₁₂w¹, v₁₂w², v₁₂w³, v₁₂w⁴, v₁₃₄, v₁₃w¹, v₁₃w², v₁₃w³, v₁₃w⁴, v₁₄w¹, v₁₄w², v₁₄w³, v₁₄w⁴, v₁w¹², v₁w¹³, v₁w¹⁴, v₁w²³, v₁w²⁴, v₁w³⁴, v₂₃₄, v₂₃w¹, v₂₃w², v₂₃w³, v₂₃w⁴, v₂₄w¹, v₂₄w², v₂₄w³, v₂₄w⁴, v₂w¹², v₂w¹³, v₂w¹⁴, v₂w²³, v₂w²⁴, v₂w³⁴, v₃₄w¹, v₃₄w², v₃₄w³, v₃₄w⁴, v₃w¹², v₃w¹³, v₃w¹⁴, v₃w²³, v₃w²⁴, v₃w³⁴, v₄w¹², v₄w¹³, v₄w¹⁴, v₄w²³, v₄w²⁴, v₄w³⁴, w¹²³, w¹²⁴, w¹³⁴, w²³⁴, v₁₂₃₄, v₁₂₃w¹, v₁₂₃w², v₁₂₃w³, v₁₂₃w⁴, v₁₂₄w¹, v₁₂₄w², v₁₂₄w³, v₁₂₄w⁴, v₁₂w¹², v₁₂w¹³, v₁₂w¹⁴, v₁₂w²³, v₁₂w²⁴, v₁₂w³⁴, v₁₃₄w¹, v₁₃₄w², v₁₃₄w³, v₁₃₄w⁴, v₁₃w¹², v₁₃w¹³, v₁₃w¹⁴, v₁₃w²³, v₁₃w²⁴, v₁₃w³⁴, v₁₄w¹², v₁₄w¹³, v₁₄w¹⁴, v₁₄w²³, v₁₄w²⁴, v₁₄w³⁴, v₁w¹²³, v₁w¹²⁴, v₁w¹³⁴, v₁w²³⁴, v₂₃₄w¹, v₂₃₄w², v₂₃₄w³, v₂₃₄w⁴, v₂₃w¹², v₂₃w¹³, v₂₃w¹⁴, v₂₃w²³, v₂₃w²⁴, v₂₃w³⁴, v₂₄w¹², v₂₄w¹³, v₂₄w¹⁴, v₂₄w²³, v₂₄w²⁴, v₂₄w³⁴, v₂w¹²³, v₂w¹²⁴, v₂w¹³⁴, v₂w²³⁴, v₃₄w¹², v₃₄w¹³, v₃₄w¹⁴, v₃₄w²³, v₃₄w²⁴, v₃₄w³⁴, v₃w¹²³, v₃w¹²⁴, v₃w¹³⁴, v₃w²³⁴, v₄w¹²³, v₄w¹²⁴, v₄w¹³⁴, v₄w²³⁴, w¹²³⁴, v₁₂₃₄w¹, v₁₂₃₄w², v₁₂₃₄w³, v₁₂₃₄w⁴, v₁₂₃w¹², v₁₂₃w¹³, v₁₂₃w¹⁴, v₁₂₃w²³, v₁₂₃w²⁴, v₁₂₃w³⁴, v₁₂₄w¹², v₁₂₄w¹³, v₁₂₄w¹⁴, v₁₂₄w²³, v₁₂₄w²⁴, v₁₂₄w³⁴, v₁₂w¹²³, v₁₂w¹²⁴, v₁₂w¹³⁴, v₁₂w²³⁴, v₁₃₄w¹², v₁₃₄w¹³, v₁₃₄w¹⁴, v₁₃₄w²³, v₁₃₄w²⁴, v₁₃₄w³⁴, v₁₃w¹²³, v₁₃w¹²⁴, v₁₃w¹³⁴, v₁₃w²³⁴, v₁₄w¹²³, v₁₄w¹²⁴, v₁₄w¹³⁴, v₁₄w²³⁴, v₁w¹²³⁴, v₂₃₄w¹², v₂₃₄w¹³, v₂₃₄w¹⁴, v₂₃₄w²³, v₂₃₄w²⁴, v₂₃₄w³⁴, v₂₃w¹²³, v₂₃w¹²⁴, v₂₃w¹³⁴, v₂₃w²³⁴, v₂₄w¹²³, v₂₄w¹²⁴, v₂₄w¹³⁴, v₂₄w²³⁴, v₂w¹²³⁴, v₃₄w¹²³, v₃₄w¹²⁴, v₃₄w¹³⁴, v₃₄w²³⁴, v₃w¹²³⁴, v₄w¹²³⁴, v₁₂₃₄w¹², v₁₂₃₄w¹³, v₁₂₃₄w¹⁴, v₁₂₃₄w²³, v₁₂₃₄w²⁴, v₁₂₃₄w³⁴, v₁₂₃w¹²³, v₁₂₃w¹²⁴, v₁₂₃w¹³⁴, v₁₂₃w²³⁴, v₁₂₄w¹²³, v₁₂₄w¹²⁴, v₁₂₄w¹³⁴, v₁₂₄w²³⁴, v₁₂w¹²³⁴, v₁₃₄w¹²³, v₁₃₄w¹²⁴, v₁₃₄w¹³⁴, v₁₃₄w²³⁴, v₁₃w¹²³⁴, v₁₄w¹²³⁴, v₂₃₄w¹²³, v₂₃₄w¹²⁴, v₂₃₄w¹³⁴, v₂₃₄w²³⁴, v₂₃w¹²³⁴, v₂₄w¹²³⁴, v₃₄w¹²³⁴, v₁₂₃₄w¹²³, v₁₂₃₄w¹²⁴, v₁₂₃₄w¹³⁴, v₁₂₃₄w²³⁴, v₁₂₃w¹²³⁴, v₁₂₄w¹²³⁴, v₁₃₄w¹²³⁴, v₂₃₄w¹²³⁴, v₁₂₃₄w¹²³⁴)

In addition to the direct-sum operation, several other operations are supported, such as $\cup,\cap,\subseteq,\supseteq$ for set operations. Due to the design of the TensorBundle dispatch, these operations enable code optimizations at compile-time provided by the bit parameters.

julia> ℝ⊕ℝ' ⊇ Manifold(1)
true
julia> ℝ ∩ ℝ' == Manifold(0)
true
julia> ℝ ∪ ℝ' == ℝ⊕ℝ'
true

Operations on Manifold types are automatically handled at compile time.
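
As a sketch of this, the compiled result of a set operation can be inspected with @code_typed from InteractiveUtils (assuming, as stated above, that the union is resolved from the bit-encoded type parameters):

using Grassmann
using InteractiveUtils # provides @code_typed outside the REPL

V, W = ℝ^2, (ℝ^3)'
@code_typed V ∪ W # the union Manifold is resolved during compilation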

More information about DirectSum is available at https://github.com/chakravala/DirectSum.jl

Higher dimensions with SparseBasis and ExtendedBasis

In order to work with a TensorAlgebra{V}, it is necessary for some computations to be cached. This is usually done automatically when accessed.

julia> Λ(7) ⊕ Λ(7)'
DirectSum.SparseBasis{⟨+++++++-------⟩*,16384}(v, ..., v₁₂₃₄₅₆₇w¹²³⁴⁵⁶⁷)

One way of declaring the cache for all 3 combinations of a TensorBundle{N} and its dual is to ask for the sum Λ(V) + Λ(V)', which is equivalent to Λ(V⊕V') as an algebra; however, the latter form alone does not initialize the cache of all 3 combinations.
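
For example, a brief sketch of the two caching calls just described:

using Grassmann

V = ℝ^7
Λ(V) + Λ(V)' # initializes the cache for Λ(V), Λ(V)', and Λ(V⊕V')
Λ(V ⊕ V')    # yields the equivalent algebra, but caches only this combination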

Staging of precompilation and caching is designed so that a user can smoothly transition between very high dimensional and low dimensional algebras in a single session, with varying levels of extra caching and optimizations. The parametric type formalism in Grassmann is highly expressive and enables pre-allocation of geometric algebra computations involving specific sparse subalgebras, including the representation of rotational groups.

It is possible to reach elements with up to $N=62$ vertices from a TensorAlgebra having higher maximum dimensions than supported by Julia natively.

julia> Λ(62)
DirectSum.ExtendedBasis{⟨11111111111111111111111111111111111111111111111111111111111111⟩,4611686018427387904}(v, ..., v₁₂₃₄₅₆₇₈₉₀abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ)
julia> Λ(62).v32a87Ng
-1v₂₃₇₈agN

The 62 indices require full alpha-numeric labeling with lower-case and capital letters. This now allows you to reach up to $4,611,686,018,427,387,904$ dimensions with Julia using Grassmann. Then the volume element is

v₁₂₃₄₅₆₇₈₉₀abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

Full Multivector allocations are only possible for $N\leq22$, but sparse operations are also available at higher dimensions. While DirectSum.Basis{V} is a container for the TensorAlgebra generators of $V$, the Basis is only cached for $N\leq8$. For the range of dimensions $8<N\leq22$, the SparseBasis type is used.

julia> Λ(22)
DirectSum.SparseBasis{⟨++++++++++++++++++++++⟩,4194304}(v, ..., v₁₂₃₄₅₆₇₈₉₀abcdefghijkl)

This is the largest SparseBasis that can be generated with Julia, due to array size limitations.

To reach higher dimensions with $N>22$, the DirectSum.ExtendedBasis type is used. It is sufficient to work with a 64-bit representation (which is the default), and it turns out that the 62 standard keyboard characters fit this encoding exactly.

julia> V = ℝ^22
⟨++++++++++++++++++++++⟩
julia> Λ(V+V')
DirectSum.ExtendedBasis{⟨++++++++++++++++++++++----------------------⟩*,17592186044416}(v, ..., v₁₂₃₄₅₆₇₈₉₀abcdefghijklw¹²³⁴⁵⁶⁷⁸⁹⁰ABCDEFGHIJKL)

At 22 dimensions and lower there is better caching, with further extra caching for 8 dimensions or less. Thus, the largest Hilbert space that is fully reachable has 4,194,304 dimensions, but we can still reach out to 4,611,686,018,427,387,904 dimensions with the ExtendedBasis built in. It is still feasible to extend to a further super-extended 128-bit representation using the UInt128 type (but this will require further modifications of internals and helper functions). To reach into infinity even further, it is theoretically possible to construct ultra-extensions also using dictionaries. Full Multivector elements are not representable when ExtendedBasis is used, but the performance of the Basis and sparse elements should be just as fast as for lower dimensions for the current SubAlgebra and TensorAlgebra types. The sparse representations are a work in progress to be improved with time.
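
Even beyond the full Multivector limit, sparse basis-blade operations remain available; a minimal sketch (the dimension 30 is an arbitrary choice):

using Grassmann

V = ℝ^30          # N > 22, so DirectSum.ExtendedBasis is used
Λ(V).v1 ∧ Λ(V).v2 # sparse blade products still work in high dimensions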

Interoperability for TensorAlgebra{V}

The AbstractTensors package is intended for universal interoperation of the abstract TensorAlgebra type system. All TensorAlgebra{V} subtypes have type parameter $V$, used to store a Submanifold{M} value, which is parametrized by $M$ the TensorBundle choice. This means that different tensor types can have a commonly shared underlying $\mathbb{K}$-module parametric type expressed by defining V::Submanifold{M}. Each TensorAlgebra subtype must be accompanied by a corresponding TensorBundle parameter, which is fully static at compile time. Due to the parametric type system for the $\mathbb{K}$-module types, the compiler can fully pre-allocate and often cache values ahead of run-time.

Since TensorBundle choices are fundamental to TensorAlgebra operations, the universal interoperability between TensorAlgebra{V} elements with different associated TensorBundle choices is naturally realized by applying the union morphism to operations, e.g. $\bigwedge :\Lambda^{p_1}V_1\times\dots\times\Lambda^{p_g}V_g \rightarrow \Lambda^{\sum_kp_k}\bigcup_k V_k$. Some of the method names like $+,-,*,\otimes,\circledast,\odot,\boxtimes,\star$ for TensorAlgebra elements are shared across different packages, with interoperability.

function op(::TensorAlgebra{V},::TensorAlgebra{V}) where V
    # well defined operations if V is shared
end # but what if V ≠ W in the input types?

function op(a::TensorAlgebra{V},b::TensorAlgebra{W}) where {V,W}
    VW = V ∪ W        # VectorSpace type union
    op(VW(a),VW(b))   # makes call well-defined
end # this option is automatic with interop(a,b)

# alternatively for evaluation of forms, VW(a)(VW(b))
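
As a brief sketch of this promotion, consider basis elements drawn from two different spaces; both operands are lifted into the union space before the product is taken (the specific elements are arbitrary):

using Grassmann

a = Λ(2).v1 # basis element of a 2-dimensional space
b = Λ(3).v3 # basis element of a 3-dimensional space
a ∧ b       # computed after promoting both operands to the union space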

Additionally, a universal unit volume element can be specified in terms of LinearAlgebra.UniformScaling, which is independent of $V$ and has its interpretation only instantiated by context of TensorAlgebra{V} elements being operated on. Interoperability of LinearAlgebra.UniformScaling as a pseudoscalar element which takes on the TensorBundle form of any other TensorAlgebra element is handled globally. This enables the usage of I from LinearAlgebra as a universal pseudoscalar element defined at every point $x$ of a Manifold, which is mathematically denoted by $I = I(x)$ and specified by the $g(x)$ bilinear tensor field of $TM$.
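
For instance, a minimal sketch (assuming the geometric product with I is among the globally handled operations):

using Grassmann, LinearAlgebra

@basis S"+++"
v1 * I # here I is interpreted as the pseudoscalar of the ambient ⟨+++⟩ space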

More information about AbstractTensors is available at https://github.com/chakravala/AbstractTensors.jl