- From: John Erickson <olyerickson@gmail.com>
- Date: Mon, 12 Jun 2017 08:06:56 -0400
- To: Alexander Bigerl <bigerl@informatik.uni-leipzig.de>
- Cc: Jörn Hees <j_hees@cs.uni-kl.de>, Linking Open Data <public-lod@w3.org>
Please keep us posted on how this work progresses. I'm particularly
interested; we have recently begun work on a framework (or at least a set
of repeatable processes) for assembling tensors based on experimental data
persisted to triplestore-based knowledge graphs.

Thanks!
John

On Mon, Jun 12, 2017 at 7:48 AM, Alexander Bigerl
<bigerl@informatik.uni-leipzig.de> wrote:
> Thank you both. Especially Tensorlab looks promising.
>
> Best,
> Alex
>
> On Friday, 19.05.2017, at 05:58 -0400, John Erickson wrote:
>
> Tensorlab? http://tensorlab.net/
>
> On Friday, 19.05.2017, at 16:10 +0200, Jörn Hees wrote:
>
> RESCAL? https://github.com/mnick/rescal.py
>
> Best,
> Jörn
>
> On 18 May 2017, at 18:28, Alexander Bigerl
> <bigerl@informatik.uni-leipzig.de> wrote:
>
> Hi everyone,
>
> I am working on a tensor-based triple store to query triple patterns (not
> full SPARQL). Therefore I'm looking for a suitable library that supports
> sparse tensor products. The programming language doesn't matter, but it
> would be nice if it were optimized for tensors over an orthonormal basis
> (meaning it doesn't need to distinguish between co- and contravariant
> dimensions for multiplication).
>
> In more detail, I represent my data like this:
>
> • I have tensors storing boolean values.
> • They are n >= 3 dimensional and every dimension has the same size
>   m > 1000000.
> • Every dimension uses a natural number index 0...m.
> • The tensors are defined over an orthonormal basis, so I don't need to
>   distinguish between co- and contravariant dimensions.
> • There are only very few true values in every tensor; the rest are
>   false. Therefore it should be sparse. A dense representation is not an
>   option because of at least 1000000^3 entries.
>
> I'm looking for:
>
> • an efficient sparse n-D tensor implementation with support for a fast
>   inner product like T_αγβ • D_βδε = R_αγδε (contraction over the shared
>   index β)
> • optional: support for pipelining multiple operations
> • optional: support for logical AND or pointwise multiplication of
>   equal-dimensioned tensors
>
> The following libraries don't do the trick, for these reasons:
>
> • TensorFlow: doesn't support multiplication of sparse tensors that are
>   not 2-D matrices
> • scipy.sparse: supports only 2-D representations and would output a
>   dense ndarray for the dot product
> • Theano: supports only 2-D sparse tensors
> • Shared Scientific Toolbox and Universal Java Matrix Package: don't
>   support multiplication of n-D sparse tensors
>
> For anyone wondering where the triples are: they are mapped to the
> dimensions' indices, so that the coordinates of a true value in a 3-D
> tensor represent a triple.
>
> I would be very thankful for any comments or recommendations.
>
> Kind regards,
> Alexander Bigerl

--
John S. Erickson, Ph.D.
Director of Operations, The Rensselaer IDEA
Deputy Director, Web Science Research Center (RPI)
<http://idea.rpi.edu/> <olyerickson@gmail.com>
Twitter & Skype: olyerickson
Received on Monday, 12 June 2017 12:09:23 UTC
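A minimal sketch of the representation described above, added here for illustration only: a sparse boolean tensor held as a set of coordinate tuples, with triple-pattern matching, pointwise logical AND, and a boolean inner product over one shared index (T_αγβ • D_βδε = R_αγδε). It uses just the Python standard library; the names BoolCOO, match, logical_and and contract are hypothetical and not taken from any of the libraries mentioned in the thread, and this is a toy illustration of the coordinate mapping rather than a scalable implementation.

    # Hypothetical sketch: a sparse boolean tensor kept as a set of coordinate
    # tuples (COO), so the coordinates of each "true" entry in a 3-D tensor
    # encode one (subject, predicate, object) triple.
    from collections import defaultdict
    from itertools import product


    class BoolCOO:
        """Sparse boolean n-D tensor stored as a set of index tuples."""

        def __init__(self, ndim, coords=()):
            self.ndim = ndim
            self.coords = {tuple(c) for c in coords}

        def match(self, pattern):
            """Triple-pattern lookup; None acts as a wildcard in any position."""
            return {c for c in self.coords
                    if all(p is None or p == v for p, v in zip(pattern, c))}

        def logical_and(self, other):
            """Pointwise AND of two equal-dimensioned boolean tensors."""
            return BoolCOO(self.ndim, self.coords & other.coords)


    def contract(t, d, t_axis, d_axis):
        """Boolean inner product over one shared index, e.g.
        T_αγβ • D_βδε = R_αγδε: the entrywise product becomes logical AND,
        the sum over the shared index β becomes logical OR."""
        t_by_key, d_by_key = defaultdict(list), defaultdict(list)
        for c in t.coords:
            t_by_key[c[t_axis]].append(c[:t_axis] + c[t_axis + 1:])
        for c in d.coords:
            d_by_key[c[d_axis]].append(c[:d_axis] + c[d_axis + 1:])
        # Only index values that are nonzero in both tensors can contribute.
        result = set()
        for key in t_by_key.keys() & d_by_key.keys():
            for left, right in product(t_by_key[key], d_by_key[key]):
                result.add(left + right)
        return BoolCOO(t.ndim + d.ndim - 2, result)


    # Usage: two triples, a pattern lookup, and a join of object against subject.
    graph = BoolCOO(3, [(0, 1, 2), (2, 1, 3)])
    print(graph.match((0, None, None)))         # {(0, 1, 2)}
    print(contract(graph, graph, 2, 0).coords)  # {(0, 1, 1, 3)}

Grouping the nonzero coordinates by the contracted index keeps the cost of the join proportional to the number of true entries rather than to the dense size m^n, which is the point of the sparse representation requested in the thread.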