Re: Looking for: tensor-library for sparse tensor based triple store

@Tristan: One of my considerations was also extending NumPy. They
already have a tensordot implementation for multidimensional arrays.
I'm curious how it performs.
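A dense toy of that tensordot call, for anyone who wants to try it (shapes chosen arbitrarily):

```python
import numpy as np

# numpy.tensordot contracts chosen axes of two multidimensional arrays.
# Here T's last axis is summed against D's first axis, leaving a 4-D result.
T = np.random.rand(2, 3, 4)
D = np.random.rand(4, 5, 6)

R = np.tensordot(T, D, axes=([2], [0]))
print(R.shape)  # (2, 3, 5, 6)
```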
@John, Tristan: Thanks for the help. I'll let you know when I have some
results to show.

Best,
Alex

On Tuesday, 13 June 2017 at 14:04 +0900, Tristan Hascoet wrote:
> Hi,
> 
> Sorry I'm a bit late, but you might want to check out this:
> https://github.com/mrocklin/sparse/
> 
> "This implements sparse multidimensional arrays on top of NumPy and
> Scipy.sparse. It generalizes the scipy.sparse.coo_matrix layout but
> extends beyond just rows and columns to an arbitrary number of
> dimensions. The original motivation is for machine learning
> algorithms, but it is intended for somewhat general use."
> 
> It might be particularly relevant for very large datasets, as it has
> been integrated with dask for parallel out-of-core computations
> (http://dask.pydata.org/en/latest/array-sparse.html): "By swapping
> out in-memory numpy arrays with in-memory sparse arrays we can reuse
> the blocked algorithms of Dask.array to achieve parallel and
> distributed sparse arrays."
> I would also be very interested in hearing how your experiments go. I
> hope this helps, good luck.
> Tristan
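The generalized COO layout described in the quote can be sketched in a few lines of plain Python (an illustrative toy with made-up names, not the actual pydata/sparse implementation):

```python
# Toy n-dimensional COO layout: store only the coordinates of True
# entries, generalizing scipy.sparse.coo_matrix beyond rows and columns.
# Illustrative only -- the real pydata/sparse library stores coordinate
# and value arrays and supports full array operations.

class ToyCOO:
    def __init__(self, shape, coords):
        self.shape = shape            # e.g. (m, m, m) for a 3-D tensor
        self.coords = set(coords)     # index tuples whose value is True

    def __getitem__(self, idx):
        return idx in self.coords     # every other entry is False

# A 3-D boolean tensor with two True entries out of 10**18 possible ones.
t = ToyCOO((10**6, 10**6, 10**6), [(0, 1, 2), (5, 5, 5)])
print(t[(0, 1, 2)], t[(1, 1, 1)])  # True False
```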
> 
> 
> 
> 2017-06-12 21:06 GMT+09:00 John Erickson <olyerickson@gmail.com>:
> > Please keep us posted on how this work progresses.
> > 
> > I'm particularly interested; we have recently begun work on a
> > framework (or at least a set of repeatable processes) for assembling
> > tensors based on experimental data persisted to triplestore-based
> > knowledge graphs.
> > 
> > Thanks!
> > 
> > John
> > 
> > On Mon, Jun 12, 2017 at 7:48 AM, Alexander Bigerl
> > <bigerl@informatik.uni-leipzig.de> wrote:
> > 
> > > Thank you both. Especially tensorlab looks promising.
> > >
> > > Best,
> > > Alex
> > >
> > > On Friday, 19 May 2017 at 05:58 -0400, John Erickson wrote:
> > >
> > > Tensorlab? http://tensorlab.net/
> > >
> > > On Friday, 19 May 2017 at 16:10 +0200, Jörn Hees wrote:
> > >
> > > RESCAL? https://github.com/mnick/rescal.py
> > >
> > > Best,
> > > Jörn
> > >
> > > On 18 May 2017, at 18:28, Alexander Bigerl
> > > <bigerl@informatik.uni-leipzig.de> wrote:
> > >
> > > Hi everyone,
> > >
> > > I am working on a tensor-based triple store to query triple
> > > patterns (not full SPARQL). Therefore I'm looking for a suitable
> > > library supporting sparse tensor products. The programming language
> > > doesn't matter, but it would be nice if it was optimized for
> > > orthonormal-based tensors (meaning it doesn't need to distinguish
> > > between co- and contravariant dimensions for multiplication).
> > >
> > > In more detail:
> > >
> > > I represent my data like this:
> > >
> > >       • I have tensors storing boolean values.
> > >       • They are n >= 3 dimensional and every dimension has the
> > >         same size m > 1000000.
> > >       • Every dimension uses a natural number index 0...m.
> > >       • The tensors are orthonormal-based, so I don't need to
> > >         distinguish between co- and contravariant dimensions.
> > >       • There are only very few true values in every tensor, so the
> > >         rest of the values are false. Therefore it should be sparse;
> > >         non-sparse is not an option because of at least 1000000^3
> > >         entries.
> > >
> > > I'm looking for:
> > >
> > >       • an efficient sparse n-D tensor implementation with support
> > >         for a fast inner product like: Tαγβ • Dβδε = Rαγδε
> > >       • optional: support for pipelining multiple operations
> > >       • optional: support for logical-and or pointwise
> > >         multiplication of equal-dimensioned tensors
> > >
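The requested inner product maps directly onto a dense contraction in NumPy; a minimal sketch (small arbitrary shapes, with einsum index letters standing in for the Greek ones):

```python
import numpy as np

# Dense sketch of the contraction T_agb * D_bde -> R_agde: sum over the
# shared index b. Booleans are cast to integers so einsum counts the
# matches; "> 0" turns the count back into "does any matching b exist?".
T = np.random.rand(3, 4, 5) > 0.5   # boolean tensor, as in the post
D = np.random.rand(5, 6, 7) > 0.5

R = np.einsum('agb,bde->agde', T.astype(np.int64), D.astype(np.int64)) > 0
print(R.shape)  # (3, 4, 6, 7)
```

The hard part, as the post says, is doing this sparsely: dense arrays with 1000000^3 entries are infeasible, so this only illustrates the semantics of the operation.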
> > > The following libraries don't do the trick, for these reasons:
> > >
> > >       • TensorFlow: misses multiplication with non-dense, non-2D
> > >         matrices
> > >       • scipy.sparse: supports only 2D representations and would
> > >         output a dense ndarray for the dot product
> > >       • Theano: supports only 2D sparse tensors
> > >       • Shared Scientific Toolbox and Universal Java Matrix Package:
> > >         don't support multiplication of n-D sparse tensors
> > >
> > > Whoever is wondering where the triples are: they are mapped to the
> > > dimensions' indices, so that the coordinates of a true entry in a
> > > 3D tensor represent a triple.
> > >
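The mapping itself can be sketched like this (hypothetical names; a plain dict and set stand in for whatever index structures a real implementation would use):

```python
# Toy sketch: every distinct RDF term gets an integer id, and each
# triple (s, p, o) becomes the coordinate of one True entry in a sparse
# 3-D boolean tensor. Names here are illustrative, not from any library.

triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
]

term_to_id = {}

def tid(term):
    """Assign each distinct term the next free integer index."""
    return term_to_id.setdefault(term, len(term_to_id))

# The sparse tensor is just the set of coordinates that hold True.
coords = {(tid(s), tid(p), tid(o)) for (s, p, o) in triples}

print(sorted(coords))  # [(0, 1, 2), (2, 1, 3)]
```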
> > > I would be very thankful for any comments or recommendations.
> > >
> > > Kind regards,
> > > Alexander Bigerl
> > 
> > --
> > John S. Erickson, Ph.D.
> > Director of Operations, The Rensselaer IDEA
> > Deputy Director, Web Science Research Center (RPI)
> > <http://idea.rpi.edu/> <olyerickson@gmail.com>
> > Twitter & Skype: olyerickson

Received on Wednesday, 14 June 2017 17:34:18 UTC