abstract: In many applications in science and engineering, enormous amounts of \emph{multirelational data} are produced. These data usually have a complex intrinsic structure and generally depend on various parameters; hence, they can be interpreted as multidimensional data. Although computational power has increased drastically over the last decades, directly treating such multidimensional data is still \emph{almost} impossible due to the \textit{curse of dimensionality}: the memory required to store multidimensional data, as well as the associated computational cost, grows exponentially with the dimensionality.
\emph{Tensors} (multi-way arrays) are an essential tool to mitigate this issue, as they often provide a natural and compact representation of such massive multidimensional data. There has been significant progress in the use of tensor decompositions for feature extraction; these decompositions allow us to select the necessary features from a high-dimensional feature space.
We use the tensor train decomposition to build a classification algorithm for multidimensional data that are, in general, not separable by a linear boundary. Therefore, we use \emph{Kernelized Support Vector Machines} as the basis of our approach. To preserve the structure of the input data, we work directly with tensors as inputs. For classification in the multidimensional case, we propose the \emph{Support Tensor Train Machine}, which employs the tensor train format and is thus not restricted to rank-one tensors. We demonstrate the robustness and efficiency of the proposed method by means of numerical experiments.