tion, even though tensor ring factorization has been proposed in [18]. The different novelties as compared to [18] include the initialization algorithm, the exclusion of tensor factor normalization, the conversion of the tensor completion problem into different least squares sub-problems, and the analysis of storage and computational complexity.

The proposed algorithm is evaluated on a variety of datasets, including Einstein's image, the Extended YaleFace Dataset B, and a high speed video. The results are compared with the tensor train completion algorithms in [7, 16], and the additional structure in the tensor ring is shown to significantly improve the performance as compared to using the TT structure.

The rest of the paper is organized as follows. In Section 2 we introduce the basic notation and preliminaries on the TR decomposition. In Section 3 we outline the problem statement and propose the main algorithm; we also describe the computational complexity of the proposed algorithm. Following that, we test the algorithm extensively against competing methods on a number of real and synthetic data experiments in Section 4. Finally, we provide conclusions and future research directions in Section 5.

2. Notation and Preliminaries

where $i_k$ is the location index along the $k$-th mode. A colon is applied to represent all the elements of a mode in a tensor, e.g., $X(:, i_2, \cdots, i_n)$ represents the fiber along mode 1, and $X(:, :, i_3, i_4, \cdots, i_n)$ represents the slice along modes 1 and 2, and so forth. As in the matrix case, the Hadamard product between tensors is the entry-wise product of the two tensors. $\mathrm{vec}(\cdot)$ represents the vectorization of the tensor in the argument; the vectorization is carried out lexicographically over the index set, stacking the elements on top of each other in that order. The Frobenius norm of a tensor is the same as the vector $\ell_2$ norm of the tensor after vectorization, i.e., $\|X\|_F = \|\mathrm{vec}(X)\|_{\ell_2}$. $\times$ between matrices is the standard matrix product operation.

Definition 1. (Mode-$i$ unfolding [4]) Let $X \in \mathbb{R}^{I_1 \times \cdots \times I_n}$ be an $n$-mode tensor. The mode-$i$ unfolding of $X$, denoted $X_{[i]}$, matricizes $X$ by putting the $i$-th mode along the matrix rows and the remaining modes, in their original order, along the columns, such that

$X_{[i]} \in \mathbb{R}^{I_i \times (I_1 \cdots I_{i-1} I_{i+1} \cdots I_n)}.$  (1)

Definition 2. (Left Unfolding and Right Unfolding [9]) Let $X \in \mathbb{R}^{R_{i-1} \times I_i \times R_i}$ be a third-order tensor. The left unfolding is the matrix obtained by taking the indices of the first two modes as row indices and the indices of the third mode as column indices, such that

$L(X) = (X_{[3]})^T \in \mathbb{R}^{(R_{i-1} I_i) \times R_i}.$  (2)

Similarly, the right unfolding gives

$R(X) = X_{[1]} \in \mathbb{R}^{R_{i-1} \times (I_i R_i)}.$  (3)

Definition 3. (Mode-$i$ canonical matricization [4]) Let $X \in \mathbb{R}^{I_1 \times \cdots \times I_n}$ be an $n$-th order tensor. The mode-$i$ canonical matricization gives

$X_{\langle i \rangle} \in \mathbb{R}^{(\prod_{t=1}^{i} I_t) \times (\prod_{t=i+1}^{n} I_t)},$  (4)

such that any entry in $X_{\langle i \rangle}$ satisfies

$X_{\langle i \rangle}(\overline{i_1 \cdots i_i}, \overline{i_{i+1} \cdots i_n}) = X(i_1, \cdots, i_n),$  (5)

where $\overline{i_1 \cdots i_k}$ denotes the lexicographic linear index over the grouped modes.

Definition 4. (Tensor Ring Decomposition [18]) The tensor ring decomposition represents an $n$-th order tensor $X$ entry-wise as

$X(i_1, \cdots, i_n) = \sum_{r_1=1}^{R_1} \cdots \sum_{r_n=1}^{R_n} U_1(r_n, i_1, r_1) \cdots U_n(r_{n-1}, i_n, r_n),$  (6)

where the $U_i \in \mathbb{R}^{R_{i-1} \times I_i \times R_i}$ form a set of third-order tensors, also named matrix product states (MPS), which constitute the bases of the tensor ring structure. Note that $U_j(:, i_j, :) \in \mathbb{R}^{R_{j-1} \times 1 \times R_j}$ can be regarded as a matrix of size $R_{j-1} \times R_j$; thus (6) is equivalent to

$X(i_1, \cdots, i_n) = \mathrm{tr}(U_1(:, i_1, :) \times \cdots \times U_n(:, i_n, :)).$  (7)

Remark 1. (Tensor Ring Rank (TR-Rank)) In the formulation of the tensor ring, the tensor ring rank is the vector $[R_1, \cdots, R_n]$. In general, the $R_i$ need not be the same. In our set-up, we set $R_i = R$ for all $i = 1, \cdots, n$, and the scalar $R$ is referred to as the tensor ring rank in the remainder of this paper.
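As a concrete illustration of (7), the following NumPy sketch evaluates a single entry of a tensor-ring tensor by chaining the lateral slices and taking the trace. The function name and the rank-1 sanity check are ours, not from the paper.

```python
import numpy as np

def tr_entry(factors, idx):
    """Evaluate one entry of a tensor-ring tensor via Eq. (7):
    X(i1,...,in) = tr(U1(:,i1,:) @ ... @ Un(:,in,:))."""
    # Start from the slice of the first factor and chain matrix products.
    M = factors[0][:, idx[0], :]
    for U, i in zip(factors[1:], idx[1:]):
        M = M @ U[:, i, :]
    return np.trace(M)

# Sanity check: with all ring ranks equal to 1 the entry reduces to a
# plain product of scalars, as a rank-1 outer product would give.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((1, 4, 1)) for _ in range(3)]
x = tr_entry(factors, (1, 2, 3))
expected = factors[0][0, 1, 0] * factors[1][0, 2, 0] * factors[2][0, 3, 0]
assert abs(x - expected) < 1e-12
```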
Remark 2. (Tensor Train [15]) The tensor train is a special case of the tensor ring with $R_n = 1$.

Based on the formulation of the tensor ring structure, we define a tensor connect product, an operation between the MPSs, to describe the generation of the high-order tensor $X$ from the set of MPSs $U_{i:i=1,\cdots,n}$. Let $R_0 \triangleq R_n$ for ease of exposition.

Definition 5. (Tensor Connect Product) Let $U_i \in \mathbb{R}^{R_{i-1} \times I_i \times R_i}$, $i = 1, \cdots, n$, be $n$ third-order tensors. The tensor connect product between $U_j$ and $U_{j+1}$ is defined as

$U_j U_{j+1} = \mathrm{reshape}\,(L(U_j) \times R(U_{j+1})) \in \mathbb{R}^{R_{j-1} \times (I_j I_{j+1}) \times R_{j+1}}.$  (8)

Thus, the tensor connect product of $n$ MPSs is

$U = U_1 \cdots U_n \in \mathbb{R}^{R_0 \times (I_1 \cdots I_n) \times R_n}.$  (9)

The tensor connect product gives the product rule for products of third-order tensors, just as the matrix product does for second-order tensors. In the matrix case, $U_j \in \mathbb{R}^{1 \times I_j \times R_j}$ and $U_{j+1} \in \mathbb{R}^{R_j \times I_{j+1} \times 1}$; thus the tensor connect product gives the vectorized form of the matrix product.

We then define an operator $f$ that acts on $U$. Let $U \in \mathbb{R}^{R_0 \times (I_1 \cdots I_n) \times R_n}$ be a third-order tensor with $R_0 = R_n$, and let $f$ be a reshaping operator that maps $U$ to a tensor of dimension $\mathbb{R}^{I_1 \times \cdots \times I_n}$, denoted as

$X = f(U).$  (10)

3. Formulation and Algorithm for Tensor Ring Completion

3.1. Problem Formulation

Given a tensor $X \in \mathbb{R}^{I_1 \times \cdots \times I_n}$ that is partially observed at the locations $\Omega$, let $P_\Omega \in \mathbb{R}^{I_1 \times \cdots \times I_n}$ be the corresponding binary tensor, in which 1 represents an observed entry and 0 represents a missing entry. The problem is to find a low tensor ring rank (TR-Rank) approximation of the tensor $X$, denoted $f(U_1 \cdots U_n)$, such that the recovered tensor matches $X$ on the observed entries. This problem is referred to as the tensor completion problem under the tensor ring model, and is equivalent to the following problem:

$\min_{U_{i:i=1,\cdots,n}} \|P_\Omega \circ (f(U_1 \cdots U_n) - X)\|_F^2.$  (14)

Note that the tensor ring rank $R$ is predefined, and the dimension of each $U_{i:i=1,\cdots,n}$ is $\mathbb{R}^{R \times I_i \times R}$.

To solve this problem, we propose an algorithm, referred to as Tensor Ring completion by Alternating Least Squares (TR-ALS), that proceeds in two steps.

• Choose an initial starting point by using Tensor Ring Approximation (TRA). This initialization algorithm is detailed in Section 3.2.

• Update the solution by applying Alternating Least Squares (ALS), which alternately (in a cyclic order) estimates one factor, say $U_i$, keeping the other factors fixed. This algorithm is detailed in Section 3.3.
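The tensor connect product in (8) and the reshaping operator $f$ in (10) can be sketched in NumPy as follows. The names `tr_connect` and `tr_to_full` are illustrative choices of ours, and the sketch assumes the row-major (lexicographic) vectorization used throughout the paper.

```python
import numpy as np

def tr_connect(U, V):
    """Tensor connect product (Eq. (8)): reshape(L(U) @ R(V)),
    giving a tensor of shape (R_{j-1}, I_j * I_{j+1}, R_{j+1})."""
    r0, m, k = U.shape
    k2, p, r2 = V.shape
    assert k == k2, "bond dimensions must match"
    # L(U) is the (R_{j-1} I_j) x R_j matrix, R(V) is R_j x (I_{j+1} R_{j+1}).
    P = U.reshape(r0 * m, k) @ V.reshape(k2, p * r2)
    return P.reshape(r0, m * p, r2)

def tr_to_full(factors):
    """The operator f in Eq. (10): connect all MPS factors, trace out
    the closing bond (R_0 = R_n), and reshape to I_1 x ... x I_n."""
    U = factors[0]
    for V in factors[1:]:
        U = tr_connect(U, V)
    full = np.einsum('rjr->j', U)  # trace over the ring bond
    return full.reshape([W.shape[1] for W in factors])

rng = np.random.default_rng(0)
Us = [rng.standard_normal((3, 4, 3)) for _ in range(3)]
X = tr_to_full(Us)
# Entry-wise consistency with Eq. (7).
direct = np.trace(Us[0][:, 1, :] @ Us[1][:, 2, :] @ Us[2][:, 3, :])
assert np.isclose(X[1, 2, 3], direct)
```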
Algorithm 1 Tensor Ring Approximation (TRA)
Input: Missing-entry zero-filled tensor $X \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_n}$, TR-Rank $R$, standard deviation $\sigma$ of the added normal random variable
Output: Tensor ring decomposition $U_{i:i=1,\cdots,n} \in \mathbb{R}^{R \times I_i \times R}$
1: Apply mode-1 canonical matricization to $X$ and get the matrix $X_1 = X_{\langle 1 \rangle} \in \mathbb{R}^{I_1 \times (I_2 I_3 \cdots I_n)}$

Each slice $U_k(:, i_k, :)$ is obtained by minimizing least squares objectives of the form

$\|\mathrm{vec}((B^{(k)}_{\Omega_i}(:, j, :))^T)^T \mathrm{vec}(Z) - X^k_{\Omega[k]}(i_k, j)\|_F^2.$  (18)

We further apply mode-$k$ unfolding, which gives the equivalent problem

$U_k = \operatorname{argmin}_{Y} \|P^k_{\Omega[k]} \circ f(Y U_{k+1} \cdots U_n U_1 \cdots U_{k-1})_{[k]} - X^k_{\Omega[k]}\|_F^2,$  (19)

where $P^k_{\Omega[k]}$, $f(Y U_{k+1} \cdots U_n U_1 \cdots U_{k-1})_{[k]}$ and $X^k_{\Omega[k]}$ are the mode-$k$ unfoldings of the corresponding tensors.
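Each such masked least-squares sub-problem reduces to an ordinary least-squares solve restricted to the observed rows. Below is a minimal sketch under illustrative shapes; the matrix `B` stands in for the design matrix built from the fixed factors, and the names are ours.

```python
import numpy as np

def masked_lstsq(B, x_obs, mask):
    """Solve min_z || mask * (B @ z - x_obs) ||_2^2 by restricting the
    design matrix to the observed rows, as in the sub-problems of (19)."""
    rows = np.nonzero(mask)[0]  # observed positions only
    z, *_ = np.linalg.lstsq(B[rows], x_obs[rows], rcond=None)
    return z

# Toy check: with everything observed this is ordinary least squares.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 8))
z_true = rng.standard_normal(8)
x = B @ z_true
z = masked_lstsq(B, x, np.ones(50, dtype=bool))
assert np.allclose(z, z_true)
```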
Then the problem of solving for $U_k(:, i_k, :)$ becomes a least squares problem, and solving $I_k$ least squares problems gives the optimal solution for $U_k$. Since each $U_{i:i=1,\cdots,n}$ can be solved for by a least squares method, tensor completion under the tensor ring model can be solved by updating $U_{i:i=1,\cdots,n}$ in cyclic order until convergence. We note that the completion algorithm does not require normalization of each MPS, unlike the decomposition algorithm [18], which normalizes all the MPSs to seek a unique factorization. The stopping criterion in TR-ALS is measured via the change of the last tensor factor $U_n$, since if the last factor does not change, the other factors are unlikely to change. Details of the algorithm are given in Algorithm 2.

Algorithm 2 TR-ALS Algorithm
Input: Zero-filled tensor $X_\Omega \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_n}$, binary observation index tensor $P_\Omega \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_n}$, tensor ring rank $R$, thresholding parameter $tot$, maximum iteration $maxiter$
Output: Recovered tensor $X_R$
1: Apply the tensor ring approximation in Algorithm 1 to $X_\Omega$ to initialize the MPSs $U_{i:i=1,\cdots,n} \in \mathbb{R}^{R \times I_i \times R}$. Set the iteration parameter $\ell = 0$.
2: while $\ell \le maxiter$ do
3:   $\ell = \ell + 1$
4:   for $i = 1$ to $n$ do
5:     Solve by least squares: $U_i^{(\ell)} = \operatorname{argmin}_U \|P_\Omega \circ (f(U\, U_{i+1}^{(\ell-1)} \cdots U_n^{(\ell-1)} U_1^{(\ell)} \cdots U_{i-1}^{(\ell)}) - X)\|_F^2$
6:   end for
7:   if $\|U_n^{(\ell+1)} - U_n^{(\ell)}\|_F \,/\, \|U_n^{(\ell)}\|_F \le tot$ then
8:     Break
9:   end if
10: end while
11: Return $X_R = \mathrm{reshape}(U_1^{(\ell)} U_2^{(\ell)} \cdots U_{n-1}^{(\ell)} U_n^{(\ell)})$

3.4. Complexity Analysis

Storage Complexity. Given an $n$-order tensor $X \in \mathbb{R}^{I_1 \times \cdots \times I_n}$, the total number of parameters to store is $\prod_{i=1}^{n} I_i$, which increases exponentially with the order. Under the tensor ring model, we can reduce the storage space by converting each factor (except the last) one by one to be orthonormal and multiplying the product into the next factor. Thus, storing the MPSs $U_{i:i=1,\cdots,n-1}$ with the orthonormal property requires $\sum_{i=1}^{n-1}(R^2 I_i - R^2)$ parameters, and $U_n$ requires $R^2 I_n$ parameters. The total amount of storage is therefore $R^2(\sum_i I_i - n + 1)$, where the tensor ring rank $R$ can be adjusted to fit the tensor data at the desired accuracy.

Computational Complexity. For each $U_i$, the least squares problem in (19), solved by the pseudo-inverse, has computational complexity $\max(O(P R^4), O(R^6))$, where $P$ is the total number of observations. Within one iteration, in which all $n$ MPSs are updated, the overall complexity is $\max(O(nP R^4), O(nR^6))$.

We note that tensor train completion [7] has a similar complexity to tensor ring completion. However, the tensor train rank is a vector, which is hard to tune to achieve optimal completion. The intermediate ranks in the tensor train are large in general, leading to significantly higher computational complexity for the tensor train. This is alleviated in part by the tensor ring structure, which can be parametrized by a tensor ring rank that is in general smaller than the intermediate ranks of the tensor train. In addition, the single rank parameter of the tensor ring structure makes it easy to characterize the performance for different ranks and to tune the rank in practical applications. The lower ranks lead to lower computational complexity of data completion under the tensor ring structure as compared to the tensor train structure.

4. Numerical Results

In this section, we compare our proposed TR-ALS algorithm with the tensor train completion by alternating least squares (TT-ALS) algorithm [7], which solves tensor completion by alternating least squares in the tensor train format. The SiLRTC algorithm is another tensor train completion algorithm, proposed in [16], in which the tensor train rank is tuned based on the dimensionality of the tensor. It is selected for comparison as it shows good recovery in image completion [16]. The evaluation merit we consider is the Recovery Error (RE). Let $\hat{X}$ be the recovered tensor and $X$ be the ground truth. The recovery error is defined as

$\mathrm{RE} = \frac{\|\hat{X} - X\|_F}{\|X\|_F}.$

Tensor ring completion by alternating least squares (TR-ALS) is an iterative algorithm; the maximum number of iterations, $maxiter$, is set to 300. Convergence is captured by the change of the last factorization term $U_n$, where the error tolerance is set to $10^{-10}$.

In the remainder of the section, we first evaluate the completion results for synthetic data. Then we validate the proposed TR-ALS algorithm on image completion, YaleFace image-set completion, and video completion.

4.1. Synthetic Data

In this section, we consider the completion of a 4-order tensor $X \in \mathbb{R}^{20 \times 20 \times 20 \times 20}$ with TR-Rank 8, without loss of generality. The tensor is generated by a sequence of connected third-order tensors $U_{i:i=1,\cdots,4} \in \mathbb{R}^{8 \times 20 \times 8}$, where every entry of $U_i$ is sampled independently from a standard normal distribution.

TT-ALS is considered as a comparison method to show the difference between the tensor train and tensor ring models.
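The synthetic set-up described above can be sketched as follows; the 30% observation ratio is one of the sampled ratios from Fig. 1, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, I, R = 4, 20, 8

# Factors U_i in R^{8 x 20 x 8} with i.i.d. standard normal entries.
factors = [rng.standard_normal((R, I, R)) for _ in range(n)]

# Form the full tensor X(i1,..,i4) = tr(U1(:,i1,:) ... U4(:,i4,:)),
# i.e., the ring contraction of the four factors.
X = np.einsum('aib,bjc,ckd,dla->ijkl', *factors)

# Keep roughly 30% of the entries as the observed set Omega.
mask = rng.random(X.shape) < 0.3
X_obs = np.where(mask, X, 0.0)
```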
Two different tensor train ranks are chosen for the comparisons. The first tensor-train ranks are chosen as [8, 8, 8], and the completion with these ranks is called low rank tensor train (LR-TT) completion. The second tensor-train ranks are chosen as double the first ([16, 16, 16]), and the completion with these ranks is called high rank tensor train (HR-TT) completion. Another baseline is the SiLRTC algorithm proposed in [16], where the rank is adjusted according to the dimensionality of the tensor data, and a heuristic factor of f = 1 in the proposed algorithm of [16] is selected for testing.

Fig. 1a shows the completion error of TR-ALS, LR-TT, HR-TT, and SiLRTC for observation ratios from 10% to 60%. TR-ALS shows the lowest recovery error compared with the other algorithms, and its recovery error drops to $10^{-10}$ for observation ratios larger than 14%. The large completion errors of all the tensor train algorithms at every observation ratio show that the tensor train algorithms cannot effectively complete tensor data generated under the tensor ring model. Fig. 1b shows the convergence of TR-ALS under sampling ratios 10%, 15%, 20%, 30%, 40%, and 50%; the plot indicates that the higher the observation ratio, the faster the algorithm converges. When the observation ratio is lower than 10%, the tensor with missing data cannot be completed under the proposed set-up. The fast convergence of the proposed TR-ALS algorithm indicates that alternating least squares is effective in tensor ring completion.

4.2. Image Completion

In this section, we consider the completion of the RGB Einstein image [1], treated as a 3-order tensor $X \in \mathbb{R}^{600 \times 600 \times 3}$. A reshaping operation is applied to transform the image into a 7-order tensor of size $\mathbb{R}^{6 \times 10 \times 10 \times 6 \times 10 \times 10 \times 3}$. Reshaping low order tensors into high order tensors is a common practice in the literature and has shown improved performance in classification [13] and completion [16].

Fig. 2a shows the recovery error versus rank for TR-ALS and TT-ALS when the percentage of data observed is 5%, 10%, 20%, or 30%. At all considered ranks, TR-ALS completes the image with better accuracy than TT-ALS. For any given percentage of observations, the recovery error first decreases as the rank increases, which is caused by the increased information being captured by the increased number of parameters in the tensor structure. The recovery error then starts to increase after a thresholding rank, which can be ascribed to over-fitting. The plot also indicates that the higher the observation ratio, the larger the thresholding rank, which to the best of our knowledge is reported for the first time. Fig. 2b shows the recovered Einstein image when 10% of the pixels are randomly observed. TR-ALS with rank 28 gives the best recovery accuracy among the considered ranks.

4.3. YaleFace Dataset Completion

In this section, we consider the Extended YaleFace Dataset B [6], which includes 38 people with 9 poses under 64 illumination conditions. Each image has size 192 × 168, and we down-sample each image to 48 × 42 for ease of computation. We consider the image subsets of 38 people under 64 illuminations with 1 pose by formatting the data into a 4-order tensor in $\mathbb{R}^{48 \times 42 \times 64 \times 38}$, which is further reshaped into an 8-order tensor $X \in \mathbb{R}^{6 \times 8 \times 6 \times 7 \times 8 \times 8 \times 19 \times 2}$. We consider the case when 10% of the pixels are randomly observed. YaleFace set completion is considered to be harder than single-image completion, since features under different illuminations and across human faces are harder to learn than information from the color channels of images. Table 1 shows that for every considered rank, TR-ALS recovers the data better than TT-ALS, and the best completion result in the given set-up is 16.25% for TR-ALS as compared with 25.55% given by TT-ALS. Further, we reshape the data into an 11-order tensor and a 4-order tensor to evaluate the effect of the reshaped tensor size on tensor completion. The result in Table 1 shows that, in the given reshaping set-up, reshaping the data from a 4-order tensor to an 8-order tensor significantly improves the performance of tensor completion by decreasing the recovery error from 21.48% to 16.25%. However, further reshaping to an 11-order tensor slightly degrades the completion performance, resulting in an increased recovery error of 16.34%.

Fig. 3 shows the original images, missing images, and recovered images using the TR-ALS and TT-ALS algorithms for ranks of 10, 20, and 30, where the completion results given by TR-ALS better capture the detailed information from the images and recover them with better resolution.

4.4. Video Completion

The video data we use in this section is a high speed camera video of a gun shot [2]. It is downloaded from Youtube with 85 frames in total, and each frame consists of a 100 × 260 × 3 image. Thus the video is a 4-order tensor of size 100 × 260 × 3 × 85, which is further reshaped into an 11-order tensor of size 5 × 2 × 5 × 2 × 13 × 2 × 5 × 2 × 3 × 5 × 17 for completion. Video is multi-dimensional data with different color channels and a time dimension in addition to the 2D image structure.

In Table 2, we show that TR-ALS achieves a 6.25% recovery error when 10% of the pixels are observed, which is much better than the best recovery error of 14.83% achieved by TT-ALS. The first frame of the video is shown in Fig. 4, where the first row shows the original frame and the completed frames by TR-ALS, and the second row shows the frame with missing entries and the frames completed by TT-ALS. The resolution, and the display of the bullets and the smoke, depict that the proposed TR-ALS achieves better
Figure 1: Completion for synthetic data. The synthetic data is a 4th-order tensor of dimension 20 × 20 × 20 × 20 with TR-Rank 8. (a) Recovery error (log 10) versus observation ratio for TR-ALS, LR-TT, HR-TT, and SiLRTC; the plots are the average of 10 experiments and the error bars mark one standard deviation. (b) Convergence plot for TR-ALS under observation ratios from 0.1 to 0.5.
Figure 2: Completion for the Einstein image. The Einstein image is of size 600 × 600 × 3, and is reshaped into a 7-order tensor of size 6 × 10 × 10 × 6 × 10 × 10 × 3 for tensor ring completion. (a) Recovery error versus rank for TR-ALS and TT-ALS under observation ratios 5%, 10%, 20%, 30%. (b) Einstein image completion when 10% of pixels are randomly observed: (a) and (f) are the original Einstein image and the Einstein image with 10% randomly observed entries; (b)-(e) are the completed images via TR-ALS with TR-Rank 2, 10, 18, 28 and completion errors 33.97%, 14.03%, 10.83%, 14.55%, respectively; (g)-(j) are the completed images via TT-ALS with TT-Rank 2, 10, 18, 28 and completion errors 38.51%, 22.89%, 20.70%, 23.19%, respectively.
Rank                                   5        10       15       20       25       30
TT-ALS (R^{6×8×6×7×8×8×19×2})          37.08%   29.65%   27.91%   26.84%   26.16%   25.55%
TR-ALS (R^{6×8×6×7×8×8×19×2})          33.45%   24.67%   20.72%   18.47%   16.92%   16.25%
TR-ALS (R^{2×3×2×4×2×3×7×8×8×19×2})    33.73%   25.08%   21.20%   18.97%   17.34%   16.34%
TR-ALS (R^{48×42×64×38})               30.36%   26.08%   23.74%   22.22%   21.48%   21.57%
Table 1: Completion error of 10% observed Extended YaleFace data via TT-ALS and TR-ALS under ranks 5, 10, 15, 20, 25, 30.
Rank 10 15 20 25 30
TT-ALS 19.16% 14.83% 16.42% 16.86% 16.99%
TR-ALS 13.90% 10.12% 8.13% 6.88% 6.25%
Table 2: Completion error of 10% observed Video data via TT-ALS and TR-ALS under rank 10, 15, 20, 25, 30.
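The completion errors reported in Tables 1 and 2 are recovery errors in the sense defined in Section 4; a minimal sketch of the metric:

```python
import numpy as np

def recovery_error(X_hat, X):
    """RE = ||X_hat - X||_F / ||X||_F, the merit used in all experiments."""
    return np.linalg.norm(X_hat - X) / np.linalg.norm(X)

X = np.ones((4, 4))
assert recovery_error(X, X) == 0.0
# A uniform 10% overestimate gives a 10% recovery error.
assert abs(recovery_error(1.1 * X, X) - 0.1) < 1e-9
```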
Figure 3: The YaleFace dataset is sub-sampled to form a tensor of size 48 × 42 × 64 × 38, which is reshaped into an 8-order tensor of size 6 × 8 × 6 × 7 × 8 × 8 × 19 × 2 for tensor ring completion. 90% of the pixels are assumed to be randomly missing. From top to bottom are the original images, missing images, TR-ALS completed images with TR-Ranks 10, 20, 30, and TT-ALS completed images with TT-Ranks 10, 20, 30.
Figure 4: Gun Shot is a video of size 100 × 260 × 3 × 85 downloaded from Youtube, which is reshaped into an 11-order tensor of size 5 × 2 × 5 × 2 × 13 × 2 × 5 × 2 × 3 × 5 × 17 for tensor ring completion. 90% of the pixels are assumed to be randomly missing. (a) and (g) are the first frames of the original video and the missing video. (b)-(f) are the completed frames via TR-ALS using TR-Ranks 10, 15, 20, 25, 30. (h)-(l) are the completed frames via TT-ALS using TT-Ranks 10, 15, 20, 25, 30.
completion results as compared to the TT-ALS algorithm.

5. Conclusion

We propose a novel algorithm for data completion using the tensor ring decomposition. This is the first paper on data completion exploiting this structure, which is a non-trivial extension of the tensor train structure. Our algorithm exploits the matrix product state representation and uses alternating minimization over the low rank factors for completion. The evaluation of the proposed approach on a variety of datasets, including Einstein's image, the Extended YaleFace Dataset B, and video completion, demonstrates the significant improvement of tensor ring completion as compared to tensor train completion.

Deriving provable performance guarantees on tensor completion using the proposed algorithm is left as further work. In this context, the statistical machinery for proving analogous results for the matrix case [10, 8] can be used.

Acknowledgements. This work was supported in part by the National Science Foundation under grants CCF 1527486, CCF 1553075, and CCF 1319653.
References

[1] Albert Einstein image. http://orig03.deviantart.net/7d28/f/2012/361/1/6/albert_einstein_by_zuzahin-d5pcbug.jpg.
[2] Pistol shot recorded at 73,000 frames per second. https://youtu.be/7y9apnbI6GA. Published by Discovery on 2015-08-15.
[3] M. Ashraphijuo, V. Aggarwal, and X. Wang. Deterministic and probabilistic conditions for finite completability of low rank tensor. arXiv preprint arXiv:1612.01597, 2016.
[4] A. Cichocki. Tensor networks for big data analytics and large-scale optimization problems. arXiv preprint arXiv:1407.3124, 2014.
[5] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Processing Magazine, 32(2):145–163, 2015.
[6] A. S. Georghiades and P. N. Belhumeur. Illumination cone models for face recognition under variable lighting. In Proceedings of CVPR, 1998.
[7] L. Grasedyck, M. Kluge, and S. Kramer. Variants of alternating least squares tensor completion in the tensor train format. SIAM Journal on Scientific Computing, 37(5):A2424–A2450, 2015.
[8] M. Hardt. On the provable convergence of alternating minimization for matrix completion. CoRR, abs/1312.0925, 2013.
[9] S. Holtz, T. Rohwedder, and R. Schneider. On manifolds of tensors of fixed TT-rank. Numerische Mathematik, 120(4):701–731, 2012.
[10] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 665–674. ACM, 2013.
[11] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[12] J. Li, J. Choi, I. Perros, J. Sun, and R. Vuduc. Model-driven sparse CP decomposition for higher-order tensors. In Parallel and Distributed Processing Symposium (IPDPS), 2017 IEEE International, pages 1048–1057. IEEE, 2017.
[13] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pages 442–450, 2015.
[14] R. Orús. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics, 349:117–158, 2014.
[15] I. V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.
[16] H. N. Phien, H. D. Tuan, J. A. Bengua, and M. N. Do. Efficient tensor completion: Low-rank tensor train. arXiv preprint arXiv:1601.01083, 2016.
[17] M. Vasilescu and D. Terzopoulos. Multilinear image analysis for face recognition. In Proceedings of the International Conference on Pattern Recognition (ICPR 2002), 2:511–514, 2002. Quebec City, Canada.
[18] Q. Zhao, G. Zhou, S. Xie, L. Zhang, and A. Cichocki. Tensor ring decomposition. arXiv preprint arXiv:1606.05535, 2016.