Tensor Ring Image Super-Resolution Method

… Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran

… reconstruction, in-painting, and so on, is an important issue in different research areas. Different methods which have been exploited for image analysis were mostly based on matrix or low order analysis. However, recent research shows the superior power of tensor-based methods for image enhancement. Method: In this article, a new method for image super-resolution using Tensor Ring decomposition is proposed. The proposed image super-resolution technique has been derived for the super-resolution of low resolution and noisy images. The new approach is based on a modification and extension of previous tensor-based approaches used for the super-resolution of datasets. In this method, a weighted combination of the original image and the resulting image of the previous stage is computed and used as a new input to the algorithm. Result: This enables the method to perform super-resolution and de-noising simultaneously. Conclusion: Simulation results show the effectiveness of the proposed approach, especially in highly noisy situations.
© 2024 Journal of Medical Signals & Sensors | Published by Wolters Kluwer - Medknow
Sedighin: TR based image enhancement
Figure 1: CANDECOMP/PARAFAC (first row) and Tucker (second row) decompositions of a third order tensor

… datasets.

Tensor Train (TT) and Tensor Ring (TR) decompositions are two members of tensor networks. In TT, an I1 × I2 × … × IN tensor decomposes into a series of third order core tensors interconnected to each other. The core tensors are of size G(n) ∈ ℝ^(R(n−1) × In × Rn), where Rn is the n-th TT rank and R0 = RN = 1, so the first and the last core tensors are two matrices.[34] TR decomposition can be considered as a generalized version of TT decomposition in which an I1 × I2 × … × IN tensor decomposes into a series of third order core tensors interconnected in a loop. The n-th core tensor, i.e., G(n), is of size R(n−1) × In × Rn, where Rn is the n-th TR rank, but with R0 = RN allowed to be greater than 1.[35] TT and TR decompositions are denoted as

X = ⟪G(1), G(2), …, G(N)⟫,

where X is an N-th order tensor to be decomposed. An example of TR (TT) decomposition is illustrated in Figure 2.

Figure 2: Tensor ring decomposition of a 4th order tensor. For tensor train decomposition, R0 = R4 = 1

TT and TR decompositions are more effective for analyzing higher order datasets. For this reason, and also for better exploiting the correlations of image pixels, the original low order image is usually reformatted into a higher order tensor using different methods. A common approach is Hankelization, in which a raw dataset is reformatted into a matrix with Hankel structure. Different Hankelization methods are the Multi-way delay-embedding transform (MDT),[24] patch Hankelization,[36] and overlapped patch Hankelization.[37] Considering this, several TT or TR based methods Hankelize the input datasets before applying TT or TR decompositions.[36,37] By applying Hankelization with overlapped patches, an I1 × I2 image is transferred into a 6th order tensor of size P × P × T1 × D1 × T2 × D2, where P is the patch size, Ti's are window sizes, Di = Ji/P − Ti + 1 with Ji = P(⌈(Ii − P)/(P − O)⌉ + 1), and O is the overlap between patches.[37]

Determining proper ranks (i.e., Ri's) is an important issue when working with TT or TR decompositions. The ranks indeed determine the size of the core tensors and can highly affect the performance of the decomposition. Several papers use predetermined fixed ranks for TT/TR decomposition. Some other papers use incremental approaches in which the ranks are increased during iterations. However, selecting very low ranks results in losing small details, while increasing the ranks too much results in a degraded output, especially in noisy cases. Therefore, when the original image is noisy, the TR ranks are usually limited to a maximum value to prevent the noise from appearing in the final image.

In this paper, we have proposed a new method for super-resolution of noisy images which allows the ranks to be increased to higher values without highly degrading the final image. To allow the ranks to be increased more, in this new method a weighted combination of the original input and the output of the previous stage is used as the new input for the next stage on the available noisy pixels, while the ranks are also increased. The approach can be considered as a generalized version of,[38] which has been previously proposed for super-resolution of noisy images. This results in a more accurate super-resolution method compared to the other existing methods when the input is noisy (details will be discussed in Section III).

The remainder of this paper is organized as follows: Notations and preliminaries are reviewed in Section II. The proposed super-resolution algorithm is presented in Section III. Simulation results are presented in Section IV and finally Section V concludes the paper.

Notations and Preliminaries

Notations in this paper are basically the same as.[25] Tensors and matrices are denoted by underlined bold capital (X) and bold capital letters (X), respectively. Unfolding of a tensor of size I1 × I2 × … × IN into a matrix of size In × I1 I2 × … I(n−1) I(n+1) … IN is called mod-n matricization and is denoted by X(n). Mod-{n} canonical unfolding of an I1 × I2 × … × IN tensor, denoted by X[n], results in an I1 I2 … In × I(n+1) … IN matrix. Hadamard or element-wise product of two tensors or two matrices of the same size is denoted by ⊛. Frobenius norm and trace of a matrix are denoted by ‖.‖F

2 Journal of Medical Signals & Sensors | Volume 14 | Issue 1 | January 2024

final result (see the simulations). This is a very important point, since in classic approaches, when the input image is noisy, the ranks (TR or Tucker ranks) cannot be highly increased. This is due to the fact that increasing the ranks allows the noise to appear in the final image. To prevent this issue, the maximum rank should be limited, which from the other side results in losing small details in the
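As a concrete illustration of the TR format described above, the following numpy sketch reconstructs a full tensor from a set of third order cores contracted in a loop, so that each entry equals the trace of a product of core slices. The core sizes and ranks below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from Tensor Ring cores.

    Each core G(n) has shape (R_{n-1}, I_n, R_n) and the cores are
    contracted in a loop, so the entry X[i1, ..., iN] equals
    trace(G1[:, i1, :] @ G2[:, i2, :] @ ... @ GN[:, iN, :]).
    """
    # Merge cores one by one: the running tensor keeps shape (R0, I1*...*In, Rn)
    out = cores[0]
    for core in cores[1:]:
        r0, m, r = out.shape
        out = np.einsum('amr,rib->amib', out, core).reshape(r0, m * core.shape[1], core.shape[2])
    # Close the loop: sum over the diagonal of the first and last rank indices
    full = np.einsum('ama->m', out)
    return full.reshape([c.shape[1] for c in cores])

# Illustrative 4th order example with TR ranks R0=4, R1=2, R2=3, R3=2, R4=4
shapes = [(4, 5, 2), (2, 6, 3), (3, 4, 2), (2, 3, 4)]
cores = [np.random.rand(*s) for s in shapes]
X = tr_reconstruct(cores)
print(X.shape)  # (5, 6, 4, 3)
```

Setting R0 = RN = 1 makes the trace trivial and recovers the TT special case mentioned in the Figure 2 caption.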
Table 1: Results of the algorithms for the super‑resolution of low resolution noisy image with σ=0.02
Original image Low resolution ‑ noisy image HaLRTC[40] FSRRA[41]
Table 2: Results of the algorithms for the super‑resolution of low resolution noisy image with σ=0.04
Original image Low resolution ‑ noisy image HaLRTC[40] FSRRA[41]
algorithm are reported in Table 4. As the results show, the proposed approach has higher performance in comparison to the other algorithms. The computed P values were less than 0.05 (P < 0.05), which shows there is a statistically significant difference between the proposed approach and the other methods.

For better understanding of the effect of α on the performance of the algorithm, the results of the previous simulations (with α = 0.5) have been compared with the situation when α = 1. By putting α = 1, the super-resolution procedure changes to the approach of.[37] The comparison has been done for two levels of noise variance and the results are shown in Table 5. For the second and fourth columns, the ranks and the results with the highest PSNRs have been reported. The proposed approach has been compared with the situation when α = 1 and the maximum rank set to the value which has the best result for α = 1, and also compared with the situation when α = 1 and the maximum rank set the same as the rank for α = 0.5. The results show that the proposed algorithm achieved its best performance at higher ranks in comparison to the situation when α = 1. In addition, putting α = 0.5 has the ability to improve the performance of the output. This higher performance is more clear for higher levels of noise. It can also be inferred that increasing the ranks to higher values for α = 1 results in decreasing the performance of the algorithm and the output image becomes noisy, while the proposed approach is more robust to the rank increment.
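The multistage idea compared in these simulations (mixing the noisy observation back into each stage's output with weight α while the rank cap grows) can be sketched as follows. Here `tr_approximate` is a hypothetical placeholder for any low-rank TR approximation routine, and the rank schedule is an illustrative assumption, not the paper's exact algorithm:

```python
import numpy as np

def run_stages(y_noisy, mask, tr_approximate, alpha=0.5,
               r_start=2, r_max=8, n_stages=7):
    """Multistage super-resolution skeleton (a sketch, not the exact algorithm).

    y_noisy: observed noisy image; mask: True where pixels are available.
    alpha = 1 re-feeds only the observation at every stage, i.e., the
    non-weighted scheme the paper compares against.
    """
    x = y_noisy.copy()
    for rank in np.linspace(r_start, r_max, n_stages).astype(int):
        # Low-rank TR approximation under the current (incremented) rank cap.
        x_hat = tr_approximate(x, rank)
        # Weighted combination on the available noisy pixels only;
        # reconstructed values are kept elsewhere.
        x = x_hat.copy()
        x[mask] = alpha * y_noisy[mask] + (1 - alpha) * x_hat[mask]
    return x
```

With α < 1, part of each stage's denoised output is retained in the next input, which is why the rank cap can be pushed higher before noise reappears.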
Table 3: Results of the algorithms for super‑resolution of low resolution noisy image with σ=0.06
Original image Low resolution‑ noisy image HaLRTC[40] FSRRA[41]
Table 4: Averaged PSNRs and SSIMs of the algorithms for the super‑resolution of 20 first cases of the dataset with
σ=0.06 and rate 2
HaLRTC[40] FSRRA[41] TT‑WOPT[42] DSR[43] MDT[24] Proposed
PSNR 16.3586±1.0410 25.1877±0.4279 25.1763±1.1106 23.6635±0.1801 25.5649±2.0908 27.7732±1.3619
SSIM 0.1643±0.0132 0.4376±0.0239 0.4181±0.0217 0.3624±0.0387 0.4733±0.0746 0.6175±0.044
The P value between the proposed approach and other methods is <0.05 (P<0.05) which shows a statistically significant difference between
the proposed approach and other methods
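For reference, the PSNR figures reported in Table 4 follow the standard definition 10·log10(peak² / MSE); a minimal sketch, assuming images scaled to a peak value of 1:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform bias of 0.01 gives MSE = 1e-4, hence 40 dB
ref = np.full((8, 8), 0.5)
est = ref + 0.01
print(round(psnr(ref, est), 2))  # 40.0
```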
The PSNRs and SSIMs of the algorithm during rank increment for α = 1 and α = 0.5 and σ = 0.06 are illustrated in Figures 3 and 4. As expected, for both cases the PSNR increases up to some rank value and then decreases for higher ranks. This is due to the presence of noise, which decreases the performance of the output for higher ranks. However, the maximum PSNR for α = 1 is achieved at smaller ranks in comparison to α = 0.5. In addition, the maximum value of PSNR for α = 1 is less than the maximum value of PSNR for α = 0.5. This shows the effectiveness of the proposed algorithm, which allows the ranks to be increased to higher values without decreasing the performance of the output. The resulting PSNRs and SSIMs of the algorithm for the super-resolution

Figure 3: PSNRs of the outputs during rank increment for a low resolution noisy input with σ = 0.06 and for α = 1 and α = 0.5
Figure 4: SSIMs of the outputs during rank increment for a low resolution noisy input with σ = 0.06 and for α = 1 and α = 0.5
Table 6: PSNRs and SSIMs, reported as (PSNR, SSIM), of the super‑resolution of the noisy image with σ=0.06 for different values of α
α=1 (Rmax=5): (28.7621, 0.6789)
α=0.75 (Rmax=6): (28.9470, 0.6875)
α=0.5 (Rmax=8): (29.0146, 0.6905)
α=0.25 (Rmax=13): (28.8881, 0.6880)
Table 7: Applying vessel segmentation algorithm to the resulting output of each algorithm
Original image Low resolution noisy image
HaLRTC[40] FSRRA[41]
TT‑WOPT[42] DSR[43]
MDT[24] Proposed
Super‑resolution rate is 2
of the noisy image with σ = 0.06 and for more different values of α are shown in Table 6. The results show that, by decreasing α to some values, the performance of the algorithm increases. However, more reduction in α can result in a high increase in the computational burden without much increasing the performance. This shows
Table 8: Super‑resolution of low resolution noisy images with rate 3 and σ=0.06
Original image Low resolution noisy image DSR[43] MDT[24] Proposed
… of deep learning-based image super-resolution on binary signal detection. J Med Imaging (Bellingham) 2021;8:065501-20.
14. Chen Y, Xie Y, Zhou Z, Shi F, Christodoulou AG, Li D. Brain MRI super resolution using 3D deep densely connected neural networks. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). Washington, D.C.: IEEE; 2018. p. 739-42.
15. Park J, Hwang D, Kim KY, Kang SK, Kim YK, Lee JS. Computed tomography super-resolution using deep convolutional neural network. Phys Med Biol 2018;63:145011.
16. Fang L, Li S, McNabb RP, Nie Q, Kuo AN, Toth CA, et al. Fast acquisition and reconstruction of optical coherence tomography images via sparse representation. IEEE Trans Med Imaging 2013;32:2034-49.
17. Christensen-Jeffries K, Brown J, Harput S, Zhang G, Zhu J, Tang MX, et al. Poisson statistical model of ultrasound super-resolution imaging acquisition time. IEEE Trans Ultrason Ferroelectr Freq Control 2019;66:1246-54.
18. Daneshmand PG, Rabbani H, Mehridehnavi A. Super-resolution of optical coherence tomography images by scale mixture models. IEEE Trans Image Process 2020;29:5662-76. [doi: 10.1109/TIP.2020.2984896].
19. Gao H, Zhang G, Huang M. Hyperspectral image superresolution via structure-tensor-based image matting. IEEE J Sel Top Appl Earth Obs Remote Sens 2021;14:7994-8007.
20. Xu Y, Wu Z, Chanussot J, Wei Z. Hyperspectral images super-resolution via learning high-order coupled tensor ring representation. IEEE Trans Neural Netw Learn Syst 2020;31:4747-60.
21. Zhang M, Sun X, Zhu Q, Zheng G. A survey of hyperspectral image super-resolution technology. In: 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. Brussels, Belgium: IEEE; 2021. p. 4476-9.
22. Dian R, Fang L, Li S. Hyperspectral image super-resolution via non-local sparse tensor factorization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 5344-53.
23. Dian R, Li S. Hyperspectral image super-resolution via subspace-based low tensor multi-rank regularization. IEEE Trans Image Process 2019;28:5135-46. [doi: 10.1109/TIP.2019.2916734].
24. Yokota T, Erem B, Guler S, Warfield SK, Hontani H. Missing slice recovery for tensors using a low-rank model in embedded space. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018. p. 8251-9.
25. Cichocki A, Lee N, Oseledets IV, Phan AH, Zhao Q, Mandic D. Low-rank tensor networks for dimensionality reduction and large-scale optimization problems: Perspectives and challenges part 1. arXiv Preprint; 2016.
26. Cichocki A, Phan AH, Zhao Q, Lee N, Oseledets I, Sugiyama M, et al. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Found Trends Mach Learn 2017;9:431-673.
27. Goulart JH, Boizard M, Boyer R, Favier G, Comon P. Tensor CP decomposition with structured factor matrices: Algorithms and performance. IEEE J Sel Top Signal Process 2015;10:757-69.
28. Battaglino C, Ballard G, Kolda TG. A practical randomized CP tensor decomposition. SIAM J Matrix Anal Appl 2018;39:876-901.
29. Veganzones MA, Cohen JE, Farias RC, Chanussot J, Comon P. Nonnegative tensor CP decomposition of hyperspectral data. IEEE Trans Geosci Remote Sens 2015;54:2577-88.
30. Yokota T, Zhao Q, Cichocki A. Smooth PARAFAC decomposition for tensor completion. IEEE Trans Signal Process 2016;64:5423-36.
31. Kim YD, Choi S. Nonnegative Tucker decomposition. In: 2007 IEEE Conference on Computer Vision and Pattern Recognition. Minneapolis, MN, USA: IEEE; 2007. p. 1-8.
32. Malik OA, Becker S. Low-rank Tucker decomposition of large tensors using TensorSketch. In: Advances in Neural Information Processing Systems. Montreal, Canada: NeurIPS; 2018. p. 31.
33. Mørup M, Hansen LK, Arnfred SM. Algorithms for sparse nonnegative Tucker decompositions. Neural Comput 2008;20:2112-31.
34. Oseledets IV. Tensor-train decomposition. SIAM J Sci Comput 2011;33:2295-317.
35. Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A. Tensor ring decomposition. arXiv Preprint; 2016.
36. Sedighin F, Cichocki A, Yokota T, Shi Q. Matrix and tensor completion in multiway delay embedded space using tensor train, with application to signal reconstruction. IEEE Signal Process Lett 2020;27:810-4.
37. Sedighin F, Cichocki A, Rabbani H. Optical coherence tomography image enhancement via block Hankelization and low rank tensor network approximation. arXiv Preprint; 2023.
38. Sedighin F, Cichocki A. Image completion in embedded space using multistage tensor ring decomposition. Front Artif Intell 2021;4:687176.
39. Hajeb Mohammad Alipour S, Rabbani H, Akhlaghi M. A new combined method based on curvelet transform and morphological operators for automatic detection of foveal avascular zone. Signal Image Video Process 2014;8:205-22.
40. Liu J, Musialski P, Wonka P, Ye J. Tensor completion for estimating missing values in visual data. IEEE Trans Pattern Anal Mach Intell 2013;35:208-20.
41. Elad M, Hel-Or Y. A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur. IEEE Trans Image Process 2001;10:1187-93.
42. Yuan L, Zhao Q, Cao J. Completion of high order tensor data with missing entries via tensor-train decomposition. In: International Conference on Neural Information Processing. Cham: Springer International Publishing; 2017. p. 222-9.
43. Peng S, Haefner B, Quéau Y, Cremers D. Depth super-resolution meets uncalibrated photometric stereo. In: Proceedings of the IEEE International Conference on Computer Vision Workshops; 2017. p. 2961-8.