
Article

Channel Estimation in the Interplanetary Internet Using Deep Learning and Federated Learning

Santiago Gonzalez-Irigoyen 1,‡, Ana C. Castillo 1,‡, Jesus A. Marroquin-Escobedo 1,‡, Marlene Martinez-Santoyo 1,‡, Juan Misael Gongora-Torres 1,‡ and Cesar Vargas-Rosales 1,‡

1 Instituto Tecnológico de Monterrey
* Correspondence: cvargas@tec.mx
‡ These authors contributed equally to this work.

Abstract: Intelligent signal processing holds great importance for the future of resilient, adaptable communications networks. The unique qualities of deep space require an interplanetary Internet to be highly autonomous, efficient, and adaptable to varying Quality of Service (QoS). Deep learning has shown great promise in the field of signal processing because it is computationally efficient, capable of handling errors from nonlinear effects (e.g., hardware impairments), and able to handle low signal-to-noise ratios. A recent survey by Pham, Q.-V., et al. [1] notes that none of the papers studied improvements in classification in the high-order modulation regime. Additionally, these papers did not explore the performance of their models in resource-limited environments. A hierarchical interplanetary Internet that imposes a variety of constraints on its nodes offers a unique opportunity to explore realistic tradeoffs in model performance. This paper seeks to leverage the processing, storage, and data transmission capabilities of each level of the interplanetary Internet through federated learning. This will reduce data redundancy between nodes and minimize overhead transmission costs on the network. The goals of this project are the following: (i) detail possible insights into future channel estimation techniques applied to noisy, nonlinear models; (ii) explore the application of deep learning models to high-order modulation schemes; (iii) quantify the resource-demand reduction resulting from the use of a deep neural network for intelligent signal processing; (iv) analyze the adaptability of an interdependent system of deep neural networks in the context of a centralized/decentralized federated learning network.

Keywords: channel estimation; distributed machine learning; federated learning

1. Introduction

A requirement for efficient utilization of the available bandwidth in communications links is the estimation of the channel's distortion of transmitted signals. The channel can distort a signal's amplitude, phase, and frequency spread, and these distortions vary with distance, frequency, and time. Variations in the time-frequency plane must nominally be characterized by known pilot waves sent at regular intervals and on the frequencies of interest.

Once these channel distortions have been characterized, the received signal can be filtered appropriately for the detection of subsequent symbol transmissions. The complete process is computationally expensive and introduces costly transmission-time and processing overheads in communications links. Statistical inference of channel states for blind estimation has shown promise; a hybrid approach leveraging machine learning could eliminate much of the typical system overhead.

Neural networks have recently been used for channel estimation, notably deep neural networks [3,4], convolutional neural networks [5], and long short-term memory networks [6]. These designs aim to reduce the inefficiencies of typical channel estimation techniques, including long cyclic prefix (CP) sequences and redundant channel estimation by multiple nodes. Likewise, the fraction of each transmission block spent on pilot symbols can be reduced by predicting the evolution of the channel noise. Additionally, the millisecond-scale channel estimation refresh required by satellites strongly penalizes time-consuming algorithms. For this reason, this paper explores the possibility of using a decentralized architecture to share local models' learnings and speed up the processing of pilot symbols.

Version October 11, 2022 submitted to Journal Not Specified https://www.mdpi.com/journal/notspecified
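To make the baseline concrete, the sketch below shows pilot-based least-squares channel estimation in pure Python; the flat-fading coefficient, pilot values, and function name are invented for illustration and are not taken from any cited system:

```python
import cmath

def ls_channel_estimate(pilots_tx, pilots_rx):
    """Least-squares estimate of a flat-fading channel coefficient h
    from known transmitted pilots and their received counterparts:
    h_hat = sum(y * conj(x)) / sum(|x|^2)."""
    num = sum(y * x.conjugate() for x, y in zip(pilots_tx, pilots_rx))
    den = sum(abs(x) ** 2 for x in pilots_tx)
    return num / den

# Hypothetical channel: attenuation 0.8, phase rotation of 30 degrees.
h = 0.8 * cmath.exp(1j * cmath.pi / 6)
pilots_tx = [1 + 0j, -1 + 0j, 1j, -1j]   # known QPSK pilot symbols
pilots_rx = [h * x for x in pilots_tx]   # noiseless for clarity

h_hat = ls_channel_estimate(pilots_tx, pilots_rx)
equalized = (h * (1 + 1j)) / h_hat       # divide out the estimated channel
```

The pilot symbols occupy transmission-block space that carries no user data, which is precisely the overhead the learned models are meant to reduce.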


In most low-altitude satellite networks, connection topologies are variable and centralized processing is not viable. The lack of a global model state for optimization by the network will require the identification of parameters that do and do not benefit neighboring satellite nodes. Previous research into particle swarm optimization shows that it is possible to obtain quicker and more accurate results than with traditional synchronous computational methods [9]. Furthermore, it gives individual nodes in a network greater flexibility to form more accurate estimations of channels and their evolution.

2. Methods

2.1. Channel estimation with neural networks

We propose conducting channel estimation with Convolutional Neural Networks (CNNs), which in previous studies have permitted quicker and more precise characterization of communications channels. The architecture should comprise convolutional layers arranged in series, followed by one or more fully connected (FC) layers. Convolutional neural networks have been used to determine and adjust for channel distortion in previous studies [5]. Convolutions will allow us to maintain some access to higher levels of abstraction in recognizing channel distortion. This is desirable given that the objective is to share a limited number of parameters to great effect on the models of other communications nodes.
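To illustrate the conv-then-FC data flow described above, the following pure-Python sketch runs a toy 1D convolutional layer followed by a single fully connected neuron; the signal, kernel, and weights are arbitrary placeholders, not a proposed architecture:

```python
def conv1d(x, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in deep
    learning frameworks) of a signal with a small kernel."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def fully_connected(x, weights, bias):
    """One FC output neuron: dot(weights, x) + bias."""
    return sum(w * v for w, v in zip(weights, x)) + bias

# Hypothetical received-signal samples and hand-picked weights,
# purely to show the conv -> ReLU -> FC pipeline.
signal = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0]
features = relu(conv1d(signal, [0.5, 0.5]))   # smoothing kernel
estimate = fully_connected(features, [1.0] * len(features), 0.0)
```

In a trained model the kernel and FC weights would be learned; the point here is that the convolutional features are a compact intermediate representation, which is what makes them candidates for sharing between nodes.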

We can use Long Short-Term Memory (LSTM) networks to recognize patterns of channel evolution. This deep learning architecture is based on the recurrent neural network (RNN). It consists of a set of recurrently connected sub-networks, known as memory blocks, that maintain their state over time and regulate information flow through non-linear gating units across several cycles. This permits the comparison of sequential signals in the context of previously received data; typical applications include speech recognition and language analysis. It has previously been used to successfully predict channel evolution over time [6].
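The gating behavior described above can be sketched as a single scalar LSTM memory block; the weights below are toy values, not learned parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM memory block. w maps each gate to
    (input weight, recurrent weight, bias); all values are toy scalars."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate state
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g    # cell state carries the long-term memory
    h = o * math.tanh(c)      # hidden state is the gated output
    return h, c

# Toy weights; a real model would learn these from pilot sequences.
w = {k: (1.0, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for sample in [0.1, 0.2, 0.3]:   # a short channel-gain time series
    h, c = lstm_cell_step(sample, h, c, w)
```

Each step folds a new channel sample into the cell state, which is how the network accumulates the context needed to extrapolate channel evolution.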

2.2. Decentralized federated learning

The main objective of our research is to identify the parameters to be shared between nodes in a satellite network to eliminate redundant optimization of the deep learning models used in channel estimation and prediction.

In Federated Learning (FL), not all data can be processed at a central server. In our model, satellite 1 predicts the channel and transmits to satellite 2, and the data sent by satellite 1 is used to train the channel estimation and prediction encoders on the latter. Satellite 2 will then communicate the learned parameters to satellite 3, reducing the computational load on any one element in the network. At scale, this should produce marked improvements across a wider network.
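A minimal sketch of this chained parameter handoff follows, assuming a simple gradient step stands in for local training and an unweighted average stands in for merging (both are placeholders for whatever update the encoders would actually use):

```python
def local_update(params, gradient, lr=0.1):
    """One hypothetical local training step on a satellite's own data."""
    return [p - lr * g for p, g in zip(params, gradient)]

def merge(own, received, weight=0.5):
    """Blend locally trained parameters with those received from the
    previous satellite in the chain (a simple federated average)."""
    return [weight * a + (1 - weight) * b for a, b in zip(own, received)]

# Satellite 1 trains locally, hands its encoder parameters to satellite 2,
# which merges them with its own update and passes the result to satellite 3.
params = [0.0, 0.0]
chain_gradients = [[1.0, -1.0], [0.5, 0.5], [-0.2, 0.1]]  # toy per-satellite values

shared = local_update(params, chain_gradients[0])   # satellite 1
for grad in chain_gradients[1:]:                    # satellites 2, 3, ...
    own = local_update(params, grad)                # local training only
    shared = merge(own, shared)                     # fold in upstream learnings
```

Only the parameter vector crosses the link, never the raw channel data, which is the property that keeps the transmission overhead bounded.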

Particle sharing, as done in PSO-PS: Parameter Synchronization with Particle Swarm Optimization for Distributed Training of Deep Neural Networks [9], can quickly inform proximal nodes in the network of the parameters found to be optimal in the channel denoising/classification encoders.

Figure 1. Sharing of learned parameters in a satellite network.
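For reference, a minimal particle swarm optimization step is sketched below; this is the generic PSO update, not the full PSO-PS synchronization scheme of [9], and all constants are illustrative:

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.4, c2=1.4, rng=None):
    """One PSO step: each particle (a candidate parameter value) moves
    toward its personal best and the swarm's global best. In PSO-PS-style
    training, the swarm best plays the role of the synchronized
    parameters shared between nodes."""
    rng = rng or random.Random(0)
    new_p, new_v = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        new_v.append(v)
        new_p.append(x + v)
    return new_p, new_v

# Toy 1-D search for a parameter whose optimum sits at 0.0.
rng = random.Random(42)
positions = [rng.uniform(-5, 5) for _ in range(4)]
velocities = [0.0] * 4
pbest = positions[:]            # personal bests start at the initial positions
gbest = min(pbest, key=abs)     # global best = closest to the optimum
initial_best = abs(gbest)
for _ in range(20):
    positions, velocities = pso_step(positions, velocities, pbest, gbest, rng=rng)
    pbest = [p if abs(p) < abs(pb) else pb for p, pb in zip(positions, pbest)]
    gbest = min(pbest, key=abs)
```

Because personal bests only ever improve, the swarm best is monotonically non-worsening, which is what makes it a safe quantity to broadcast to neighbors.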

2.3. Benchmark measurements

We propose the following benchmarks for the appropriate evaluation of the channel estimation and channel evolution models:

• Bit error rate as a function of signal-to-noise ratio (BER vs. SNR).
• Number of pilot symbols required for conventional channel estimation vs. channel estimation with deep learning.

To evaluate the impact of distributed learning and parameter sharing on the computational demand on elements in the network and on the speed of optimization, we propose the following metrics:

• Improvement in channel coherence time for a given signal-to-noise ratio (SNR) with prediction of channel evolution.
• Parameters of greatest impact for sharing in the federated learning framework.
• Predicted improvements in channel estimation and prediction metrics with learnings from a network of satellites.
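A BER vs. SNR curve of the first kind can be produced with a short Monte Carlo simulation; the sketch below uses uncoded BPSK over an AWGN channel as a stand-in for the actual modulation schemes under study:

```python
import math
import random

def bpsk_ber(snr_db, n_bits=20000, seed=1):
    """Monte Carlo BER for BPSK over AWGN at a given Eb/N0 in dB.
    With unit symbol energy, the per-dimension noise standard deviation
    is sigma = sqrt(1 / (2 * 10^(SNR_dB / 10)))."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10.0)))
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0           # BPSK mapping
        received = symbol + rng.gauss(0.0, sigma)
        decided = 1 if received > 0 else 0      # hard decision
        errors += decided != bit
    return errors / n_bits

ber_low = bpsk_ber(0.0)    # 0 dB: noisy link
ber_high = bpsk_ber(8.0)   # 8 dB: much cleaner link
```

Sweeping `snr_db` yields the BER vs. SNR curve; the deep learning estimator would replace the perfect-channel assumption implicit in the hard decision above.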

3. Conclusions

Once the basic architecture is verified, the next steps include estimating the effects of network density on learning-sharing transmission overhead and the possibly greater impact of learnings shared between more proximal nodes. If learnings from Single-Input, Single-Output (SISO) systems prove valuable for sharing between communications nodes, this opens the possibility of sharing those learnings between communications links in a Multiple-Input, Multiple-Output (MIMO) system. This would allow for greater scalability of the system without a linear increase in computational complexity. That in turn would generate more data for optimization within a single satellite and reduce the demand for learnings from other nodes.

Author Contributions: Conceptualization, A00827670 and A00829556; methodology, A01173670, A01541084, and A00829556; investigation, A01173670 and A00827670; resources, A01541084; data curation, A00829556; writing-original draft preparation, everyone; writing-review and editing, Cesar Vargas-Rosales; visualization, X.X.; supervision, Cesar Vargas-Rosales and Juan Misael Gongora-Torres. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding. 119

Data Availability Statement: Not applicable. 120



Acknowledgments: Administrative and technical support was given by Cesar Vargas-Rosales.

Conflicts of Interest: The authors declare no conflict of interest. 122

References

1. Pham, Q.-V.; Nguyen, N.T.; Huynh-The, T.; Le, L.B.; Lee, K.; Hwang, W.-J. Intelligent Radio Signal Processing: A Survey. IEEE Access 2021, 9, 83818–83850. doi: 10.1109/ACCESS.2021.3087136.
2. Liu, W.; Chen, L.; Zhang, W. Decentralized Federated Learning: Balancing Communication and Computing Costs. IEEE Transactions on Signal and Information Processing over Networks 2022, 8, 131–143. doi: 10.1109/TSIPN.2022.3151242.
3. Tang, J.; Yang, Q.; Zhang, Z. Blind Channel Estimation for MIMO Systems via Variational Inference. In IEEE International Conference on Communications, 2022; 1276–1281. doi: 10.1109/ICC45855.2022.9839066.
4. Ye, H.; Li, G.Y.; Juang, B.-H. Power of Deep Learning for Channel Estimation and Signal Detection in OFDM Systems. IEEE Wireless Communications Letters 2018, 7, 114–117. doi: 10.1109/LWC.2017.2757490.
5. Luan, D.; Thompson, J. Low Complexity Channel Estimation with Neural Network Solutions. arXiv 2022. doi: 10.48550/ARXIV.2201.09934.
6. Madhubabu, D.; Thakre, A. Long-Short Term Memory Based Channel Prediction for SISO System. In 2019 International Conference on Communication and Electronics Systems (ICCES), 2019; 1–5. doi: 10.1109/ICCES45898.2019.9002073.
7. Huynh-The, T.; Hua, C.-H.; Pham, Q.-V.; Kim, D.-S. MCNet: An Efficient CNN Architecture for Robust Automatic Modulation Classification. IEEE Communications Letters 2020, 24, 811–815. doi: 10.1109/LCOMM.2020.2968030.
8. Onoszko, N.; Karlsson, G.; Mogren, O.; Listo Zec, E. Decentralized Federated Learning of Deep Neural Networks on Non-IID Data. arXiv 2021. doi: 10.48550/ARXIV.2107.08517.
9. Ye, Q.; Han, Y.; Sun, Y.; Lv, J. PSO-PS: Parameter Synchronization with Particle Swarm Optimization for Distributed Training of Deep Neural Networks. In 2020 International Joint Conference on Neural Networks (IJCNN), 2020; 1–8. doi: 10.1109/IJCNN48605.2020.9207698.
10. Zhou, G.; Pan, C.; Ren, H.; Popovski, P.; Swindlehurst, A.L. Channel Estimation for RIS-Aided Multiuser Millimeter-Wave Massive MIMO Systems. IEEE Transactions on Signal Processing 2022, 70, 1478–1492. doi: 10.1109/tsp.2022.3158024.
11. Arti, M.K. Channel Estimation and Detection in Satellite Communication Systems. IEEE Transactions on Vehicular Technology 2016, 65. doi: 10.1109/TVT.2016.2529661.
