
Machine Learning with Applications 12 (2023) 100462


Reducing MEG interference using machine learning


Sammi Hamdan a, Kyle DuBray a, Jordan Treutel a, Rajendra Paudyal b, Khem Poudel a,∗

a Middle Tennessee State University, 1301 E Main St, Murfreesboro, TN 37132, United States of America
b George Mason University, 4400 University Drive, Fairfax, VA 22030, United States of America

ARTICLE INFO

Dataset link: https://10.24433/CO.7062415.v1

Keywords: TensorFlow; MNE-Python; Supervised learning; Denoising Autoencoder (DAE); React

ABSTRACT

Magnetoencephalography (MEG) is a non-invasive imaging technique that measures the naturally occurring electrical activity of the brain. A MEG signal contains important information about the health of the brain and can be used to detect abnormalities that could point to a neurological disease. MEG sensors are very sensitive, which makes them very susceptible to noise, and denoising these signals efficiently makes analyzing the data much easier. In this paper, we utilize several components to obtain, denoise, and store MEG data. First, data is submitted through a React application, which stores the raw data, along with user information, in a MySQL database. The data then passes through a 9-layer Denoising Autoencoder (DAE), and the output is stored, together with its noisy version, in a separate MySQL database. Passing a signal through the model increased its SNR by a maximum of 88% and, on average, by 45.63%. Besides providing neurologists valuable information regarding the brain, the system also serves as an easily accessible tool for viewing and cleaning MEG data.

1. Introduction

Magnetoencephalography (MEG) is a non-invasive functional brain-imaging technique that measures the magnetic field created by the naturally occurring electrical activity of the brain. It commonly uses arrays of special magnetometers, called SQUIDs (superconducting quantum interference devices), that are placed around the head of a patient and do not make physical contact with the scalp (Hämäläinen, Hari, Ilmoniemi, Knuutila, & Lounasmaa, 1993; Singh, 2014). It is commonly combined with magnetic resonance imaging (MRI), the combination of which is called magnetic source imaging (MSI). In the United States, the approved uses for MEG include preoperative and intraoperative brain mapping, particularly for patients with epilepsy. In many cases, MEG is diagnostically superior to electroencephalography (EEG) due to its lack of distortion and improved spatial and temporal resolution (Singh, 2014; Tovar-Spinoza, Ochi, Rutka, Go, & Otsubo, 2008).

One of the qualities that make MEG signals important to analyze is their reflection of real-time information transfer between different neurons in the brain. While EEG signals have the same quality, their analysis has been limited to the temporal part of the information, since EEG is very sensitive to changes in electrical conductivity (Hansen, Kringelbach, & Salmelin, 2010). MEG signals do not face this issue, since they are based on magnetic fields rather than electric fields. Thanks to this, MEG can measure brain activity from outside the head nearly as well as from inside it. This same quality, however, creates a complication: the MEG is sensitive to tangentially oriented currents that are close to the sensors, such as those in the cortex (Hansen et al., 2010).

There are various types of interference that the MEG can face. Magnetic materials, electric currents, and radio-frequency signals can all cause interference (Hansen et al., 2010). The movement of magnetic materials within the same room as the MEG can interfere with the recording. Although a material may not pose a problem if immobilized, immobilization is very difficult, and the smallest movement of the material can affect the recording. Because of this, only non-magnetic materials can be present inside the room when the MEG is in operation. As for electric currents, moving electricity creates a magnetic field whose strength is directly proportional to the strength of the current and to the surface area of the current loop (Hansen et al., 2010). To keep this field as weak as possible, the current is kept very weak and the loop very small. Radio frequencies are the worst of the three: they can decrease the modulation depth, increase the white-noise level, and introduce a DC shift in the output signal (Hansen et al., 2010). The devices that create these frequencies do not even need to be in the same room as the MEG to affect the recording. To combat this, devices that emit radio frequencies, such as cell phones, cannot be allowed in the room, and all cables entering the room must pass through a low-pass filter to keep radio frequencies out (Hansen et al., 2010). Even

∗ Corresponding author.
E-mail addresses: sah9j@mtmail.mtsu.edu (S. Hamdan), krd4d@mtmail.mtsu.edu (K. DuBray), jat7h@mtmail.mtsu.edu (J. Treutel), rpaudyal@gmu.edu
(R. Paudyal), khem.poudel@mtsu.edu (K. Poudel).

https://doi.org/10.1016/j.mlwa.2023.100462
Received 3 February 2023; Received in revised form 15 March 2023; Accepted 15 March 2023
Available online 18 March 2023
2666-8270/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).

Fig. 1. MEG signals graphed over time, along with snapshots of the activity of the scalp.

if one implements all these methods, interference can still be present in the signal due to biological artifacts. Eye blinks, swallowing, and heart sounds are prime examples. Unlike environmental interference, biological artifacts cannot be so easily controlled, since they come from the body. For this reason, it is important to have an efficient denoising algorithm to filter out this kind of noise.

Fig. 1 shows the measurements taken by 203 MEG sensors over a 0.5-second interval. The scalp topographs branching from the signals depict the electrical activity of the brain at one time point.

Machine learning is an application of AI that allows machines to adapt and learn from experience without being explicitly programmed (Selig, 2022). What makes it so useful is that it can find patterns and correlations in data much faster than a human can. For example, one research team used machine learning to detect abnormalities in phonocardiograph (PCG) signals, which helped them classify each PCG signal (Chowdhury, Poudel, & Hu, 2020). Another team used multi-view learning, a type of machine learning, together with MEG data for basic mind reading, succeeding a little over 50% of the time (Klami et al., 2011). One group addressed a topic similar to ours, except they used a denoising autoencoder to solve the electromagnetic source imaging (ESI) problem; their model was able to robustly estimate source signals under a variety of source configurations (Huang et al., 2021). Yet another team used machine learning to localize MEG brain signals, which could help create new treatments and assistive technologies (Pantazis & Adler, 2021). Machine learning is thus a very useful research tool.

A variety of noise reduction methods have been applied to MEG data. One was a combination of Kalman filtering and factor analysis (Okawa & Honda, 2005), which obtained 90% goodness-of-fit (GOF) in dipole fitting after averaging on the order of 100 trials for a typical evoked field. Another was Independent Component Analysis (ICA) combined with categorization approaches (Rong & Contreras-Vidal, 2006); an automatic ART-2 network categorization method achieved very high identification rates and correctness at a vigilance level of 0.97. Wavelet transformation, combined with multiresolution signal decomposition and thresholding, was another effective method, given by Abhisek Ukil for denoising and frequency analysis of MEG signals (Ukil, 2012); the db4 wavelet obtained an SNIR of 4.3841 dB for a post-stimulus period and 2.7804 dB for a pre-stimulus period. The SOUND algorithm also performs well in reducing noise in MEG data (Mutanen, Metsomaa, Liljander, & Ilmoniemi, 2018). In one case, it improved the SNR of MEG data by 990%, thanks in part to the high noise amplitude in various channels.

REST (representational state transfer) APIs (application programming interfaces) are conventionally used for data transfer and are, by extension, the underlying structure of the World Wide Web. They use a uniform set of operations (POST, GET, PUT, DELETE, OPTIONS, HEAD) that standardizes the actions available to developers (Rodríguez, Baez, Daniel, Casati, Trabucco, Canali, & Percannella, 1970). Utilizing REST APIs allows the creation of client–server architectures that are flexible enough for numerous applications using the HTTP protocol. Although this client–server architecture may increase development complexity compared to traditional applications that run on a single machine, the ability to transfer data without the need for database replication can be beneficial in certain situations (Tarkowska, Carvalho-Silva, Cook, Turner, Finn, & Yates, 2018).

The target data type for MEG readings is the FIF (Functional Image File) format, smaller files of which can be stored in an SQL database with the BLOB (binary large object) data type; the LONGBLOB type supports files of up to 4 GB. In some cases, such files might be better stored in a file system rather than a relational database. The study by Sears et al. showed that for files under 256 kilobytes an SQL database tends to have a clear advantage, while for files over 1 MB a file system will most likely have the advantage. Resilience to fragmentation also plays an important role in the ideal storage method, with file systems handling fragmentation better (Sears, Ingen, & Gray, 2006). In the case of our MEG data, no file is expected to need frequent editing, if any, so fragmentation is of little concern.

2. Methods

The full stack of the application includes two main parts: the web application and the machine learning model. The two parts handle data differently and therefore use separate databases. Fig. 2 contains an overview diagram.

2.1. Web application

A front-facing web application is used to upload and view MEG data among projects that users can create and assign other users to. These actions are made possible through database queries that the front end can make with the use of back-end routes shown in Fig. 3. Login authentication is required for all users so their respective permissions

can be retrieved from the database. JSON Web Tokens (JWTs) are used on the back end for authentication persistence on any given page, as well as for protection of private back-end routes. MEG data can be uploaded to the server as an FIF file, then associated with projects so other users can access the data. Through the use of our API, the data can be uploaded to the server and run through the denoising machine learning algorithm.

Fig. 2. Visual layout of the project.

2.2. Denoising autoencoder and binary cross-entropy loss function

An autoencoder is a machine learning model that takes input, deconstructs it, and then attempts to reconstruct it. It is a neural network, so it can have many layers, each holding up to hundreds of neurons. Each neuron in a layer passes a signal to a neuron in the next layer, and each neuron has a threshold value in the form of an activation function that decides whether the signal is passed to the next connected neuron. A neuron in one layer is connected to every neuron in the next layer, and these connections are measured through weights: parameters that transform the input data within the hidden layers. The layers can be grouped into two parts, the encoder and the decoder. The encoder compresses the input down a dimension, outputting the core features of the input; the decoder takes the core features and attempts to reconstruct the original input from them. If the decoder portion is removed, the autoencoder can work as a feature extractor. For training purposes, the original input is used to validate and test the accuracy of the reconstruction. As we can see from Fig. 4, input data is fed into the neurons of the input layer; the output of the input layer is taken as input to the first hidden layer, and this process continues until the final layer, the output layer, which gives the final prediction or reconstruction of the data. What makes a denoising autoencoder different from a regular autoencoder is that instead of using the original data for validation, the clean (not noisy) data is used: the noisy data is taken as input, and the model attempts to transform it into the clean version. The binary cross-entropy loss function is a good method for comparing the reconstruction with the desired reconstruction.

The binary cross-entropy loss function is a metric used to determine how far a predicted value is from the true value. The function is expressed as:

H_p = -(1/N) Σ_{i=1}^{N} [ y_i · log(p(y_i)) + (1 - y_i) · log(1 - p(y_i)) ]   (1)

where p(y_i) is the probability of a sample point being the correct value (Godoy, 2022). This loss function is usually used for deep learning and is good for classification purposes. After the loss function is applied, a backpropagation algorithm is used to adjust the parameters of the model for further training: by looking at the loss, the algorithm adjusts the weights and biases to decrease it, although convergence is not guaranteed.

In Fig. 4, we can see how the encoder and decoder are arranged. An important thing to note is that the encoder and decoder can each have any number of hidden layers. The area between the encoder and decoder is called a "bottleneck", because the input data is squished, or compressed, to fit within the lower number of nodes; that area is where the core features can be found. On the right, we can see how the reconstructed data is compared with the true output, which is the clean data.

2.3. Signal-to-Noise Ratio (SNR)

SNR gives the strength of the signal compared to the noise, which makes it a good metric for determining how well a denoising technique works. It is given by:

SNR = 10 · log10(P_s / P_n)   (2)

where P_s refers to the power of the signal and P_n refers to the power of the noise. This metric is typically given in decibels (dB), which is what this equation calculates. The higher the SNR, the better the technique.

2.4. Min-max normalization

For the binary cross-entropy loss function to work properly, the input to the model must be between 0 and 1; if it is not, the loss could be inaccurate (Versloot, 2022). Min-max normalization solves this issue by taking the minimum and maximum values in a dataset and mapping them to 0 and 1 respectively, with the other values scaled to fit between them:

x_scaled = (x - min(x)) / (max(x) - min(x))   (3)

where x_scaled is the normalized value (Loukas, 2020).

2.5. MySQL database

The Entity-Relationship Diagram (ERD) for the database is shown in Fig. 5, which includes the tables, attributes, relationships, and cardinality constraints of the schema. The user table is used for logging in and for associating users with projects or events. In this schema, a project simply consists of MEG event data stored in the event table, which uses the LONGBLOB data type to represent binary data in the form of an FIF file. When a user creates an account through the web application, the password is hashed and stored in the user table along with the user's name, email, and organization. Any user who creates a project is also considered an "admin", with the ability to add other users to the project (either as a regular member or as an admin member). Any time a user is added to a project, they also receive a notification, stored in the notification table and retrieved on login. This is achieved with a single query, using a stored procedure in the database, called by the back end, that both adds the requested user to the works_on table and writes a notification for that user in the notification table. The TensorFlow model was initially trained with directly uploaded files, but future event data can be stored in the event table (either raw or denoised).

3. Results and analysis

3.1. Dataset

In this paper, MNE-Python datasets (Gramfort, Luessi, Larson, Engemann, Strohmeier, Brodbeck, & Hämäläinen, 2013) were used to test the model. MNE-Python contains datasets that are useful for practicing and testing many preprocessing methods. The artifacts stand out visually when the signals are graphed, and tutorials are provided that use these datasets for a variety of different methods.
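The three quantities defined in Sections 2.2–2.4 (Eqs. (1)–(3)) are straightforward to compute directly. The sketch below is ours, for illustration only; the function and variable names are not taken from the paper's code:

```python
import numpy as np

def min_max_normalize(x):
    # Eq. (3): map the minimum of x to 0 and the maximum to 1.
    return (x - x.min()) / (x.max() - x.min())

def snr_db(signal, noise):
    # Eq. (2): 10 * log10 of the ratio of signal power to noise power.
    p_s = np.mean(signal ** 2)
    p_n = np.mean(noise ** 2)
    return 10 * np.log10(p_s / p_n)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Eq. (1): mean BCE between targets in [0, 1] and predicted probabilities.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Example: a clean sinusoid plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 10 * t)
noise = 0.3 * rng.standard_normal(t.size)
noisy = clean + noise

print(round(snr_db(clean, noise), 2))  # SNR of the noisy signal in dB
scaled = min_max_normalize(noisy)
print(scaled.min(), scaled.max())      # 0.0 1.0
```

Note that `snr_db` needs access to the noise component separately from the signal, which is why simulated data (where the clean signal is known) is convenient for evaluating a denoiser.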


Fig. 3. REST API request–response flow.
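The request–response flow of Fig. 3 relies on JWTs for authentication persistence (Section 2.1). As an illustration of the mechanism only — this is not the application's actual back-end code, and a production system should use a maintained library such as PyJWT — an HS256 token can be signed and verified with the standard library:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    head = _b64url(json.dumps(header, separators=(",", ":")).encode())
    body = _b64url(json.dumps(payload, separators=(",", ":")).encode())
    sig = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    return f"{head}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes):
    # Returns the payload if the signature checks out, otherwise None.
    head, body, sig = token.split(".")
    expected = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        return None
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"user_id": 42, "admin": False}, b"server-secret")
print(verify_jwt(token, b"server-secret"))  # {'user_id': 42, 'admin': False}
print(verify_jwt(token, b"wrong-secret"))   # None
```

The server keeps `secret` private; any route protected by the back end can then validate the token from a request header without a database lookup, which is what makes the authentication "persist" across pages.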

Table 1
General design of the denoising autoencoder.

Hyperparameters        Values
# of layers            9
# of hidden layers     7
Activation function    ReLU for hidden layers, sigmoid for output
Loss function          Binary cross-entropy
Optimizer              Adam

Table 2
Summary of results.

Trials    Before cleaning (SNR dB)    After cleaning (SNR dB)
1         5.98                        10.08
2         11.32                       14.81
3         9.16                        11.16
4         8.27                        9.94
5         9.26                        12.66
6         5.75                        9.90
7         11.22                       14.11

Fig. 4. General design of a denoising autoencoder.

3.2. Web application

The current architecture for the web application proves to be viable for user and project separation. Further work is needed to determine best practices for transfer and storage of the binary FIF file. A proprietary storage method, such as array representation with specific parsing and display methods, might be needed if a database is to be utilized for the MEG data; otherwise, a file system could prove more useful.

Other considerations for the application include a cloud infrastructure, potentially utilizing a public solution such as AWS. A microservices-type architecture might be beneficial to further separate the concerns of user-specific and MEG data-specific services. It would also automate scaling as the application grows and serves more users and projects.

3.3. Results of denoising

The presence of various kinds of artifacts can make it extremely difficult to get any meaningful information from MEG signals, so it is necessary to eliminate any noise present in the signal. The number of layers and the number of kernels in each layer were tuned to get the best results; Table 1 shows the basic design of the model. A 9-layer sequential DAE model built with Keras was used in our research for denoising the data. Keras is the high-level API of TensorFlow, which was used to train the model quickly.

After applying min-max normalization to the noisy and clean datasets, the noisy data was fed into the model, while the clean data was used to validate the reconstruction. Seven hidden layers with 49216, 6176, 1552, 784, 1568, 6208, and 49408 parameters were implemented with the ReLU activation function for non-linearity and efficiency. The number of layers chosen gave the best results compared to models with more or fewer layers: with too many layers the model would be very complex and hard to train, and with too few it would not be able to capture enough of the core features. In the output layer, the sigmoid activation function was utilized, as was the binary cross-entropy loss function; the activation function ensured that the output was always in the range of 0 to 1. The Adam optimizer was used to minimize the loss. The model was trained using artificial MEG signals, created using a summation of sine and cosine functions plus some Gaussian noise. After training, the model was tested using the MNE datasets. In the sixth trial, the DAE was able to increase the SNR of the signal by 72%; the effect can be seen in Fig. 6.

Table 2 depicts seven noisy MEG signals, each with a different level of noise. After sending those signals through the denoising autoencoder, they became cleaner. The autoencoder was able to increase the SNR of a signal by a maximum of 88% and, on average, by approximately 45.63%.

3.4. Discussion

The main goal of this research was to analyze a possible method for denoising MEG data. A front-end application was created to take in MEG data files, preferably in the FIF format. Two MySQL databases were created to hold information about the users registered in the front-end application and about the signals, both clean and noisy. The noisy signal was sent through the denoising autoencoder and came out with some noise still remaining. Through training with simulated MEG signals, the model was able to learn the core features of MEG signals and determine what kind of data was not supposed to be reconstructed. Thanks to this, it was able to filter out much of the noise in the original MEG signal and increase the SNR.
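Artificial training signals of the kind described in Section 3.3 — a summation of sine and cosine functions plus Gaussian noise — can be generated along the following lines. This is an illustrative NumPy sketch; the frequency ranges, amplitudes, and noise level are our assumptions, not the paper's exact settings:

```python
import numpy as np

def make_clean_signal(t, rng, n_components=3):
    # Sum of a few sine and cosine components with random frequencies.
    sig = np.zeros_like(t)
    for _ in range(n_components):
        f = rng.uniform(5, 40)     # frequency in Hz (assumed range)
        a = rng.uniform(0.2, 1.0)  # amplitude (assumed range)
        sig += a * np.sin(2 * np.pi * f * t) + a * np.cos(2 * np.pi * f * t)
    return sig

def minmax(x):
    # Scale to [0, 1] so binary cross-entropy applies (Section 2.4).
    return (x - x.min()) / (x.max() - x.min())

def make_training_pair(n_samples=1000, noise_std=0.5, seed=0):
    # Returns (noisy, clean), both min-max normalized to [0, 1].
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, n_samples)
    clean = make_clean_signal(t, rng)
    noisy = clean + noise_std * rng.standard_normal(n_samples)
    return minmax(noisy), minmax(clean)

noisy, clean = make_training_pair()
print(noisy.shape, noisy.min(), noisy.max())  # (1000,) 0.0 1.0
```

Generating many such pairs with different seeds and noise levels yields an arbitrarily large training set in which the clean target is known exactly, which is what makes the supervised DAE training of Section 2.2 possible.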

Fig. 5. Entity-relationship diagram for the database.
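Pulling the design of Table 1 together, the DAE can be sketched in Keras roughly as follows. This is a sketch only: the layer widths and window length are illustrative choices of ours and do not reproduce the exact per-layer configuration reported in Section 3.3:

```python
import numpy as np
from tensorflow import keras

N_FEATURES = 768  # length of the signal window fed to the model (assumed)

# 9 layers total per Table 1: input + 7 hidden (ReLU) + sigmoid output.
model = keras.Sequential([
    keras.layers.Input(shape=(N_FEATURES,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(16, activation="relu"),  # bottleneck
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(N_FEATURES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training pairs: noisy windows as input, clean windows as targets,
# both min-max normalized to [0, 1] (here random placeholders).
rng = np.random.default_rng(0)
noisy = rng.uniform(0, 1, size=(32, N_FEATURES)).astype("float32")
clean = rng.uniform(0, 1, size=(32, N_FEATURES)).astype("float32")
model.fit(noisy, clean, epochs=1, batch_size=8, verbose=0)
denoised = model.predict(noisy, verbose=0)
print(denoised.shape)  # (32, 768)
```

Because the output activation is a sigmoid, every reconstructed sample lies in [0, 1], matching the min-max normalized targets that the binary cross-entropy loss expects.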

Fig. 6. The first graph contains the noisy signal (blue) and the pure signal (orange). The second graph contains the cleaned signal (blue) and the pure signal (orange). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

4. Conclusion

MEG signals have long been used for functional brain imaging and for detecting neurological abnormalities. Being able to process the signal accurately, without the worry of noise, is important for identifying any pattern that could point to a neurological disorder; for that reason, noise reduction techniques are heavily recommended in order to prevent incorrect conclusions. A quick approach to denoising MEG signals using denoising autoencoders is discussed in this paper, and the method is efficient at reducing noise in MEG signals. Through further training, it may be possible to combine this preprocessing method with others to create an even more effective way to reduce noise. It should be noted that this model needs to be trained with a large amount of data: autoencoders in general require a lot of training to become efficient at recreating their input. It is therefore necessary to train the model on multiple datasets with different forms of noise in future work, and other methods will be combined with this one to lessen the noise in MEG data as much as possible.

CRediT authorship contribution statement

Sammi Hamdan: Conceptualization, Methodology, Software, Data curation, Writing – original draft, Writing – review & editing. Kyle DuBray: Conceptualization, Methodology, Software, Data curation, Writing – original draft, Writing – review & editing, Visualization. Jordan Treutel: Conceptualization, Methodology, Software, Data curation, Writing, Visualization. Rajendra Paudyal: Software, Validation. Khem Poudel: Conceptualization, Methodology, Software, Supervision, Writing – review & editing.

Data availability

https://10.24433/CO.7062415.v1

References

Chowdhury, T. H., Poudel, K. N., & Hu, Y. (2020). Time-frequency analysis, denoising, compression, segmentation, and classification of PCG signals. IEEE Access, 8, 160882–160890. http://dx.doi.org/10.1109/access.2020.3020806.
Godoy, D. (2022). Understanding binary cross-entropy / log loss: A visual explanation. Retrieved January 25, 2023, from https://towardsdatascience.com/understanding-binary-cross-entropy-log-loss-a-visual-explanation-a3ac6025181a.
Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., et al. (2013). MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience, 7. http://dx.doi.org/10.3389/fnins.2013.00267.
Hämäläinen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J., & Lounasmaa, O. V. (1993). Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65(2), 413–497. http://dx.doi.org/10.1103/revmodphys.65.413.
Hansen, P. C., Kringelbach, M. L., & Salmelin, R. (2010). MEG: An introduction to methods. Oxford, New York: Oxford University Press.
Huang, G., Yu, Z. L., Wu, W., Liu, K., Gu, Z. H., Qi, F., et al. (2021). Electromagnetic source imaging via a data-synthesis-based denoising autoencoder. arXiv. Retrieved March 3, 2023, from https://arxiv.org/abs/2010.12876v5.
Klami, A., Ramkumar, P., Virtanen, S., Parkkonen, L., Hari, R., & Kaski, S. (2011). ICANN/PASCAL2 challenge: MEG mind reading — overview and results. Retrieved January 26, 2023, from https://research.cs.aalto.fi/pml/online-papers/megicann_klamietal.pdf.
Loukas, S. (2020). Everything you need to know about min–max normalization in Python. Retrieved January 25, 2023, from https://towardsdatascience.com/everything-you-need-to-know-about-min--max-normalization-in-python-b79592732b79.
Mutanen, T. P., Metsomaa, J., Liljander, S., & Ilmoniemi, R. J. (2018). Automatic and robust noise suppression in EEG and MEG: The SOUND algorithm. NeuroImage, 166, 135–151. http://dx.doi.org/10.1016/j.neuroimage.2017.10.021.
Okawa, S., & Honda, S. (2005). Reduction of noise from magnetoencephalography data. Medical & Biological Engineering & Computing. Retrieved January 25, 2023, from https://link.springer.com/article/10.1007/BF02351037.
Pantazis, D., & Adler, A. (2021). MEG source localization via deep learning. Retrieved January 25, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8271934/.
Rodríguez, C., Baez, M., Daniel, F., Casati, F., Trabucco, J., Canali, L., et al. (1970). REST APIs: A large-scale analysis of compliance with principles and best practices. Retrieved January 25, 2023, from https://link.springer.com/chapter/10.1007/978-3-319-38791-8_2.
Rong, F., & Contreras-Vidal, J. L. (2006). Magnetoencephalographic artifact identification and automatic removal based on independent component analysis and categorization approaches. Retrieved January 25, 2023, from https://pubmed.ncbi.nlm.nih.gov/16777232/.
Sears, R., Ingen, C. V., & Gray, J. (2006). To BLOB or not to BLOB: Large object storage in a database or a filesystem? Retrieved January 26, 2023, from https://www.microsoft.com/en-us/research/wp-content/uploads/2006/04/tr-2006-45.pdf.
Selig, J. (2022). What is machine learning? A definition. [Web blog post]. Retrieved January 2023, from https://www.expert.ai/blog/machine-learning-definition/.
Singh, S. (2014). Magnetoencephalography: Basic principles. Annals of Indian Academy of Neurology, 17(5), 107. http://dx.doi.org/10.4103/0972-2327.128676.
Tarkowska, A., Carvalho-Silva, D., Cook, C., Turner, E., Finn, R., & Yates, A. (2018). Eleven quick tips to build a usable REST API for life sciences. Retrieved January 25, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6292566/.
Tovar-Spinoza, Z., Ochi, A., Rutka, J., Go, C., & Otsubo, H. (2008). The role of magnetoencephalography in epilepsy surgery. Retrieved January 25, 2023, from http://dx.doi.org/10.3171/FOC/2008/25/9/E16.
Ukil, A. (2012). Denoising and frequency analysis of noninvasive magnetoencephalography sensor signals for functional brain mapping. IEEE Sensors Journal, 12(3), 447–455. http://dx.doi.org/10.1109/jsen.2010.2096465.
Versloot, C. (2022). Creating a signal noise removal autoencoder with Keras. Retrieved January 25, 2023, from https://github.com/christianversloot/machine-learning-articles/blob/main/creating-a-signal-noise-removal-autoencoder-with-keras.md.
