Optics Communications
journal homepage: www.elsevier.com/locate/optcom
Keywords: LED array; Smartphone camera; Visual-MIMO; K-means clustering; Multivariate regression model

Abstract

Color detection from a light emitting diode (LED) array using a smartphone camera is very difficult in a visual multiple-input multiple-output (visual-MIMO) system. In this paper, we propose a method to determine the LED color with a smartphone camera by applying regression analysis, employing a multivariate regression model to identify the LED color. After taking a picture of an LED array, we select the LED array region and detect each LED using an image processing algorithm. We then apply the k-means clustering algorithm to determine the number of potential colors for feature extraction of each LED. Finally, we apply the multivariate regression model to predict the color of the transmitted LEDs. We show our results for three types of environmental light condition: room environmental light, low environmental light (560 lux), and strong environmental light (2450 lux). We compare the results of our proposed algorithm in terms of the training and test R-Square (%) values and the percentage of closeness between transmitted and predicted colors, and we also report the number of distorted test data points from the distortion bar graphs in the CIE1931 color space.
© 2017 Elsevier B.V. All rights reserved.
* Corresponding author.
E-mail address: kdk@kookmin.ac.kr (K.-D. Kim).
https://doi.org/10.1016/j.optcom.2017.11.086
Received 23 August 2017; Received in revised form 28 October 2017; Accepted 29 November 2017
P.P. Banik et al. Optics Communications 413 (2018) 121–130
from (3). We calculate six more features, namely C_avg + C_std, C_avg − C_std, C_avg + C_min, C_min − C_std, C_max + C_min − C_std, and C_min + C_std. These features are added to the feature matrix F_C in (1). We also calculate one more feature for the red color channel, given in (6). Thus, the total number of features is 15 for the red color channel and 14 for the green and blue color channels. We select these features because their correlation coefficients with C_avg, C_max, and C_min are greater than 0.75.

Not all of these features are used in the multivariate regression model. We select the features whose correlation coefficient with the transmitted color channel is greater than 0.75. A high correlation value indicates that a feature has a strong linear relationship with the transmitted color [19–22]. The correlation coefficient ranges from −1 to +1, and a value close to 1 indicates a strong linear relationship between two variables. Using the selected, highly correlated features, we calculate the coefficients of the multivariate regression model. The coefficient estimation equation is given below:

\begin{bmatrix} \beta_{1,C} \\ \beta_{2,C} \\ \vdots \\ \beta_{sel,C} \end{bmatrix}
= \left( \begin{bmatrix} F_{1,C,sel} \\ F_{2,C,sel} \\ \vdots \\ F_{m,C,sel} \end{bmatrix}^{T}
\begin{bmatrix} F_{1,C,sel} \\ F_{2,C,sel} \\ \vdots \\ F_{m,C,sel} \end{bmatrix} \right)^{-1}
\left( \begin{bmatrix} F_{1,C,sel} \\ F_{2,C,sel} \\ \vdots \\ F_{m,C,sel} \end{bmatrix}^{T}
\begin{bmatrix} Y_{1,C} \\ Y_{2,C} \\ \vdots \\ Y_{m,C} \end{bmatrix} \right)    (7)

\begin{bmatrix} C_{1,pred} \\ C_{2,pred} \\ \vdots \\ C_{n,pred} \end{bmatrix}
= \begin{bmatrix} F_{1,C,sel} \\ F_{2,C,sel} \\ \vdots \\ F_{n,C,sel} \end{bmatrix}
\begin{bmatrix} \beta_{1,C} \\ \beta_{2,C} \\ \vdots \\ \beta_{sel,C} \end{bmatrix}.    (8)

In (7), β_{1,C}, β_{2,C}, …, β_{sel,C} are the coefficients of the selected features, where the suffixes 'sel' and 'C' denote the selected features and the RGB color channel, respectively; the size of the coefficient matrix is sel × 1. F_{1,C,sel}, F_{2,C,sel}, …, F_{m,C,sel} are the feature rows of the m training data points, so the feature matrix is of size m × sel, and Y_{1,C}, Y_{2,C}, …, Y_{m,C} are the transmitted colors of the training dataset, forming an m × 1 matrix. In (8), C_{1,pred}, C_{2,pred}, …, C_{n,pred} are the predicted colors for the n test data points of each color channel, forming an n × 1 matrix. After training the model, we obtain the coefficients of the selected features. We can then rewrite (8) in the compact form (9), and write the predicted colors for the training dataset in the same manner in (10):

C_{n,pred} = \beta_{1,C} F_{n,C,1} + \beta_{2,C} F_{n,C,2} + \beta_{3,C} F_{n,C,3} + \cdots + \beta_{sel,C} F_{n,C,sel}    (9)

C_{m,pred} = \beta_{1,C} F_{m,C,1} + \beta_{2,C} F_{m,C,2} + \beta_{3,C} F_{m,C,3} + \cdots + \beta_{sel,C} F_{m,C,sel}.    (10)

In (10), C_{m,pred} is the predicted output for the training dataset; β_{1,C}, β_{2,C}, …, β_{sel,C} are the coefficients of the selected input features F_{m,C,1}, F_{m,C,2}, …, F_{m,C,sel}; and the suffixes 'sel', 'm', and 'C' denote the total number of selected features, the number of training data points, and the RGB color channel, respectively.

We then use the calculated coefficients to determine the transmitted color for an unseen dataset. Here, the test dataset is called an unseen dataset because it is not used during the training session. We also calculate the R-Square values for the training and test datasets. The R-Square value represents the percentage of the total variation of the output that follows the variation of the input; a higher R-Square value indicates that the model fits the data well:

SE_{Line,C} = \sum_{k=1}^{n} \left( C_{k,pred} - Y_{k,C} \right)^{2}, \quad C \in \{R, G, B\}    (11)

SE_{Y,C} = \sum_{k=1}^{n} \left( Y_{k,C} - Y_{avg,C} \right)^{2}    (12)

R_{C}^{2} = \left( 1 - SE_{Line,C} / SE_{Y,C} \right) \times 100.    (13)

In (11), SE_{Line,C} is the sum of squared errors from the fitted line. In (12), SE_{Y,C} is the sum of squared errors from the average transmitted color line for each color channel. In (13), the R-Square value R_{C}^{2} measures, in percent, the variation of the output with the variation of the input for every color channel.

4. Experimental setup

For this experiment, we use a WS2812B LED array as the transmitter, programmed with an Arduino Nano (ATmega328P microcontroller). We generate random LED colors at the transmitter. We take pictures with a SAMSUNG GALAXY GRAND PRIME smartphone, equipped with a 1.2 GHz quad-core processor, 1 GB of RAM, and an 8-megapixel camera in low-exposure mode. We take 85 images to construct the dataset, maintaining a distance of 0.3 m between the object and the smartphone. All pictures are taken at the same distance and angle by fixing the smartphone camera on a tripod. A light meter (CL-200A) is used to measure the environmental light. We changed the position of the light meter from the transmitter to the receiver, and the measured value stayed almost the same, so the measured light value in lux is independent of the position of the light meter. We capture our images at two light intensities, 560 lux and 2450 lux, referred to as low and strong environmental light, respectively. Room environmental light means the adequate illumination condition of an ordinary room, so we do not quote a lux value for it. The images contain 340 random RGB colors in total, because each image has four different LED colors. The algorithm is executed in MATLAB 2017 on a computer with 8 GB of RAM running Microsoft Windows 10. The experimental setup is shown in Fig. 6.

5. Results and discussion

We use 80% of the data to train our regression model and determine the coefficients of the selected features, and the remaining 20% to test our algorithm at room, low, and strong environmental light. The selected features, their correlation coefficients, and the R-Square (%) values of the training and test datasets are listed in Tables 1–3 for the red, green, and blue color channels, respectively, at the three environmental light conditions.
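The coefficient estimation of (7), the predictions of (8)–(10), and the R-Square computation of (11)–(13) can be sketched as follows. This is a minimal numpy illustration of the same least-squares algebra, not the authors' MATLAB implementation:

```python
import numpy as np

def fit_coefficients(F, y):
    """Eq. (7): beta = (F^T F)^(-1) F^T Y for one color channel,
    computed with a least-squares solver for numerical stability."""
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    return beta

def predict(F, beta):
    """Eqs. (8)-(10): predicted channel values are the selected-feature
    matrix times the coefficient vector."""
    return F @ beta

def r_square_percent(y_true, y_pred):
    """Eqs. (11)-(13): R-Square in percent."""
    se_line = np.sum((y_pred - y_true) ** 2)        # eq. (11)
    se_y = np.sum((y_true - np.mean(y_true)) ** 2)  # eq. (12)
    return (1.0 - se_line / se_y) * 100.0           # eq. (13)
```

On noiseless linear data this returns an R-Square of 100%; camera noise and ambient light push the values toward the 91–96% range reported in Tables 1–3.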
Table 1
Selected features for red color channel.

                              Correlation coefficients
Types of environmental light  R_avg  R_avg−R_std  R_min  R_avg+R_min  R_min−R_std  R_AvgE  R²(train,red)  R²(test,red)
Room                          0.81   0.83         0.83   0.82         0.83         0.76    95.13%         94.85%
Low (560 lux)                 0.89   0.90         0.90   0.90         0.91         0.82    96.45%         95.64%
Strong (2450 lux)             0.84   0.85         0.84   0.85         0.81         0.80    94.37%         95.05%
Table 2
Selected features for green color channel.

                              Correlation coefficients
Types of environmental light  G_avg  G_avg−G_std  G_min  G_avg+G_min  G_max+G_min−G_std  R²(train,green)  R²(test,green)
Room                          0.84   0.87         0.86   0.86         0.83               94.91%           94.80%
Low (560 lux)                 0.81   0.84         0.83   0.82         0.81               94.29%           94.69%
Strong (2450 lux)             0.76   0.77         0.76   0.77         0.77               91.03%           92.44%
Table 3
Selected features for blue color channel.

                              Correlation coefficients
Types of environmental light  B_avg−B_std  B_min  B_avg+B_min  B_min+B_std  R²(train,blue)  R²(test,blue)
Room                          0.91         0.90   0.91         0.90         96.61%          96.24%
Low (560 lux)                 0.89         0.88   0.88         0.88         94.83%          94.14%
Strong (2450 lux)             0.88         0.86   0.87         0.87         94.56%          93.58%
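The feature construction and the correlation-based selection summarized in Tables 1–3 can be sketched as follows. This is a minimal illustration under the paper's notation (per-channel statistics C_avg, C_std, C_min, C_max), not the authors' MATLAB code, and the helper names are hypothetical:

```python
import numpy as np

def build_features(c_avg, c_std, c_min, c_max):
    """Stack the base statistics of one color channel with the derived
    combinations used in the paper (C_avg +/- C_std, C_avg + C_min,
    C_min - C_std, C_max + C_min - C_std, C_min + C_std)."""
    return np.column_stack([
        c_avg, c_std, c_min, c_max,
        c_avg + c_std, c_avg - c_std,
        c_avg + c_min, c_min - c_std,
        c_max + c_min - c_std, c_min + c_std,
    ])

def select_features(F, y, threshold=0.75):
    """Keep feature columns whose Pearson correlation with the
    transmitted channel value y exceeds the threshold, as the paper
    does with its 0.75 cutoff."""
    r = np.array([np.corrcoef(F[:, j], y)[0, 1] for j in range(F.shape[1])])
    keep = r > threshold
    return F[:, keep], r
```

The per-channel feature counts in Tables 1–3 (six for red, five for green, four for blue) then correspond to the columns that survive the 0.75 threshold.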
than red color channel at room environmental light. The maximum test R-Square (%) values, 94.85, 94.80, and 96.24, are observed with six, five, and four features for the red, green, and blue color channels, respectively.

In Fig. 8(a) and (b), at low environmental light, the red color channel curve performs better than the other two color channel curves, indicating that the red color channel is less influenced by low environmental light. The maximum test R-Square (%) values, 95.64, 94.69, and 94.14, are observed with six, five, and four features for the red, green, and blue color channels, respectively. It is also worth noting that the performance of the red color channel on the training and test datasets is better at low environmental light than at room environmental light.

In Fig. 9(a) and (b), at strong environmental light, the red and blue color channel curves perform better than the green color channel curve, indicating that the red and blue color channels are less influenced by strong environmental light. The maximum test R-Square (%) values, 95.05, 92.44, and 93.58, are observed with six, five, and four features for the red, green, and blue color channels, respectively.

From this analysis of the R-Square (%) values for the training and test datasets of the red, green, and blue color channels, the maximum test R-Square (%) value is obtained with six, five, and four features, respectively, at all three types of environmental light when features are selected on the basis of a correlation coefficient greater than 0.75, as listed in Tables 1–3.

Figs. 10–12 show the transmitted and predicted curves for the red, green, and blue color channels, respectively, at room environmental light. They show that our prediction model predicts the data almost accurately, although some predicted data points vary above and below the transmitted curve. We measure the closeness of the two curves for the red, green, and blue color channels as approximately 92.33%, 90.30%, and 92.80%, respectively. The closeness of each color channel is over 90%, strongly validating our prediction algorithm: our multivariate regression model predicts the RGB LED color with an accuracy of more than 90% at room environmental light.

Figs. 13–15 show the transmitted and predicted curves for the red, green, and blue color channels, respectively, at low environmental light. We measure the closeness of the transmitted and predicted curves of the red color channel as 90.93%; for the green and blue color channels, the curves are 88.68% and 87.33% close, respectively. This shows
Fig. 10. Transmitted and predicted colors for red color channel in test dataset at room environmental light.
Fig. 11. Transmitted and predicted colors for green color channel in test dataset at room environmental light.
Fig. 12. Transmitted and predicted colors for blue color channel in test dataset at room environmental light.
Fig. 13. Transmitted and predicted colors for red color channel in test dataset at low environmental light.
Fig. 14. Transmitted and predicted colors for green color channel in test dataset at low environmental light.
Fig. 15. Transmitted and predicted colors for blue color channel in test dataset at low environmental light.
Fig. 16. Transmitted and predicted colors for red color channel in test dataset at strong environmental light.
Fig. 17. Transmitted and predicted colors for green color channel in test dataset at strong environmental light.
that our multivariate regression model predicts the RGB LED color with an accuracy of more than 87% at low environmental light.

Figs. 16–18 show the closeness of the transmitted and predicted curves for the red, green, and blue color channels at strong environmental light. We estimate that the predicted values of the red color channel are 89.76% close to the transmitted color, and we measure that the predicted values of the green and blue color channels are 85.20% and 87.20% close to the transmitted color, respectively. So, our proposed multivariate regression model predicts the RGB LED color with an accuracy of more than 85% at strong environmental light.

Fig. 19 shows the distortion (%) bar graph in the CIE1931 color space between the original transmitted colors and the predicted received colors for the test dataset at room environmental light. We measure the distortion with respect to the maximum distance in the CIE1931
color space. The maximum possible distance in the CIE1931 color space can be estimated as the distance between two opposite corners, from the (0, 0) coordinate to the (0.8, 0.8) coordinate, which is about 1.13 (√(2 × 0.80 × 0.80)). We measure the number of data points with high and low levels of distortion. In Fig. 19, we identify 53 data points with distortion of less than 15% out of the 68 data points in the test dataset, and one data point with high distortion, close to 75%. The minimum distortion is 0.35% and the maximum distortion is 74%. From these statistics, approximately 80% of the test data have small distortion (less than 15%) at room environmental light.

Figs. 20 and 21 show the distortion (%) bar graphs in the CIE1931 color space between the original transmitted colors and the predicted received colors for the test dataset at low and strong environmental light, respectively. In Fig. 20, we identify 45 data points with distortion of less than 15% out of the 68 test data points, 4 data points with distortion greater than 50%, and 2 data points with high distortion, close to 70%. The minimum distortion is 0.35% and the maximum distortion is about 70%. From these statistics, approximately 66% of the test data have small distortion (less than 15%) at low environmental light.

In Fig. 21, we estimate 38 data points with less than 15% distortion, 6 data points with greater than 50% distortion, and 1 data point with nearly 75% distortion. The minimum distortion is 0.11% and the maximum distortion is about 75%. Under this strong environmental light (2450 lux) interference, about 56% of the test data have small distortion (less than 15%).

So, from the above analysis of the test R-Square (%) values, the closeness of the predicted and transmitted color curves, and the distortion bar graphs for the test dataset at room, low, and strong environmental light, we can say that our multivariate regression model is not strongly influenced by environmental light interference. The test R-Square (%) values for each color channel are very close and above 90, the closeness of the predicted and transmitted color curves is greater than 85%, and more than 50% of the test data are distorted by less than 15% at low and strong environmental light. This implies that low and strong environmental light do not influence our results very much, although we obtain better results and less distortion at room and low environmental light than at strong environmental light.

Fig. 18. Transmitted and predicted colors for blue color channel in test dataset at strong environmental light.

6. Conclusions

From the results of our test dataset at the different environmental lights, our proposed algorithm improves the detection of the received colors on the receiver side. The algorithm can be further improved by increasing the size of the feature matrix and finding more highly correlated independent features, also known as predictors. During feature extraction, we calculated some features that were not completely independent, which might have led to the high correlation coefficients among the features. It is important to find independent features that are highly correlated with the transmitted color; this will make the color detection more robust and reduce the amount of distortion. From Tables 1–3, we can see that the blue color channel has a larger test R-Square (%) value (96.24 at room environmental light) than the other color channels. The R-Square (%) values for all of the color channels are very close to 95 at the different environmental lights. From Table 3, we can also see that the correlation coefficients of the features are very high: all selected features have correlation coefficients of more than 0.85. This indicates a stronger linear relationship for the blue color channel than for the others and is responsible for its high R-Square (%) value. However, due to the interference of strong environmental light, the training and test R-Square (%) values of the green color channel decrease to about 91, and the correlation coefficients of its selected features also decrease to less than 0.80, as listed in Table 2. The prediction of the red color channel is best at low environmental light, where we obtain its maximum training and test R-Square (%) values, as listed in Table 1. We also observe that at strong environmental light, the red color channel is not influenced as much as the other color channels. Some of the predicted colors differ from the original colors by approximately 15%, as described in the distortion bar graphs. This distortion can be reduced by increasing the size of the dataset for every color channel, which should also raise the R-Square (%) value above 95 for each color channel. In the near future, we plan to improve the test dataset R-Square (%) value and will attempt to show the symbol error rate with respect to the distance between the transmitter and receiver.

Fig. 19. Bar graph of distortion (%) in CIE1931 color space between transmitted and predicted LED colors of test dataset at room environmental light.
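The distortion measure used for the bar graphs of Figs. 19–21 — the distance between the transmitted and predicted chromaticity coordinates, normalized by the corner-to-corner distance √(2 × 0.80 × 0.80) ≈ 1.13 — can be sketched as follows. The RGB-to-xy conversion is omitted here; the CIE1931 chromaticity coordinates are assumed to be given:

```python
import math

# Maximum distance in the CIE1931 diagram used by the paper:
# from the (0, 0) corner to the (0.8, 0.8) corner, about 1.13.
MAX_CIE_DISTANCE = math.sqrt(2 * 0.80 * 0.80)

def distortion_percent(xy_transmitted, xy_predicted):
    """Distortion (%) between two CIE1931 (x, y) chromaticity points,
    relative to the maximum possible distance in the diagram."""
    d = math.dist(xy_transmitted, xy_predicted)
    return 100.0 * d / MAX_CIE_DISTANCE
```

By this measure, a prediction landing on the opposite corner of the diagram would score 100%, while the paper reports minimum distortions as low as 0.11–0.35%.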
Fig. 20. Bar graph of distortion (%) in CIE1931 color space between transmitted and predicted LED colors of test dataset at low environmental light.
Fig. 21. Bar graph of distortion (%) in CIE1931 color space between transmitted and predicted LED colors of test dataset at strong environmental light.
Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education [2015R1D1A1A01061396] and was also supported by the National Research Foundation of Korea Grant funded by the Ministry of Science, ICT and Future Planning [2015R1A5A7037615].

References

[1] A. Ashok, M. Gruteser, N. Mandayam, J. Silva, M. Varga, K. Dana, Challenge: Mobile optical networks through visual MIMO, in: MobiCom '10: Proceedings of the 16th Annual International Conference on Mobile Computing and Networking, New York, NY, USA, 2010, pp. 105–112.
[2] A. Ashok, M. Gruteser, N. Mandayam, K. Dana, Characterizing multiplexing and diversity in visual MIMO, in: Proceedings of the IEEE 45th Annual Conference on Information Sciences and Systems (CISS), 2011, pp. 1–6.
[3] J.-E. Kim, J.-W. Kim, Y. Park, K.-D. Kim, Applicability of color-independent visual-MIMO for V2X communication, in: 7th International Conference on Ubiquitous and Future Networks (ICUFN), 2015, pp. 898–901.
[4] J.-E. Kim, J.-W. Kim, Y. Park, K.-D. Kim, Color-space-based visual-MIMO for V2X communication, Sensors 16 (2016) E591.
[5] I. Takai, T. Harada, M. Andoh, K. Yasutomi, K. Kagawa, S. Kawahito, Optical vehicle-to-vehicle communication system using LED transmitter and camera receiver, IEEE Photon. J. 6 (2014) 7902513.
[6] T. Yamazato, I. Takai, H. Okada, T. Fujii, T. Yendo, S. Arai, M. Andoh, T. Harada, K. Yasutomi, K. Kagawa, S. Kawahito, Image-sensor-based visible light communication for automotive applications, IEEE Commun. Mag. 52 (2014) 88–97.
[7] J.-E. Kim, J.-W. Kim, K.-D. Kim, LEA detection and tracking method for color-independent visual-MIMO, Sensors 16 (2016) 1027.
[8] S.-H. Chen, C.-W. Chow, Color-shift keying and code-division multiple-access transmission for RGB-LED visible light communications using mobile phone camera, IEEE Photon. J. 6 (2014) 7904106.
[9] N. Rajagopal, P. Lazik, A. Rowe, Visual light landmarks for mobile devices, in: Proceedings of the 13th International Symposium on Information Processing in Sensor Networks (IPSN-14), 2014, pp. 249–260.
[10] P. Hu, P.H. Pathak, X. Feng, H. Fu, P. Mohapatra, ColorBars: Increasing data rate of LED-to-camera communication using color shift keying, in: Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies, 2015, article no. 12.
[11] Analytics Vidhya, Learn everything about analytics, https://www.analyticsvidhya.com/blog/2015/08/common-machine-learning-algorithms/, 2015 (accessed 17 August 2017).
[12] M. Fatima, M. Pasha, Survey of machine learning algorithms for disease diagnostic, J. Intell. Learn. Syst. Appl. 9 (2017) 1–16.
[13] L. Penubaku, L.S. K, An attempt to transfer information using light as a medium and camera as receiver, in: International Conference on Computing and Network Communications (CoCoNet), 2016, pp. 964–968.
[14] H.-Y. Lee, H.-M. Lin, Y.-L. Wei, H.-I. Wu, H.-M. Tsai, K.C.-J. Lin, RollingLight: Enabling line-of-sight light-to-camera communications, in: Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys), 2015, pp. 167–180.
[15] J.-E. Kim, K.-D. Kim, Symbol decision method of color-independent visual-MIMO system using a dynamic palette, submitted for ICCE, January 2018.
[16] S.U. Lee, S.Y. Chung, A comparative performance study of several global thresholding techniques for segmentation, Comput. Vis. Graph. Image Process. 52 (1990) 171–190.
[17] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, A.Y. Wu, An efficient k-means clustering algorithm: Analysis and implementation, IEEE Trans. Pattern Anal. Mach. Intell. 24 (2002) 881–892.
[18] G.K. Uyanik, N. Guler, A study on multiple linear regression analysis, Procedia Soc. Behav. Sci. 106 (2013) 234–240.
[19] M.A. Hall, L.A. Smith, Feature selection for machine learning: Comparing a correlation-based filter approach to the wrapper, FLAIRS (1999) 235–239.
[20] M.A. Hall, Correlation-based feature selection for discrete and numeric class machine learning, University of Waikato, Department of Computer Science, Hamilton, New Zealand, Working Paper 00/08, 2000.
[21] L. Yu, H. Liu, Feature selection for high-dimensional data: A fast correlation-based filter solution, in: Proceedings of the 20th International Conference on Machine Learning (ICML-03), 2003, pp. 856–863.
[22] X. Li, J. Zheng, Active learning for regression with correlation matching and labeling error suppression, IEEE Signal Process. Lett. 23 (2016) 1081–1085.