Single Image Rain Streak Decomposition Using Layer Priors
Yu Li, Member, IEEE, Robby T. Tan, Member, IEEE, Xiaojie Guo, Member, IEEE,
Jiangbo Lu, Senior Member, IEEE, and Michael S. Brown, Member, IEEE

Abstract—Rain streaks impair the visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision and image analysis systems. Rain streak removal algorithms try to recover a rain streak free background scene. In this paper, we address the problem of rain streak removal from a single image by formulating it as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either sparse dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, their performance can often be unsatisfactory, for they tend to either over-smooth the background images or generate images that still contain noticeable rain streaks. To address these problems, we propose a method that imposes priors for both the background and rain streak layers. These priors are based on Gaussian mixture models learned on small patches that can accommodate a variety of background appearances as well as the appearance of the rain streaks. Moreover, we introduce a structure residue recovery step to further separate the background residues and improve the decomposition quality. Quantitative evaluation shows our method outperforms existing methods by a large margin. We overview our method and demonstrate its effectiveness over prior work on a number of examples.

Index Terms—Rain removal, image decomposition, visibility recovery, image prior, Gaussian mixture models.

Fig. 1: Upper row: an example rain image and a zoomed-in patch that mainly contains the annoying effect of rain streaks. Our method learns a rain streak layer prior on this region and uses it for layer separation. Lower row: the background and rain streak layers recovered by our proposed method, respectively. Note the rain streak layer is amplified for better visualization.
I. INTRODUCTION

MOST computer vision and image processing algorithms assume that the input image is of scene content that is clear and visible. However, for outdoor images, undesirable interference from rainy weather is often inevitable. Rain introduces several different types of visibility degradation. Raindrops that fall and flow on a camera lens or a windscreen can obstruct, deform, and/or blur the imagery of the background scenes. Distant rain streaks accumulated throughout the scene reduce the visibility in a manner similar to fog, namely by scattering light out of and into the line of sight, creating a veiling phenomenon. Nearby rain streaks, where the individual rain streaks are visible, can also significantly degrade visibility due to their specular highlights, scattering, and blurring effect. Figure 1 shows an example of visibility degradation due to rain streaks and our attempt to remove them.

This study is supported by the HCCS research grant from A*STAR. R. T. Tan's work in this research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centre in Singapore Funding Initiative. X. Guo is supported by the National Natural Science Foundation of China (grant no. 61402467). This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund for the Vision: Science to Applications (VISTA) programme. Y. Li is with the Advanced Digital Sciences Center, Singapore (e-mail: ian.li.liyu@gmail.com). R. T. Tan is with Yale-NUS College and the National University of Singapore (e-mail: tanrobby@gmail.com). X. Guo is with the State Key Lab of Information Security, IIE, CAS, China (e-mail: guoxiaojie@iie.ac.cn). J. Lu is with Shenzhen Cloudream Technology Co., Ltd., Shenzhen, China and also holds an adjunct position with the Advanced Digital Sciences Center, Singapore (e-mail: jiangbo.lu@gmail.com). M. S. Brown is with York University, Canada (e-mail: mbrown@cse.yorku.ca). This work was partly done when J. Lu was working in ADSC.

Mathematically, the observed rain image O ∈ R^(M×N) can be modeled as a linear superimposition [1], [2] of the desired background layer B ∈ R^(M×N) and the rain streak layer R ∈ R^(M×N), expressed as O = B + R. The goal of rain streak removal is to decompose the rain-free background B and the rain streak layer R from an input image O, and hence enhance the visibility of the image. This layer separation problem is ill-posed, as the number of unknowns to be recovered is twice as many as that of the input. One common strategy is to use multiple images, or a video sequence, to mitigate the difficulty of the background scene recovery given the rich temporal
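To make the superimposition model O = B + R concrete, here is a minimal numerical sketch. The toy layers are hypothetical stand-ins, not the paper's data; the point is that any nonnegative split of O reproduces the observation exactly, which is why the priors introduced later are needed to pick the right decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical background layer B and rain streak layer R (values in [0, 1]).
B = 0.5 * rng.random((6, 6))
R = np.zeros((6, 6))
R[:, 2] = 0.3  # a crude vertical "streak"

# Observation model: O = B + R (linear superimposition).
O = B + R

# Ill-posedness: a completely different split also explains O exactly,
# so extra constraints (priors) are required to recover the true layers.
B_alt, R_alt = 0.5 * O, 0.5 * O
assert np.allclose(B_alt + R_alt, O)
print(np.abs(B_alt - B).max() > 0)  # the two decompositions differ
```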
information. In this paper, however, we focus on the problem of rain streak removal given a single image only.

While removing rain streaks in a single image is challenging as less information is available, single image approaches are desired for two main reasons. First, in many situations, the input is only a single image captured in a rainy environment (e.g., archived images, images available on the Internet, images taken by still cameras). Second, for dynamic scenes, such as when the camera moves and some part of the scene also moves, temporal information might not be reliable. As a result, being able to remove rain streaks from each frame will benefit the overall final result, since most existing video-based methods assume a static background.

To make the problem well-posed and tractable, we enforce layer priors on both the background and rain components. More specifically, the idea of using patch-based priors is inspired by Zoran and Weiss [3], who used Gaussian mixture models (GMMs) to model natural image patches. This approach is simpler to compute than existing prior models, such as FoE [4] or Weiss-Freeman's priors [5]. Recent work from Shih et al. [6] shows the superiority of using GMMs as priors in solving the reflection removal problem [7].

In our rain removal approach, to model the background patch priors, we use a GMM trained on patches from natural images. An additional gradient sparsity constraint is imposed to further regularize the background. As for the rain layer, we do the same: we train a GMM from small image patches of rain streaks. The difference in rain GMM learning is that we use only the input image and select a region with textureless background to generate the rain patches. Unlike existing methods in single-image rain streak removal ([2], [1], [8]), our method is straightforward and generates considerably better results qualitatively and quantitatively. To our knowledge, this is the first method to use GMM patch priors for the purpose of rain streak removal.

A shorter version of this work appeared in [9]. This journal version presents the algorithm in more technical detail and includes a new structure residue recovery step to recover the additional background residue in the rain streak output from our optimization. This step can further improve layer separation quality over our original method in [9]. In addition, we evaluate the proposed method on a synthetic data set by measuring the quality of both the recovered background and the rain streak layer. We also add a comparison to the recent work of [8]. Experiments show that our method, particularly with the structure residue recovery step, can outperform the existing methods by a large margin in terms of both SSIM and PSNR metrics.

The remainder of this paper is organized as follows. Section II discusses the related methods that deal with rain, including video-based rain streak removal and single image-based rain streak removal. Section III details our method, including the problem formulation and optimization. Section IV shows the results and analyzes them in comparison with the results of other methods. The paper is concluded in Section V.

II. RELATED WORK

There are a number of methods proposed to improve the visibility of images captured in bad weather like haze and fog (e.g., [10], [11], [12]), rain, or snow (e.g., [13], [1], [2]). Methods specifically dealing with rain streak interference fall into two categories: multiple image/video-based and single image methods.

A. Video-Based Methods

Early methods to remove rain streaks include work by Garg and Nayar [14], [13], which introduces a rain streak detection and removal method for a video sequence. The detection is based on two constraints: first, since rain streaks are dynamic, their changes in intensity within a few frames are relatively large. Second, since other objects are also possibly dynamic, rain streaks can be differentiated from these objects by verifying whether the intensity changes along the streak are photometrically linearly related to the background intensity. This second constraint reduces the false alarms introduced by the first constraint. Having detected the rain streaks, the method removes them by taking the average intensity of the pixels taken from the previous and subsequent frames.

Garg and Nayar [15] propose another method that exploits the ability to control a video camera's operational parameters when capturing a rainy scene. To this end, they show that rain visibility in images relies significantly on the exposure time and depth of field of the camera. Thus adjusting these parameters while taking the video will allow us to reduce the appearance of the rain streaks. Zhang et al. [16] added an additional constraint called the chromaticity constraint. They found the intensity changes in the RGB color channels are the same for pixels representing rain streaks.

More recently, Bossu et al. [17] propose a rain detection algorithm based on the histogram of streak orientations. The main idea is to fit a Gaussian distribution on rain streak histograms, such that rain streaks can be differentiated from other dynamic objects. It uses background subtraction to obtain the rain histograms. Barnum et al. [18] develop a model of the rain profiles in a frequency domain and analyze the properties of rain in that domain. Their frequency analysis allows for detection and removal of rain or snow streaks. Santhaseelan and Asari [19] apply phase congruency features to detect rain streaks, and then reconstruct the background scene using the registration of phase information obtained from optical flow (to account for moving objects present in the scene). There are also methods that treat rain removal in video as a low-rank tensor completion problem, for example, [20], [21], assuming a constant background (see [22] for a complete review of existing video-based rain streak removal).

B. Single Image Methods

For single-image rain streak removal, Kang et al. [1] propose a method that decomposes an input image into its low-frequency component (the structure layer) and a high-frequency component (the texture layer). The high-frequency
part contains rain streaks and edges of background objects. They separate the rain streak frequency from the high-frequency layer via sparse coding-based dictionary learning with HoG features. The output is obtained by combining back the low-frequency and processed high-frequency layers. While the decomposition idea is elegant, the overall framework in [1] is complex. The results are not optimal either, particularly as the background tends to be blurry, which is caused by too few high-frequency components in the background found using the dictionary. These problems remain in the follow-up works of this method, for example, [23], [24].

More recently, Chen and Hsu [2] introduce a single objective function to decompose the background and rain streak layers. They formulate a cost function with three terms: the likelihood, a smoothed background layer, and a low-rank rain streak layer. Although the idea of posing the problem as an objective function is attractive, the constraints seem not sufficiently strong. In our experiments, we still observe a large amount of rain streaks in the output. Kim et al. [25] detect rain streaks by assuming the elliptical shape and the vertical orientation of the rain, and remove the detected streaks using nonlocal mean filtering. This idea works for some cases of rain streaks, but unfortunately detecting individual streaks is challenging, because they can possibly have different orientations, scales, and densities. The most recent work of [8] proposes a discriminative sparse coding framework to remove rain streaks in a single image. While effective, the approach often still leaves noticeable thin structures from the rain streaks in the final output.

Aside from dealing with rain streaks, several methods have been proposed to address artifacts that arise when raindrops adhere to the camera lens or a windscreen in front of the camera (e.g., [26], [27], [28], [29], [30]). The problems specific to adherent raindrops, however, are notably different from the interference caused by rain streaks. In particular, static adherent raindrops, which are the problem most existing methods target, are generally less dense than rain streaks. In addition, their size in an image is generally much larger than rain streaks, and they tend to completely occlude parts of the background scene.

III. OUR METHOD

A. Problem Formulation

As introduced, the observed rain image O ∈ R^(M×N) can be modeled as a linear superimposition of the desired background layer B ∈ R^(M×N) and the rain streak layer R ∈ R^(M×N), such that:

O = B + R.  (1)

Hence, the goal of rain streak removal is to decompose the rain-free background B and the rain streak layer R from a given input image O. As previously stated, this problem is ill-posed. To resolve it, we propose to maximize the joint probability of the background layer and the rain layer using the MAP (maximum a posteriori): that is, maximize p(B, R|O) ∝ p(O|B, R) · p(B) · p(R) with the assumption that the two layers B and R are independent. Equivalently, with slight algebraic manipulation on its negative log function, we obtain the following energy minimization problem:

min_{B,R} ‖O − B − R‖²_F + Φ(B) + Ψ(R)   s.t. ∀i, 0 ≤ B_i, R_i ≤ O_i,  (2)

where ‖·‖_F represents the Frobenius norm and i is the pixel index. The first term ‖O − B − R‖²_F is essentially concerned with the fidelity between the observed and the recovered signals, while Φ(B) and Ψ(R) designate the priors that will be respectively imposed on B and R to regularize the inference. The inequality constraint ensures that the desired B and R are positive images. More importantly, this inequality constraint plays a critical role in estimating reliable solutions, as it regularizes the DC component of the recovered layers, as verified in the recent works [31], [6].

Focusing our discussion on the priors, we first define the prior of the background layer as:

Φ(B) := −γ Σ_i log G_B(P(B_i)) + α‖∇B‖₁,  (3)

where γ(= 0.01) and α(= 0.05) are two non-negative coefficients balancing the corresponding terms. The function P(B_i) extracts the n × n (pre-defined size) patch around pixel B_i and reshapes it into a vector of length n² with the DC component removed. The term G(x) stands for the GMM of x, that is, G(x) := Σ_{k=1..K} π_k N(x|μ_k, Σ_k), where K is the total number of Gaussian components, π_k is the component weight such that Σ_{k=1..K} π_k = 1, while μ_k and Σ_k are the mean and covariance corresponding to the k-th component, respectively. As the function P(·) has removed the mean of every patch, μ_k = 0 for all k. The benefit of this patch regularizer based on the GMM has been demonstrated in [3]. In addition, it has been widely recognized that natural images are largely piecewise smooth and their gradient fields are typically sparse. Therefore we employ ‖∇B‖₁ to achieve such a functional, where ∇ denotes the gradient operator and ‖·‖₁ is the ℓ₁ norm.

As for the prior of the rain layer, we write it in the following form:

Ψ(R) := −γ Σ_i log G_R(P(R_i)) + β‖R‖²_F,  (4)

where G_R(P(R_i)) is similar to G_B(P(B_i)). Note that we use two different GMMs for the background and rain layers, termed G_B and G_R, respectively (we discuss this GMM modeling further in Section III-C). Since the rain component tends to make up a small fraction of the observation, we impose ‖R‖²_F to penalize it, the importance of which is controlled by the parameter β(= 0.01).

Putting all terms together leads to the complete formulation of the energy function:

min_{B,R} ‖O − B − R‖²_F + α‖∇B‖₁ + β‖R‖²_F − γ Σ_i (log G_B(P(B_i)) + log G_R(P(R_i)))   s.t. ∀i, 0 ≤ B_i, R_i ≤ O_i.  (5)

The optimization approach to minimize this energy function is discussed in the next section.
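For illustration, the energy in Eq. (5) can be evaluated numerically as sketched below. The tiny image and the GMMs fitted on random 64-dimensional vectors are hypothetical stand-ins for the learned priors; this is a sketch only, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def patches(img, n=8):
    """All n x n patches of img, flattened, with their DC (mean) removed."""
    H, W = img.shape
    out = np.array([img[i:i+n, j:j+n].ravel()
                    for i in range(H - n + 1) for j in range(W - n + 1)])
    return out - out.mean(axis=1, keepdims=True)

def energy(O, B, R, gmm_B, gmm_R, alpha=0.05, beta=0.01, gamma=0.01):
    """Eq. (5): fidelity + TV on B + L2 on R - gamma * GMM log-likelihoods."""
    fidelity = np.sum((O - B - R) ** 2)
    tv = alpha * (np.abs(np.diff(B, axis=0)).sum()
                  + np.abs(np.diff(B, axis=1)).sum())
    l2 = beta * np.sum(R ** 2)
    prior = -gamma * (gmm_B.score_samples(patches(B)).sum()
                      + gmm_R.score_samples(patches(R)).sum())
    return fidelity + tv + l2 + prior

# Toy GMMs fitted on random patch vectors, standing in for G_B and G_R.
rng = np.random.default_rng(0)
gmm_B = GaussianMixture(3, random_state=0).fit(rng.normal(size=(200, 64)))
gmm_R = GaussianMixture(3, random_state=0).fit(rng.normal(size=(200, 64)))
O = rng.random((16, 16))
print(energy(O, 0.5 * O, 0.5 * O, gmm_B, gmm_R))
```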

Fig. 2: (Upper left) The input image and the selected region used to learn the rain streak GMM. (Bottom left) Visualization
of the eigenvectors of covariance of three randomly picked rain GMM components, sorted by their eigenvalues in descending
order. The visualization helps to reveal that the GMM can capture the rain orientation and structure information. (Right) Some
example natural images and the visualization of the eigenvectors of covariance of three randomly picked GMM components.
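The eigenvector visualizations in Fig. 2 can be reproduced for any mixture component by an eigen-decomposition of its covariance; the random covariance below is a hypothetical stand-in for a trained component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one GMM component's 64 x 64 covariance
# (an 8 x 8 patch model); a real one would come from EM training.
A = rng.standard_normal((64, 64))
Sigma = A @ A.T  # symmetric positive semi-definite

# Eigenvectors sorted by eigenvalue in descending order; each column can
# be reshaped to an 8 x 8 image and displayed as in Fig. 2.
w, V = np.linalg.eigh(Sigma)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]

patch_images = V.T.reshape(-1, 8, 8)  # one small image per eigenvector
print(patch_images.shape)  # (64, 8, 8)
```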

B. Optimization

As noticed in Eq. (5), the cost function is non-convex due to the patch GMM priors. A commonly used scheme to solve this kind of problem is the half-quadratic splitting technique [32]. To cast our problem into the half-quadratic splitting framework, we need to make the objective function separable. Hence, auxiliary variables g_{Bi}, g_{Ri} and H are introduced to replace P(B_i), P(R_i) and ∇B, respectively. By doing so, the optimization problem turns into the following form:

min ‖O − B − R‖²_F + α‖H‖₁ + β‖R‖²_F − γ Σ_i (log G_B(g_{Bi}) + log G_R(g_{Ri})) + ω‖∇B − H‖²_F + ω Σ_i (‖P(B_i) − g_{Bi}‖²₂ + ‖P(R_i) − g_{Ri}‖²₂)   s.t. ∀i, 0 ≤ B_i, R_i ≤ O_i,  (6)

where ‖·‖₂ represents the ℓ₂ norm. Notice that ω is a positive parameter that monotonically increases after each iteration (starting from ω₀ = 0.02). As ω grows, the solutions to Eq. (6) approach those to Eq. (5). The proposed algorithm iteratively updates the variables as described in the following.

a) Solving H: Discarding the variables unrelated to H yields:

H^(t+1) = argmin_H α‖H‖₁ + ω‖∇B^(t) − H‖²_F.  (7)

This is a classic LASSO problem. Its closed-form solution can be efficiently obtained by the shrinkage operator, the definition of which on scalars is S_ε[x] := sgn(x) max(|x| − ε, 0) for ε > 0. The extension of the shrinkage operator to vectors and matrices is simply applied element-wise. As a result, we have:

H^(t+1) = S_{α/2ω}[∇B^(t)].  (8)

b) Solving {B, R}: By fixing H, g_{Bi}, and g_{Ri}, the optimization problem corresponding to {B, R} becomes:

{B^(t+1), R^(t+1)} = argmin_{B,R} ‖O − B − R‖²_F + β‖R‖²_F + ω^(t)‖∇B − H^(t+1)‖²_F + ω^(t) Σ_i (‖P(B_i) − g_{Bi}^(t)‖²₂ + ‖P(R_i) − g_{Ri}^(t)‖²₂)   s.t. ∀i, 0 ≤ B_i, R_i ≤ O_i.  (9)

Following [6], we use L-BFGS [33] to minimize this constrained L2 problem. L-BFGS is an effective solver for this problem, since the involved terms are all quadratic.

c) Solving g_{Bi} (g_{Ri}): Since the g_{Bi} and g_{Ri} subproblems share the same formulation with the other variables given, we detail only the solver of g_{Bi} here, while g_{Ri} can be updated analogously. The optimization problem associated with each g_{Bi}^(t+1) is expressed as:

g_{Bi}^(t+1) = argmin_{g_{Bi}} ω‖P(B_i) − g_{Bi}‖²₂ − γ log G_B(g_{Bi}).  (10)

The approximate optimization suggested by [3] is used. This approach applies Wiener filtering using only the component with the largest likelihood in the GMM. The whole process is summarized in Algorithm 1.

Note that we first convert the RGB input image to the YUV space and remove rain streaks on the luminance (Y) channel. We obtain the final output by converting the rain streak removal result in YUV back to RGB space.

C. Gaussian Mixture Model Learning

In this section, we describe how to obtain the two GMMs for the background and rain layers, namely G_B and G_R. To obtain G_B, we utilize a pre-trained GMM model provided by
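The H-update of Eqs. (7)-(8) is straightforward to reproduce. Below is a sketch; the forward-difference gradient discretization and the toy ramp image are my own choices for the example, not values from the paper.

```python
import numpy as np

def shrink(x, eps):
    """Soft-thresholding S_eps[x] = sgn(x) * max(|x| - eps, 0), element-wise;
    the closed-form solution of the LASSO problem in Eq. (7)."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def grad(B):
    """Forward differences along y and x, stacked (one common discretization)."""
    dy = np.diff(B, axis=0, append=B[-1:, :])
    dx = np.diff(B, axis=1, append=B[:, -1:])
    return np.stack([dy, dx])

alpha, omega = 0.05, 0.02
B = np.outer(np.linspace(0.0, 1.0, 5), np.ones(5))  # toy ramp "background"
H = shrink(grad(B), alpha / (2.0 * omega))  # Eq. (8)
print(H.shape)  # (2, 5, 5)
```

In the full loop of Algorithm 1, this update would alternate with the {B, R} and g updates while omega doubles each iteration.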
Fig. 3: Rain streak removal example for heavy rain. Panels, left to right: input, direct rain removal, dehazing output, final result. We found that first applying a dehazing method (3rd column), followed by the rain streak removal, helps improve results.

[3] with 200 mixture components and patch size 8 × 8. This model is learned using Expectation-Maximization (EM) from a set of 2 × 10⁶ patches sampled from 300 natural images with their DC removed. The images are all rain streak-free images and thus can directly serve our purpose of modeling the background layer.

Algorithm 1 Rain Streak Removal Using Layer Priors
Input: input image O; GMMs for the two layers: G_B and G_R;
Initialization: B ← O; R ← 0; ω ← ω₀;
repeat
  update H using Eq. (8);
  solve {B, R} by Eq. (9);
  solve {g_{Bi}, g_{Ri}} by Eq. (10);
  ω = 2ω;
until convergence or maximum iteration number;
Output: the estimated two layers B and R.

To obtain G_R, our method attempts to extract the internal properties of rain streaks within the input image itself, like [2], [1]. This means we do not require building a dataset of rain streaks to learn the rain streak model. Instead, we directly learn the priors from the specific input image. Doing this guarantees the correctness of the rain streak appearance, which otherwise can differ considerably from one image to another. Unlike [2], [1], which work on the entire image, we found that G_R requires only small regions (e.g., a region of size 100 × 100 contains about 10K overlapping 8 × 8 patches), since rain streaks mostly form repetitive patterns.

We observe that most natural images contain regions with a relatively homogeneous background, for instance, sky and walls. The image patches within these regions can be treated as pure rain streaks and used to train G_R. To select such regions, we calculate the variance within every possible region in a sliding window fashion and pick the ones with the least variance. We further exclude uniformly bright regions, where rain streaks are hardly observed, by removing the candidates with mean intensity higher than a threshold (0.8). This strategy of using local patches for global image restoration shares a similar spirit with [34].

After detecting the pure rain streak region (e.g., see the rectangular box in Figure 1), we sample 8 × 8 small patches from this region and perform EM to learn the parameters of G_R. We set the cluster number for G_R to a small value, 20, compared with 200 for G_B, as the rain streak appearance of a single image has less variance than the background layer.

Figure 2 (left) shows the eigenvectors of three randomly selected mixture components from the learned G_R. Note that they have rain streak structures that contribute much to the expressive power of the model for the rain streak layer. On the right side of Figure 2, we also visualize the eigenvectors of three randomly selected mixture components from G_B, learned on natural image patches. As can be seen, the components in G_B show clear differences from those in G_R, as they contain more diversity.

D. Dealing with Heavy Rain

For images taken in scenes with heavy rainfall, rain streaks accumulate densely throughout the environment and partially occlude the scene content. As a result, the rain streaks scatter the atmospheric light and produce a 'washed-out' effect similar to haze and fog [11]. In such cases, the individual streaks, mainly nearby rain streaks, cannot be observed anymore. For this case, we found that applying a dehazing method, for example [35], as a preprocessing step can be useful. Two examples are shown in Figure 3. We can see the individual rain streaks are clearer after the dehazing preprocessing step. Having applied the dehazing, we then run our rain streak removal method, which produces clearer results in terms of visibility, as shown in Figure 3. This preprocessing step could be an option for users who want to handle heavy rain cases.
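The textureless-region selection described above can be sketched as follows; the window stride and the toy luminance image are assumptions for the example, not values from the paper.

```python
import numpy as np

def pick_rain_region(Y, size=100, stride=20, bright_thresh=0.8):
    """Return the top-left corner of the lowest-variance window that is not
    uniformly bright (Y: luminance in [0, 1]), per the heuristic above."""
    best, best_var = None, np.inf
    H, W = Y.shape
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            win = Y[i:i+size, j:j+size]
            if win.mean() > bright_thresh:  # skip saturated sky / white walls
                continue
            v = win.var()
            if v < best_var:
                best, best_var = (i, j), v
    return best

rng = np.random.default_rng(1)
Y = rng.random((200, 200))
Y[40:140, 60:160] = 0.5 + 0.01 * rng.standard_normal((100, 100))  # flat area
print(pick_rain_region(Y))
```

The 8 × 8 patches inside the returned window would then be fed to EM (e.g., a 20-component mixture) to learn G_R.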

Fig. 4: Background-structure-residue recovery illustration. (Panels, left to right: input O, B, R, residue B̂, B̃, R̃.) We can effectively retrieve the background structure residues B̂ in the initial recovery of the background. Finding these residues helps separate the two layers further, as shown in B̃ and R̃. (Rain streak and structure residue layers are amplified for better visualization.)
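The recombination illustrated in Fig. 4 can be sketched as below. Note the plain box filter is a deliberately simplified stand-in for the guidance-weighted smoothing of [36], [37], used here only to produce some residue estimate B̂; the toy layers are hypothetical.

```python
import numpy as np

def box_blur(x, k=5):
    """Naive k x k edge-padded box filter via an integral image; a
    simplified stand-in for guided smoothing."""
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    c = np.pad(xp.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    H, W = x.shape
    return (c[k:k+H, k:k+W] - c[:H, k:k+W] - c[k:k+H, :W] + c[:H, :W]) / k**2

def recombine(B, R):
    """Estimate the background structure residue B_hat left in the rain
    layer, then recombine: R~ = max(R - B_hat, 0), B~ = B + B_hat."""
    B_hat = box_blur(R)
    return B + B_hat, np.maximum(R - B_hat, 0.0)

rng = np.random.default_rng(0)
B, R = rng.random((32, 32)), 0.1 * rng.random((32, 32))
B_tilde, R_tilde = recombine(B, R)
print(R_tilde.min() >= 0.0)  # True: the refined rain layer stays nonnegative
```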

E. Recovering the Background Structure Residue

While our proposed layer decomposition method can effectively separate most background components from the rain streaks, a closer look at our rain streak layer may still reveal weak background structure residues in some cases, as shown in Figure 4. These background structure residues, which should belong to the background layer, are falsely assigned to the rain streak layer in our optimization. To address this problem, we introduce a refinement step to further retrieve these background structure residues from the rain streak layer.

To achieve this, we use the background output B after the optimization as guidance to further clean the rain streak output R, using the guided image smoothing framework [36], [37]. Specifically, we attempt to obtain the background structure residues B̂ by optimizing the following objective function:

min_{B̂} Σ_i (B̂_i − R_i)² + τ Σ_i Σ_{j∈N₄(i)} φ_{i,j}(B)(B̂_i − B̂_j)²,  (11)

where i is the pixel index and N₄(i) represents the four neighbors of pixel i. The spatially varying weight function φ_{i,j} measures how similar two pixels i and j are in the guidance image B. The first term is the data constraint, which forces the closeness of the residue B̂ to R, while the second term encourages smoothness under the guidance of B. The weight τ balances the data term and the regularization term. We use the fast guided image smoothing technique [37] to solve Eq. (11).

Having obtained the background structure residues B̂, we remove them from the rain streak layer R and add them back to the background layer B. The final background layer and rain streak layer are denoted as B̃ and R̃ respectively and can be computed as

R̃ = max(R − B̂, 0);   B̃ = B + B̂.  (12)

Figure 4 shows the results of this background structure residue recovery step. As can be seen, this step can more successfully recover the background residues and make the rain streak layer cleaner. An improvement in the Structural Similarity Index (SSIM) is also demonstrated.

IV. EXPERIMENTAL RESULTS

We evaluate our method using both synthetic and real images, and compare our results with the state-of-the-art methods, including the sparse representation-based dictionary learning method [1] (denoted as SR), the low-rank appearance method [2] (denoted as LRA), and the discriminative sparse coding approach [8] (denoted as DSC). For SR and DSC, we use the code provided by the authors with their default parameter settings. We have implemented LRA by strictly following the procedure described in [2], and the parameters are fixed to get the best overall performance in the quantitative evaluation. For the quantitative experiments on synthetic data, the ground truth images are available, and we can evaluate and compare the results using SSIM [38] as well as PSNR on the luminance channel (for both metrics, a larger number indicates closer proximity to the ground truth).

Our Matlab implementation takes 93s (5 iterations) / 370s (20 iterations) to process one 480 × 640 color image on a PC with an Intel i7 CPU (3.4GHz) and 8GB RAM, and most of the time is spent on solving Eq. (9) using L-BFGS. Note that in all the results presented in this section, we do not apply the dehazing preprocessing described in Section III-D, for fair comparison with other methods.

A. Synthetic Data

1) Study on parameters: First we present a quick study of the convergence and parameter settings of our method. We tested the SSIM of the recovered layers against the ground truth on our synthetic dataset and plotted the results in Figure 7. It can be seen from Figure 7(a) that our method converges fast, such that it can get to a good solution after about 10 iterations. Figure 7(b)(c)(d) shows the layer recovery results under different parameter settings of α, β, and γ. Our method is not sensitive to the parameter settings and is quite stable when the parameters are within a proper range.

From the objective function in Eq. (5), one can notice that by simply disabling the terms related to g_{Bi} and g_{Ri} (setting γ = 0), our model becomes dominated by the gradient sparsity term α‖∇B‖₁, which is the Total Variation (TV) model [39]. To reveal the benefit of the GMM priors, a comparison
Fig. 5: Illustration of the effect of the GMM. Our objective without the GMM components (second column) cannot distinguish the rain streaks from the image details like the one with the GMM (left two columns). The Structural Similarity Index (SSIM) for each result is shown below the image. (SSIM values, left to right: 0.7777, 0.7116, 0.9290, 0.8791.)

Fig. 6: Rain streak removal results for the two data sets in [1]. The numbers below the images are the SSIM w.r.t. the ground truth (top: Input 0.5605, SR 0.6655, LRA 0.7009, DSC 0.7088, Ours 0.8145; bottom: Input 0.5886, SR 0.7647, LRA 0.7812, DSC 0.6527, Ours 0.8479). Ours shows better background recovery both quantitatively and qualitatively.
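As noted above, disabling the GMM terms (γ = 0) leaves the objective dominated by the TV term α‖∇B‖₁ [39]. A minimal sketch of this anisotropic TV penalty (our own helper, not the paper's solver) shows why pure TV penalizes streaks and genuine fine details alike:

```python
import numpy as np

def tv_l1(B):
    """Anisotropic total variation: the sum of absolute forward
    differences, i.e. the ||grad B||_1 term that dominates when gamma = 0."""
    dx = np.abs(np.diff(B, axis=1)).sum()
    dy = np.abs(np.diff(B, axis=0)).sum()
    return dx + dy

flat = np.ones((16, 16))
streaky = flat.copy()
streaky[:, 8] = 0.0                  # one vertical "rain streak"
print(tv_l1(flat), tv_l1(streaky))   # 0.0 32.0
```

The penalty cannot tell whether the gradients come from a streak or from scene texture, which matches the over-smoothing seen when the GMM terms are disabled.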

between the result with and without the GMM is conducted. The effect is shown in Figure 5, from which we can see that the result without the GMM part is able to filter out the rain streaks but also falsely removes many details that belong to the background, as it behaves similarly to the TV filter. The GMM term greatly helps distinguish the rain streaks from other scene details in the recovered rain layers. The SSIM score increases from (0.7777, 0.7116) without the GMM to (0.9290, 0.8791) with the GMM. This increase demonstrates the effectiveness of the GMM model.

2) Comparison: We compare our method with SR [1], LRA [2], and DSC [8]. Figure 6 shows the results of these methods on the data set provided in [1]. This is the only dataset we can find that is provided by previous work and has ground truth for quantitative evaluation. As observed, our method considerably outperforms the other three methods in terms of both visual quality and SSIM. SR [1] tends to over-smooth the image content, LRA [2] sometimes fails to capture the rain streaks, and DSC [8] leaves a noticeable amount of rain in the outputs. Our proposed method removes the rain streaks while keeping image details in the background layer.

For more comparison, we synthesize a new data set with 12 images using the photorealistic rendering techniques proposed by Garg and Nayar [41]. The background images are from the BSD300 dataset [42], which are all natural images. Table I lists the performance of the methods. Our method with the background structure residue recovery step described in Section III-E is denoted as 'Ours+BSR,' while our original method is denoted as 'Ours.' We have also added a generic edge-aware smoothing filter, the guided filter (GF) [40], for comparison. The numerical performance of the different methods using the SSIM and PSNR metrics is listed in Table I. Figure 8 shows the visual results of two examples.

We first compare the background recovery quality. It is expected that the performance of the generic filtering method (GF) is inferior to that of the task-specific rain streak removal methods. It is surprising to find that the background recovery of GF has an even higher SSIM and PSNR (0.8179 / 29.43 dB)
Fig. 8: Visual comparison of different rain streak removal methods on a synthetic data set. Columns: Input + Ground truth, GF [40], SR [1], LRA [2], DSC [8], Ours+BSR.

than that of SR [1] (0.7437 / 27.69 dB). An inspection of the recovered image in Figure 8 shows the drawback of SR [1]: it tends to over-smooth the background layer. LRA [2] fails to remove the rain streaks in these two examples in Figure 8. While DSC [8], with a higher overall SSIM and PSNR (0.8657 / 30.96 dB), can obtain slightly better visual results, it still leaves many thin rain structures in the background images. Our original method obtains SSIM = 0.9143 and PSNR = 32.85 dB overall, which is far better than SR [1], LRA [2], and DSC [8]. Our method with the background-structure-residue recovery step shows further improvement over our original method, obtains the best background recovery in most cases, and also obtains the best overall performance (0.9228 / 33.91 dB). Visually, our full method with the background-structure-residue procedure (Ours+BSR) obtains much cleaner background recovery than the other methods.

For the rain streak layer recovery, GF gets the lowest numerical scores (0.6977 / 28.87 dB). This is also reflected in Figure 8, where GF [40] falsely assigns many image details to the rain streak layer. In contrast, the rest of the rain streak removal methods [1], [2], [8] capture the rain streaks better than GF [40]. It is observable that LRA [2] obtains the least accurate rain streak recovery (0.7089 / 29.86 dB) among all rain streak removal methods, indicating that low rank may not be a sufficient prior for this task. The visual example reveals the limitation of using the low rank prior to model the rain streak layer [2], because it may treat other repetitive patterns in the image as rain streaks (e.g., the bricks, the building facades). SR [1] (0.7454 / 30.58 dB) can capture some rain streaks in the scene but cannot distinguish rain streaks in highly textured regions. Ours (0.8462 / 32.10 dB) and Ours+BSR (0.8483 / 32.58 dB) show noticeable advantages over existing methods both numerically and visually.

Fig. 7: Study on the parameter settings ((a)-(d)).

B. Results on Real Images

Figure 9 and Figure 10 show the results and comparisons on real images, where the images in Figure 9 are from [8] and the images in Figure 10 were found on the Internet. Similar to the previous experiments, the drawbacks of SR [1], LRA [2], and DSC [8] are still present. SR [1] always produces over-smoothed backgrounds. LRA [2] either over-smooths the background (case 1 in Figure 9 and case 3 in Figure 10) or fails to remove the rain streaks, as the low rank assumption is not always met. DSC [8] exhibits good results when the rain streaks are sparse (Figure 9) but the rain streak removal performance will drop in the cases with more
TABLE I: Quantitative comparison of rain streak removal results on our synthetic data sets (12 images).

Background, SSIM
Method     #1      #2      #3      #4      #5      #6      #7      #8      #9      #10     #11     #12     Average
GF [40]    0.7643  0.8335  0.8792  0.7793  0.7815  0.8552  0.8929  0.8165  0.8126  0.8156  0.7553  0.8287  0.8179
SR [1]     0.7309  0.7859  0.8349  0.7617  0.6229  0.7347  0.8169  0.7661  0.7331  0.7410  0.6326  0.7637  0.7437
LRA [2]    0.8277  0.8686  0.7910  0.8478  0.8797  0.8993  0.9245  0.8182  0.8734  0.8252  0.8514  0.8104  0.8514
DSC [8]    0.8245  0.8816  0.7550  0.9534  0.9150  0.9340  0.9439  0.8108  0.8974  0.8218  0.8487  0.8027  0.8657
Ours [9]   0.8844  0.9288  0.9279  0.9327  0.8956  0.9525  0.9574  0.8970  0.9122  0.8989  0.8638  0.9205  0.9143
Ours+BSR   0.8911  0.9380  0.8759  0.9676  0.9319  0.9693  0.9716  0.9002  0.9396  0.9023  0.8846  0.9015  0.9228

Background, PSNR (dB)
GF [40]    30.33   30.49   28.03   30.77   27.23   29.42   31.03   29.02   29.41   29.47   27.99   30.00   29.43
SR [1]     29.89   28.99   26.42   29.94   25.11   25.68   27.52   27.94   27.85   28.09   26.42   28.37   27.69
LRA [2]    31.04   31.88   25.79   32.46   28.56   30.85   33.38   28.96   30.87   29.71   29.33   29.80   30.22
DSC [8]    31.31   31.37   26.53   36.00   29.50   31.67   34.83   28.90   32.66   29.46   29.11   30.17   30.96
Ours [9]   33.44   34.65   30.47   36.90   29.86   33.48   35.11   31.72   33.16   32.12   30.41   32.84   32.85
Ours+BSR   33.40   35.30   29.90   39.50   30.42   37.13   39.32   31.94   34.25   32.26   30.71   32.85   33.91

Rain streak, SSIM
GF [40]    0.7499  0.7358  0.6980  0.6939  0.5499  0.6455  0.7561  0.7268  0.6891  0.7258  0.6423  0.7591  0.6977
SR [1]     0.7207  0.7750  0.6007  0.8769  0.7294  0.8361  0.9012  0.7552  0.8090  0.7015  0.5548  0.6846  0.7454
LRA [2]    0.7135  0.7509  0.5931  0.8059  0.7122  0.8247  0.8674  0.6997  0.7528  0.6341  0.5599  0.5925  0.7089
DSC [8]    0.7020  0.7587  0.5194  0.8836  0.6917  0.8072  0.8809  0.7035  0.7796  0.6750  0.5784  0.6288  0.7175
Ours [9]   0.8506  0.8802  0.7679  0.9267  0.7753  0.8866  0.9264  0.8612  0.8661  0.8340  0.7385  0.8405  0.8462
Ours+BSR   0.8485  0.8793  0.7920  0.9268  0.7757  0.8872  0.9266  0.8597  0.8659  0.8327  0.7387  0.8462  0.8483

Rain streak, PSNR (dB)
GF [40]    29.65   30.00   27.01   30.64   26.65   29.21   30.85   28.13   28.93   28.75   27.35   29.25   28.87
SR [1]     30.14   31.89   26.74   33.72   28.61   33.51   36.04   28.92   31.40   29.21   27.47   29.32   30.58
LRA [2]    29.45   31.00   26.03   33.76   28.16   33.52   35.17   27.44   30.08   28.19   27.45   28.06   29.86
DSC [8]    29.98   30.05   25.05   34.52   27.91   30.10   33.31   27.59   31.33   28.03   27.75   28.87   29.54
Ours [9]   32.35   34.00   28.74   37.78   29.10   33.30   34.93   30.25   32.69   31.18   29.41   31.51   32.10
Ours+BSR   32.14   34.11   28.80   38.33   29.17   35.69   37.75   30.31   32.70   31.01   29.35   31.61   32.58
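The PSNR column of Table I can be reproduced in principle with a few lines. The helper below is our own sketch (the SSIM column, computed per [38], is omitted for brevity), assuming float layers scaled to [0, 1]:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth layer
    and an estimated layer (both float arrays in [0, peak])."""
    mse = np.mean((ref - est) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

gt = np.zeros((8, 8))
est = gt + 0.1                   # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(gt, est), 1))   # 20.0
```

Averaging this score over the 12 synthetic images, separately for the background and rain streak layers, yields the per-layer PSNR averages reported in the table.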

Fig. 12: Image recognition results on the images before and after rain streak removal. Before: labeled as rain; after: labeled as vehicle.
Fig. 11: Rain streak removal on raw data using our method (left: input; right: output).

rain streaks (Figure 9). Our method provides more favorable results by effectively removing the rain streaks and meanwhile retaining image details.

Results on raw data
In the previous results, we do not assume anything about the camera response function, which is the common practice in the field of rain removal. According to our rain model in Eq. (1), as long as the input image is the linear combination of the rain layer and the background layer, our proposed method should work properly. However, one might argue that Eq. (1) assumes linearity of the camera response function. To address this concern, we also tested our method on raw images that have a linear camera response function. Figure 11 shows that our method works effectively under such a response function.

Application in computer vision
Besides visually pleasing output, our method is also helpful in computer vision tasks, such as image recognition. Figure 12 (left) shows an input that is supposed to be a car on the road, but with rain streaks present. This is a typical example of images taken outside on a rainy day. We use Clarifai, an advanced image recognition system based on a deep convolutional network.¹ This system classifies the input image as rain. After the rain streak removal using our method, the output in Figure 12 (right) is correctly labeled as vehicle.

V. CONCLUSION

We have introduced a novel approach for solving the decomposition problem of a background scene and rain streaks using a single image. Unlike existing methods, we impose constraints on both the background and rain layers. These constraints are simple Gaussian mixture models (GMMs) learned from image patches. Based on our experiments, we

¹ https://www.clarifai.com/
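The linearity argument in the raw-data discussion above can be made concrete: the additive model of Eq. (1) holds for linear (raw) intensities, and a nonlinear camera response breaks the additivity. The toy gamma-type response below is our own assumption for illustration, not a measured CRF:

```python
crf = lambda x: x ** (1 / 2.2)   # toy gamma-type camera response

B, R = 0.36, 0.04                # linear background and rain irradiance
linear_obs = B + R               # Eq. (1) holds in linear/raw space
nonlinear_obs = crf(B + R)       # what a gamma-encoded camera records

# After the CRF, the observation is no longer the sum of the two
# encoded layers, so the additive model only holds for linear data.
print(abs(nonlinear_obs - (crf(B) + crf(R))) > 0.1)
```

This is why testing on raw images, where the response is linear by construction, is a meaningful sanity check for the decomposition model.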
Fig. 9: Visual comparisons of different rain streak removal methods on real images in [8]. Columns: Input, SR, LRA, DSC, Ours.

showed that these two constraints prove to be more effective than methods based on sparse dictionary learning and low rank constraints.

Our constraint on the rain layer is particularly interesting, as rain streaks have special appearances and structures. A GMM can effectively capture a considerably narrower distribution to describe rain streaks and distinguish them from the wider range of textures of the background layer. We have verified that this GMM prior for rain streaks is a critical part of the decomposition. Without the GMM priors, the estimated background is much more blurred, and the rain layer contains too many high-frequency textures from the background layer. Our proposed method not only is simple and effective but also does not assume the rain streak orientations, sizes, or scales. It clearly demonstrates the usefulness of the GMM priors to the decomposition framework, which is a step forward in addressing rain streak removal.

In addition, we have proposed a background-structure-residue recovery step to retrieve the incorrectly assigned background details in the rain layers. This step can make the rain streak layer cleaner and also further improve the separation quality. Experimental results have shown that our method quantitatively and qualitatively outperforms existing rain streak removal methods in terms of both background recovery and rain streak recovery.

Fig. 10: Visual comparison of different rain streak removal methods on more real example images. Columns: Input, SR, LRA, DSC, Ours.

REFERENCES

[1] L.-W. Kang, C.-W. Lin, and Y.-H. Fu, "Automatic single-image-based rain streaks removal via image decomposition," IEEE Trans. Image Processing, vol. 21, no. 4, pp. 1742-1755, 2012.
[2] Y.-L. Chen and C.-T. Hsu, "A generalized low-rank appearance model for spatio-temporally correlated rain streaks," in IEEE Int'l Conf. Computer Vision, 2013.
[3] D. Zoran and Y. Weiss, "From learning models of natural image patches to whole image restoration," in IEEE Int'l Conf. Computer Vision, 2011.
[4] S. Roth and M. J. Black, "Fields of experts: A framework for learning image priors," in IEEE Conf. Computer Vision and Pattern Recognition, 2005.
[5] Y. Weiss and W. T. Freeman, "What makes a good model of natural images?" in IEEE Conf. Computer Vision and Pattern Recognition, 2007.
[6] Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman, "Reflection removal using ghosting cues," in IEEE Conf. Computer Vision and Pattern Recognition, 2015.
[7] Y. Li and M. S. Brown, "Exploiting reflection change for automatic reflection removal," in IEEE Int'l Conf. Computer Vision, 2013.
[8] Y. Luo, Y. Xu, and H. Ji, "Removing rain from a single image via discriminative sparse coding," in IEEE Int'l Conf. Computer Vision, 2015.
[9] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, "Rain streak removal using layer priors," in IEEE Conf. Computer Vision and Pattern Recognition, 2016.
[10] R. Fattal, "Single image dehazing," ACM Trans. Graphics, vol. 27, no. 3, p. 72, 2008.
[11] R. T. Tan, "Visibility in bad weather from a single image," in IEEE Conf. Computer Vision and Pattern Recognition, 2008.
[12] Y. Li, R. T. Tan, and M. S. Brown, "Nighttime haze removal with glow and multiple light colors," in IEEE Int'l Conf. Computer Vision, 2015.
[13] K. Garg and S. K. Nayar, "Vision and rain," Int'l. J. Computer Vision, vol. 75, no. 1, pp. 3-27, 2007.
[14] ——, "Detection and removal of rain from videos," in IEEE Conf. Computer Vision and Pattern Recognition, 2004.
[15] ——, "When does a camera see rain?" in IEEE Int'l Conf. Computer Vision, 2005.
[16] X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, "Rain removal in video by combining temporal and chromatic properties," in IEEE Int'l Conf. Multimedia and Expo, 2006.
[17] J. Bossu, N. Hautière, and J.-P. Tarel, "Rain or snow detection in image sequences through use of a histogram of orientation of streaks," Int'l. J. Computer Vision, vol. 93, no. 3, pp. 348-367, 2011.
[18] P. C. Barnum, S. Narasimhan, and T. Kanade, "Analysis of rain and snow in frequency space," Int'l. J. Computer Vision, vol. 86, no. 2-3, pp. 256-274, 2010.
[19] V. Santhaseelan and V. K. Asari, "Utilizing local phase information to remove rain from video," Int'l. J. Computer Vision, vol. 112, no. 1, pp. 71-89, 2015.
[20] J. H. Kim, J. Y. Sim, and C. S. Kim, "Video deraining and desnowing using temporal correlation and low-rank matrix completion," IEEE Trans. Image Processing, vol. 24, no. 9, pp. 2658-2670, 2015.
[21] T. H. Oh, Y. Matsushita, Y. W. Tai, and I. S. Kweon, "Fast randomized singular value thresholding for nuclear norm minimization," in IEEE Conf. Computer Vision and Pattern Recognition, 2015.
[22] A. K. Tripathi and S. Mukhopadhyay, "Removal of rain from videos: a review," Signal, Image and Video Processing, vol. 8, no. 8, pp. 1421-1430, 2014.
[23] D.-A. Huang, L.-W. Kang, Y.-C. F. Wang, and C.-W. Lin, "Self-learning based image decomposition with applications to single image denoising," IEEE Trans. Multimedia, vol. 16, no. 1, pp. 83-93, 2014.
[24] S.-H. Sun, S.-P. Fan, and Y.-C. F. Wang, "Exploiting image structural similarity for single image rain removal," in IEEE Int'l Conf. Image Processing, 2014.
[33] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization," ACM Trans. on Mathematical Software, vol. 23, no. 4, pp. 550-560, 1997.
[34] Z. Hu and M.-H. Yang, "Good regions to deblur," in European Conf. Computer Vision, 2012.
[35] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in IEEE Int'l Conf. Computer Vision, 2013.
[36] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graph., vol. 27, no. 3, 2008.
[37] D. Min, S. Choi, J. Lu, B. Ham, K. Sohn, and M. N. Do, "Fast global image smoothing based on weighted least squares," IEEE Trans. Image Processing, vol. 23, no. 12, pp. 5638-5653, 2014.
[38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[39] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based
pp. 4482–4486. Processing,
noise removal vol.algorithms,”
23, no. 12,Physica
pp. 5638–5653,
D: Nonlinear 2014.Phenomena, vol. 60,
[25] IEEE Trans.C.Multimedia,
J.-H. Kim, vol. 16,
Lee, J.-Y. Sim, andno. 1, pp.
C.-S. Kim, 83–93, 2014.
“Single-image deraining [38] Z. Wang,
no. A. C. Bovik,
1, pp. 259–268, 1992.H. R. Sheikh, and E. P. Simoncelli, “Image
[24] S.-H.
using Sun, S.-P. Fan,
an adaptive and Y.-C.
nonlocal F. Wang,
means filter,” “Exploiting
in IEEE Int’l image structural
Conf. Image quality
similarity for single image rain removal,” in IEEE Int’l Conf. Image [40] K. He, J.assessment:
Sun, and X.from Tang,error visibility
“Guided imagetofiltering,”
structuralinsimilarity,” IEEE
European Conf.
Processing, 2013. Trans. Image
Computer Processing,
Vision, 2010. vol. 13, no. 4, pp. 600–612, 2004.
Processing, 2014, pp. 4482–4486. [39] L. Garg
I. Rudin, S.K. Osher,
[26] D. Eigen, D. Krishnan, and R. Fergus, “Restoring an image taken [41] K. and S. Nayar,and E. Fatemi, “Nonlinear
“Photorealistic rendering oftotal
rainvariation based
streaks,” ACM
[25] J.-H. Kim, a C. Lee, J.-Y. Sim, with
and C.-S. Kim, “Single-image deraining noise removal algorithms,”
through window covered dirt or rain,” in IEEE Int’l Conf. Trans. Graphics, vol. 25, no.Physica D: Nonlinear
3, pp. 996–1002, Phenomena, vol. 60,
2006.
using
Computeran adaptive
Vision, 2013.nonlocal means filter,” in IEEE Int’l Conf. Image no. Martin,
1, pp. 259–268, 1992.D. Tal, and J. Malik, “A database of human
[42] D. C. Fowlkes,
[27] Processing,
H. Kurihata,2013. T. Takahashi, I. Ide, Y. Mekada, H. Murase, Y. Tamatsu, [40] K. He, J. Sun, and images
X. Tang,and“Guided image filtering,” in European Conf.
segmented natural its application to evaluating segmentation
[26] D.
andEigen, D. Krishnan,
T. Miyahara, “Rainy and R. Fergus,
weather “Restoring
recognition an imagecamera
from in-vehicle taken Computer Vision,
algorithms 2010.
and measuring ecological statistics,” in IEEE Int’l Conf.
through a window covered with
images for driver assistance,” in IEEEdirt Intelligent
or rain,” in IEEE Symposium,
Vehicles Int’l Conf. [41] K. Garg and
Computer S. K. 2001.
Vision, Nayar, “Photorealistic rendering of rain streaks,” ACM
Computer
2005. Vision, 2013. Trans. Graphics, vol. 25, no. 3, pp. 996–1002, 2006.
[27]
[28] H.
M. Kurihata, T. Takahashi,
Roser, J. Kurz, I. Ide, “Realistic
and A. Geiger, Y. Mekada, H. Murase,
modeling Y. Tamatsu,
of water droplets [42] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human
and T. Miyahara,
for monocular “Rainyraindrop
adherent weather recognition
recognition using
from bezier
in-vehicle camera
curves,” in segmented natural images and its application to evaluating segmentation
images for driver
Asian Conf. Computer assistance,”
Vision, in IEEE Intelligent Vehicles Symposium,
2011. algorithms and measuring ecological statistics,” in IEEE Int’l Conf.
[29] 2005.
S. You, R. T. Tan, R. Kawakami, and K. Ikeuchi, “Adherent raindrop Computer Vision, 2001.
[28] M. Roser, and
detection J. Kurz,
removaland A.
in Geiger,
video,” “Realistic
in IEEE Conf. modeling of water
Computer droplets
Vision and
for monocular
Pattern adherent
Recognition, raindrop recognition using bezier curves,” in
2013.
[30] Asian
——, Conf. Computer
“Adherent Vision,
raindrop 2011. detection and removal in video,”
modeling,
[29] S.
IEEEYou,Trans.
R. T.Pattern
Tan, R.Analysis
Kawakami, and K. Ikeuchi,
and Machine “Adherent
Intelligence, vol. PP,raindrop
no. 99,
detection
p. 1. and removal in video,” in IEEE Conf. Computer Vision and
[31] Pattern
Y. Li and Recognition,
M. S. Brown, 2013.“Single image layer separation using relative
[30] ——, “Adherent
smoothness,” raindrop
in IEEE Conf.modeling,
Computerdetection
Vision and andPattern
removal in video,”
Recognition,
IEEE
2014. Trans. Pattern Analysis and Machine Intelligence, vol. PP, no. 99,
[32] p.
D.1.Geman and C. Yang, “Nonlinear image recovery with half-quadratic
[31] Y. Li and M. S.IEEE
regularization,” Brown, “Single
Trans. Image image layer separation
Processing, vol. 4, no.using
7, pp.relative
932–
smoothness,”
946, 1995. in IEEE Conf. Computer Vision and Pattern Recognition,
[33] 2014.
C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, “Algorithm 778: L-bfgs-
[32] D.
b: Geman
Fortran and C. Yang,for
subroutines “Nonlinear
large-scaleimage recovery with optimization,”
bound-constrained half-quadratic

Yu Li received his Ph.D. degree from the National University of Singapore in 2015. He is now with the Advanced Digital Sciences Center, a research center founded by the University of Illinois at Urbana-Champaign (UIUC) and the Agency for Science, Technology and Research (A*STAR), Singapore. His research interests include computer vision, computational photography, and computer graphics.

Michael S. Brown obtained his B.S. and Ph.D. in Computer Science from the University of Kentucky in 1995 and 2001, respectively. He is a professor at York University in the Lassonde School of Engineering. Dr. Brown’s research interests include computer vision, image processing, and computer graphics.

Robby T. Tan received the Ph.D. degree in computer science from the University of Tokyo. He is now an assistant professor at both Yale-NUS College and the Department of Electrical and Computer Engineering (ECE), National University of Singapore. Previously, he was an assistant professor at Utrecht University. His research interests include machine learning and computer vision, particularly in dealing with bad weather, physics-based analysis, and motion analysis. He is a member of the IEEE.

Xiaojie Guo received the B.E. degree in software engineering from the School of Computer Science and Technology, Wuhan University of Technology, Wuhan, China, in 2008, and the M.S. and Ph.D. degrees in computer science from the School of Computer Science and Technology, Tianjin University, Tianjin, China, in 2010 and 2013, respectively. He is currently an Associate Professor with the Institute of Information Engineering, Chinese Academy of Sciences. He was a recipient of the Piero Zamperoni Best Student Paper Award at the International Conference on Pattern Recognition (International Association for Pattern Recognition) in 2010.

Jiangbo Lu received his B.S. and M.S. degrees in electrical engineering from Zhejiang University, Hangzhou, China, in 2000 and 2003, respectively, and the Ph.D. degree in electrical engineering from Katholieke Universiteit Leuven, Leuven, Belgium, in 2009. From 2009 to 2016, he was with the Advanced Digital Sciences Center (ADSC), Singapore, a joint research center between the University of Illinois at Urbana-Champaign (UIUC), USA, and the Agency for Science, Technology and Research (A*STAR), Singapore, where he led several research projects as a Senior Research Scientist. In 2017, he joined Shenzhen Cloudream Technology Co., Ltd. as the Chief Technology Officer (CTO), where he leads R&D departments working on cutting-edge problems broadly in computer vision, computer graphics, and machine learning centered on perceiving and understanding humans. His research interests include computer vision, visual computing, robotic vision, and computational imaging. Dr. Lu served as an Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) from 2012 to 2016. He received the 2012 TCSVT Best Associate Editor Award.
