
Digital Signal Processing 116 (2021) 103142


SLOPE: A monotonic algorithm to design sequences with good autocorrelation properties by minimizing the peak sidelobe level

R. Jyothi *, Prabhu Babu, Mohammad Alaee-Kerahroodi (1)

* Corresponding author. E-mail address: jyothi.r@care.iitd.ac.in (R. Jyothi).
(1) The work of Mohammad Alaee-Kerahroodi was supported by SPRINGER C18/IS/12734677.

Article history: Available online 15 June 2021

Keywords: Peak sidelobe level minimization; Unimodular sequence; Minimax problem; RADAR; SONAR

Abstract: Sequences with low autocorrelation sidelobes have applications in various fields such as wireless communications, radar, sonar, and cryptography, to name a few. In this paper, we propose an approach to construct sequences, in particular unimodular sequences, by directly minimizing the peak sidelobe level (PSL) metric. The underlying optimization problem is a minimax problem which is in general difficult to tackle. We address this issue and propose an iterative algorithm named Sequence with LOw Peak sidelobE level (SLOPE), based on the technique of Majorization-Minimization (MM), which can be implemented efficiently using the Fast Fourier Transform (FFT). Further, we also discuss the extension of SLOPE to incorporate energy, peak-to-average-power ratio (PAPR) and spectral constraints on the sequence. We show through numerical simulations that the proposed algorithm can generate sequences of considerably longer lengths with lower peak sidelobe level when compared to the state-of-the-art algorithms, and in the end we also evaluate the performance of the sequences designed via SLOPE in the context of a channel estimation application.

© 2021 Elsevier Inc. All rights reserved.

1. Introduction and related work

Sequences with low autocorrelation sidelobes are commonly used in the fields of wireless communications [1], radar/sonar [2] and cryptography-based security systems. In radar/sonar systems, sequences with low autocorrelation sidelobes can improve the detection of weak targets ([3], [4]) by enhancing the resolution capabilities of the radar/sonar receivers. In wireless communication systems, the estimates of the channel taps highly depend on the autocorrelation property of the pilot sequence employed. In addition to designing a sequence with low autocorrelation sidelobes, due to hardware constraints such as the maximum amplitude clip of analog-to-digital converters and power amplifiers, it is desirable to design sequences with unimodular constraints [5]. Further, due to a limited budget, one may also need to design sequences with some additional constraints, such as spectral constraints, fixed energy and/or a lower peak-to-average-power ratio (PAPR) ([6], [7]), which is defined as the ratio of the sequence's largest magnitude to its average power.

Typically, the goodness of the autocorrelation sidelobes of a sequence of length N, denoted by x \in C^{N \times 1}, is evaluated using the Integrated Sidelobe Level (ISL) and Peak Sidelobe Level (PSL) metrics [5], which are defined as follows:

ISL = \sum_{k=1}^{N-1} |r(k)|^2    (1)

PSL = \max_{k=1,2,\cdots,N-1} \{|r(k)|^2\}    (2)

where N is the sequence length and r(k) is the aperiodic autocorrelation of the sequence, defined as:

r(k) = \sum_{i=1}^{N-k} x_{i+k} x_i^* = r^*(-k), \quad k = 0,\cdots,N-1    (3)

where x_i is the i-th element of the sequence x. The above metrics can also be viewed as ℓ_p norms of r(k); in particular, ISL is the ℓ_2 norm of r(k) and PSL is the ℓ_∞ norm of r(k).
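To make these definitions concrete, the following short Python sketch (our illustration, not part of the original paper; the helper names are ours) computes the aperiodic autocorrelation of a sequence with a zero-padded FFT and evaluates the ISL and PSL metrics of (1)-(2):

```python
import numpy as np

def aperiodic_autocorrelation(x):
    """r(k) = sum_i x_{i+k} x_i^* for k = 0,...,N-1, computed via a
    zero-padded FFT (linear, not circular, correlation)."""
    N = len(x)
    X = np.fft.fft(x, 2 * N)                  # zero-pad to length 2N
    return np.fft.ifft(np.abs(X) ** 2)[:N]    # r(0), r(1), ..., r(N-1)

def isl_psl(x):
    """ISL = sum_{k>=1} |r(k)|^2 and PSL = max_{k>=1} |r(k)|^2, cf. (1)-(2)."""
    sidelobes = np.abs(aperiodic_autocorrelation(x)[1:]) ** 2
    return sidelobes.sum(), sidelobes.max()

# Example on a random unimodular sequence (the initialization used later in (56)):
rng = np.random.default_rng(0)
x = np.exp(1j * 2 * np.pi * rng.uniform(size=64))
isl, psl = isl_psl(x)
print(f"ISL = {isl:.2f}, PSL = {psl:.2f}")    # note PSL <= ISL, cf. (4)
```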
Given the importance of sequences with low autocorrelation sidelobes, a plethora of methods have been developed to design such sequences. The binary Barker sequence [8] is a well known example of such a sequence, with a PSL value not greater than one; however, these sequences do not exist for lengths N greater than 13. The authors in [9] worked on generalized Barker sequences, which have the same maximum PSL as the binary Barker sequence; however, the terms x_i were now allowed to take complex values with unit magnitude. Several researchers have worked on generalized Barker sequences, and the longest Barker sequence constructed so far is of length 77 ([10-12]).

Apart from Barker sequences, several other well known sequences, which are mainly analytical, are the Chu sequence [13] and the Frank sequence [14]. Even though these sequences have closed-form constructions, they exist only for certain fixed lengths.

Recently, several numerical-optimization-based approaches have been proposed to design longer sequences with lower autocorrelation sidelobes. The authors in [15] proposed the Cyclic Algorithm New (CAN), which is capable of generating sequences of length N = 10^6 or even larger with low ISL and can be implemented efficiently using the Fast Fourier Transform (FFT); however, instead of minimizing the ISL metric, CAN minimizes a close approximation of the ISL. The Monotonic minimizer for Integrated Sidelobe Level (MISL) reported in [16] generates sequences by directly minimizing the ISL metric and, similar to the CAN algorithm, can also be implemented efficiently using the FFT. Many other algorithms, such as MM-Corr [17], ISL-New [18], WeCAN [15], ADMM based approaches [19], [20] and a projection based method [21], were proposed in the literature to generate sequences that minimize the ISL metric. On the contrary, only a few algorithms focus on designing sequences that minimize the PSL metric. We feel that designing sequences by minimizing the PSL is as important as designing them by ISL minimization, as a design based on the PSL problem would help yield sequences with equal sidelobe levels (which are also minimal). Such equisidelobe sequences would be of immense use in wireless and radar/sonar applications, as the equal sidelobe level would avoid leakage from any stronger targets (in the case of radar/sonar) or stronger multipaths (in the case of channel estimation), which may otherwise be mistaken for an additional target or multipath. We feel the scarcity of methods solving the PSL problem is mainly due to the challenging nature of the PSL minimization problem, as it involves solving a minimax optimization problem. Adding to the argument for the importance of solving the PSL problem, one can show that sequences with a lower PSL metric will automatically have a lower ISL metric, as the PSL is a natural lower bound on the ISL, as shown below:

PSL = \max_{k=1,2,\cdots,N-1} \{|r(k)|^2\} \leq \sum_{k=1}^{N-1} |r(k)|^2 = ISL    (4)

Hence, even though challenging, it is desirable to generate sequences which minimize the PSL metric. Some of the state-of-the-art algorithms which try to minimize the PSL metric are the MM-PSL algorithm [22], the coordinate descent based approach named CPM in [23], and an extension of a CAN-like approach for the PSL cost function [24]. Since the PSL corresponds to the ℓ_∞ norm of r(k), and the ℓ_∞ norm is better approximated by the ℓ_p norm with a large value of p, MM-PSL minimizes the ℓ_p norm of r(k) for p ≥ 2 by employing the majorization-minimization (MM) technique, in which the ℓ_p norm objective is upper bounded in terms of an ℓ_2 norm and a linear term, and the upper bound is then minimized. The authors in [22] also proposed a variant of the MM-PSL algorithm named MM-PSL-adaptive, wherein the MM-PSL algorithm is made to run for different values of p (2^2, 2^3, ···, 2^13), and for each value of p the algorithm is initialized with the solution obtained from solving the ℓ_p norm problem for the previous p value. It has been shown in the simulation section of [22] that the MM-PSL-adaptive algorithm performs better than the MM-PSL algorithm with a fixed value of p. However, MM-PSL and its adaptive variant optimize only the ℓ_p norm approximation of the PSL and do not minimize the actual PSL metric. Recently, in [23] the authors developed an algorithm using the coordinate descent framework which tries to minimize the PSL metric; it involves minimizing a fractional quartic polynomial using the bisection method at every iteration. In [24], the authors approximated the PSL objective by an auxiliary function with an additional semi-unitary variable; this approach is similar to the algorithmic development of CAN [15]. The algorithm proposed in [24] alternates between updating the sequence and the additional variable. The authors in [25] minimize a unified metric using the MM approach to design sequences with low autocorrelation sidelobes. This unified metric includes the ISL and an approximated PSL metric as special cases. However, it is found that the algorithm developed to minimize the approximated PSL metric (corresponding to the choice of w_k = 1 in the unified metric) is just the same as the MM-PSL-adaptive algorithm.

In this paper, we propose an algorithm named Sequence with LOw Peak sidelobE level (SLOPE), which is based on the principle of MM. SLOPE is iterative in nature, and in each iteration of the algorithm the generated sequence monotonically decreases the PSL metric. The major contributions of our work are as follows:

1) An MM based algorithm named SLOPE is proposed to generate sequences with low peak sidelobe level by directly minimizing the PSL metric subject to a unimodular constraint. A computationally efficient scheme based on the FFT to implement SLOPE is also discussed, as is the extension of SLOPE to handle constraints other than the unimodular constraint, such as the energy constraint and the PAPR constraint.
2) We prove that the sequence of iterates generated by SLOPE converges to a stationary point of the PSL problem, and we also propose a way to accelerate the convergence of SLOPE.
3) Numerical simulations are conducted to compare the performance of the proposed algorithm with the state-of-the-art algorithms. Further, we evaluate the performance of the sequences generated using SLOPE in the context of a channel estimation application.

The organization of the paper is as follows. We formulate the problem of designing a sequence by minimizing the peak sidelobe level, and also give a brief overview of MM, which is central to our algorithmic development, in Section 2. In Section 3 we propose our SLOPE algorithm, discuss the extensions to handle different constraints on the sequence, and conclude the section with a discussion on convergence and computational complexity; Section 3 also discusses a way to accelerate the convergence of SLOPE. In Section 4 we compare the proposed algorithm with the state-of-the-art algorithms, and we conclude the paper in Section 5. Throughout the paper, bold capital and bold small letters denote matrices and vectors, respectively. A scalar is denoted by a small letter. The value taken by x at the t-th iteration is denoted by x^t. Superscripts (·)^*, (·)^T and (·)^H denote the complex conjugate, transpose, and conjugate transpose, respectively. The trace of a matrix X is denoted Tr(X). The symbol ⊙ represents the Hadamard product. The vector vec(X) is constructed by stacking the columns of X, and λ_max(X) denotes the maximum eigenvalue of X. The Euclidean norm of a vector x is denoted by ‖x‖_2, and |x| denotes the elementwise absolute value of the vector x.
2. Problem formulation, MM and MM for minimax

We start with the problem of minimizing the PSL metric, i.e.,

\min_x \max_k |r(k)|^2
subject to |x_i| = 1, \quad i = 1,2,\cdots,N    (5)


Note that the constraint |x_i| = 1 defines a unimodular sequence; later we will also discuss sequences with different constraints. For later convenience, we write the following equivalent of (5):

\min_x \max_k |r(k)|^2 + |r^*(k)|^2
subject to |x_i| = 1, \quad i = 1,2,\cdots,N    (6)

We now rewrite the objective function in (6) by defining r(k) = x^H A_k x, where

A_k = \begin{cases} 1, & j - i = k \\ 0, & \text{else} \end{cases}    (7)

with (i, j) the row and column indexes of the Toeplitz matrix A_k. For instance, A_1 would look like:

A_1 = \begin{pmatrix} 0 & 1 & \cdots & 0 & 0 \\ 0 & 0 & \ddots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}
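As a quick numerical sanity check of (7) (our own illustration, not from the paper), one can build A_k and verify that the quadratic form x^H A_k x reproduces the autocorrelation lag r(k) of (3):

```python
import numpy as np

def shift_matrix(N, k):
    """A_k from (7): ones on the k-th superdiagonal (j - i = k), zeros elsewhere."""
    return np.eye(N, k=k)

rng = np.random.default_rng(1)
N = 8
x = np.exp(1j * 2 * np.pi * rng.uniform(size=N))

for k in range(1, N):
    Ak = shift_matrix(N, k)
    quad = np.vdot(x, Ak @ x)                     # x^H A_k x
    r_k = np.sum(x[k:] * np.conj(x[:N - k]))      # r(k) from (3)
    assert np.allclose(quad, r_k)
print("x^H A_k x matches r(k) for all k")
```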

Hence, the problem in (6) becomes equivalent to:

\min_x \max_k \left\{ f_{PSL}(x) = |x^H A_k x|^2 + |x^H A_k^H x|^2 \right\}
subject to |x_i| = 1, \quad i = 1,2,\cdots,N    (8)

Note that the objective in (8) is quartic in x, and we further expand the objective function f_{PSL}(x) in (8) as:

|x^H A_k x|^2 + |x^H A_k^H x|^2 = Tr(A_k X)\,Tr(A_k^H X) + Tr(X A_k^H)\,Tr(A_k X),    (9)

where X = xx^H. Since Tr(A_k X) = vec^H(A_k)\,vec(X), the problem in (8) can be rewritten as:

\min_{X,x} \max_k \left\{ f_{PSL}(x) = vec^H(X)\,\Phi_k\,vec(X) \right\}
subject to |x_i| = 1, \quad i = 1,2,\cdots,N
X = xx^H,    (10)

where \Phi_k = vec(A_k)\,vec^H(A_k^H) + vec(A_k^H)\,vec^H(A_k). Needless to say, the problem in (10) is non-convex and in general very difficult to tackle. In the following section we devise an iterative algorithm based on the Majorization-Minimization principle for (10); before that, we briefly discuss general MM and the problem-specific MM, which in this case is MM for minimax problems, in the next subsections.
2.1. General MM

The original MM algorithm was proposed by De Leeuw [26], who devised an iterative algorithm based on MM for the multidimensional scaling problem. Since then, numerous researchers have realized its potential and adopted the technique to solve various problems in the fields of signal processing and statistics ([27], [28]), and also in recent hot fields like machine learning. The popularity of MM is mainly due to its adaptability to problems of different natures; moreover, it is not restricted to minimization (or maximization) problems, and one can easily extend it to solve minimax problems, which we discuss in the next subsection. Consider the following optimization problem:

\min_{x \in \chi} f(x)    (11)

where f(x) could be a nonlinear function in general and \chi denotes the constraint set. The MM technique solves the above problem by first choosing a surrogate function g(x|x^t) which majorizes the function f(x) at the current iterate x^t; in the subsequent step, the surrogate function is optimized to get the next iterate:

x^{t+1} \in \arg\min_{x \in \chi} g(x|x^t)    (12)

The choice of the surrogate function is based on the following two conditions:

g(x^t|x^t) = f(x^t)    (13)

g(x|x^t) \geq f(x).    (14)

The first condition implies tangency of the surrogate to the original objective, and the second condition implies the upper-bound nature of the surrogate function. From (12), (13) and (14), one can show that the sequence of points {x^t} generated via the MM scheme monotonically decreases the objective function:

f(x^{t+1}) \leq g(x^{t+1}|x^t) \leq g(x^t|x^t) = f(x^t)    (15)

The complexity and convergence rate of MM based algorithms depend on the nature of the surrogate function g(x|x^t). A broad overview and summary of different ways to construct the surrogate function can be found in [27], [28].
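The following toy sketch (ours, not from the paper) illustrates the MM recipe (12)-(15) on a one-dimensional problem with a made-up objective: f(x) = x^2/2 + cos(x) has curvature f''(x) = 1 - cos(x) ≤ M = 2, so the quadratic upper bound with curvature M is a valid surrogate and its minimizer gives the update x_{t+1} = x_t - f'(x_t)/M:

```python
import numpy as np

# Toy MM illustration: minimize f(x) = x^2/2 + cos(x). The majorizer
#   g(x|xt) = f(xt) + f'(xt)(x - xt) + (M/2)(x - xt)^2,  M = 2,
# satisfies the conditions (13)-(14), and minimizing it gives the update.
f = lambda x: 0.5 * x ** 2 + np.cos(x)
fprime = lambda x: x - np.sin(x)

M, x = 2.0, 2.5
for _ in range(30):
    x_new = x - fprime(x) / M          # minimizer of the quadratic surrogate
    assert f(x_new) <= f(x) + 1e-12    # monotonicity property (15)
    x = x_new
print(x, f(x))                          # converges to the minimizer x* = 0
```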
2.2. MM in the minimax case

Consider the following minimax optimization problem:

\min_{x \in \chi} f(x)    (16)

where f(x) = \max_{i=1,2,\cdots,K} \tilde{f}_i(x). Below we show how one can apply the MM technique to the minimax problem. Contrary to the general case discussed in the last subsection, constructing g(x|x^t) for the minimax problem does not look obvious, but it can be done as follows:

g(x|x^t) = \max_{i=1,2,\cdots,K} \tilde{g}_i(x|x^t)    (17)

where each \tilde{g}_i(x|x^t) is a tight upper bound on the respective \tilde{f}_i(x) at the given x^t; each individual surrogate satisfies the following conditions:

\tilde{g}_i(x^t|x^t) = \tilde{f}_i(x^t)    (18)

\tilde{g}_i(x|x^t) \geq \tilde{f}_i(x)    (19)

One can easily show that the surrogate function g(x|x^t) defined in (17) satisfies conditions (13) and (14) as follows:

g(x^t|x^t) = \max_{i=1,2,\cdots,K} \tilde{g}_i(x^t|x^t) = \max_{i=1,2,\cdots,K} \tilde{f}_i(x^t) = f(x^t)    (20)

and

\tilde{g}_i(x|x^t) \geq \tilde{f}_i(x) \implies \max_{i=1,2,\cdots,K} \tilde{g}_i(x|x^t) \geq \max_{i=1,2,\cdots,K} \tilde{f}_i(x) \implies g(x|x^t) \geq f(x).    (21)

Similar to the general MM case, here also one can show that the series of iterates x^t monotonically decreases the objective f(x).
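To make the minimax recipe (17)-(21) concrete, here is a small self-contained sketch (ours, with made-up component functions): each smooth piece is majorized by a quadratic as in (18)-(19), and the pointwise maximum of the quadratics is minimized, here simply on a grid:

```python
import numpy as np

# Toy minimax-MM: f(x) = max_i f_i(x), f_i(x) = (x - c_i)^2/2 + cos(x - c_i),
# each with curvature bounded by M = 2, hence majorized by a quadratic.
c = np.array([-1.0, 2.0])

def f_i(x, ci):
    return 0.5 * (x - ci) ** 2 + np.cos(x - ci)

def g_i(x, xt, ci, M=2.0):
    grad = (xt - ci) - np.sin(xt - ci)
    return f_i(xt, ci) + grad * (x - xt) + 0.5 * M * (x - xt) ** 2

f = lambda x: max(f_i(x, ci) for ci in c)
grid = np.linspace(-5.0, 5.0, 20001)

x = 4.0
for _ in range(50):
    surrogate = np.maximum(g_i(grid, x, c[0]), g_i(grid, x, c[1]))  # (17)
    x_new = grid[np.argmin(surrogate)]       # minimize the max-surrogate
    assert f(x_new) <= f(x) + 1e-9           # monotone decrease of the max
    x = x_new
print(x, f(x))
```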


3. Monotonic minimizer for the PSL metric

In this section, we propose our algorithm named Sequence with LOw Peak sidelobE level (SLOPE). In the algorithmic development we discuss the construction of unimodular sequences; later, in a separate subsection, we show how to deal with the energy and PAPR constraints. We also discuss the convergence and computational complexity of SLOPE, and an accelerated version of SLOPE is discussed at the end of this section.

3.1. Sequence with LOw Peak sidelobE level (SLOPE)

Though the original PSL objective f_{PSL}(x) is quartic in the variable x, the PSL function in (10) is quadratic and twice differentiable in X, which can be exploited to construct an upper bound for the PSL metric. To achieve this, we discuss the following Lemma, which will be helpful in constructing an upper bound for f_{PSL}(x). Before proceeding to derive the upper bound, we express the cost function in (10) in a fashion similar to (16):

\min_{X,x} \left\{ f(X) = \max_{k=1,2,\cdots,N-1} \tilde{f}_k(X) \right\}
subject to |x_i| = 1, \quad i = 1,2,\cdots,N
X = xx^H,    (22)

where each \tilde{f}_k(X) equals vec^H(X)\,\Phi_k\,vec(X). This formulation will allow us to use the MM procedure described in Subsection 2.2. With this, we start with the following Lemma.

Lemma 3.1. Let f : C^N \to R be a continuous, twice differentiable function. If there exists a matrix M such that M \succeq \nabla^2 f(x), then f(x) can be upper bounded as:

f(x) \leq f(x^t) + \nabla f(x^t)^H (x - x^t) + \frac{1}{2}(x - x^t)^H M (x - x^t)    (23)

where x^t is the value taken by x at the t-th iteration. The upper bound for f(x) is quadratic and differentiable in x.

Proof. The proof is trivial and can be found in standard optimization texts; we include it for the sake of completeness. Suppose there exists a matrix M such that M \succeq \nabla^2 f(x); then the following inequality holds by the second order Taylor expansion:

f(x) \leq f(x^t) + \nabla f(x^t)^H (x - x^t) + \frac{1}{2}(x - x^t)^H M (x - x^t)    (24)

and equality is achieved at x = x^t. □

Using the above Lemma, we can majorize each \tilde{f}_k(X) in the objective function of (22) at any given X^t (which of course is obtained from a given x^t as X^t = x^t (x^t)^H) by the following surrogate function \tilde{g}_k(X|X^t):

\tilde{g}_k(X|X^t) = -vec^H(X^t)(\Phi_k - M_k)vec(X^t) + 2Re\left(vec^H(X^t)(\Phi_k - M_k)vec(X)\right) + vec^H(X)\,M_k\,vec(X)    (25)

where we have chosen M_k = \lambda_{max}(\Phi_k) I_{N^2}. Given the special structure of \Phi_k, the maximum eigenvalue of \Phi_k can be derived analytically, as in the following Lemma:

Lemma 3.2. The maximum eigenvalue of the N^2 \times N^2 matrix \Phi_k = vec(A_k)\,vec^H(A_k^H) + vec(A_k^H)\,vec^H(A_k) is equal to N - k.

Proof. Let q_k = vec(A_k) and s_k = vec(A_k^H); then \Phi_k = Q_k + Q_k^H = q_k s_k^H + s_k q_k^H. We have Tr(Q_k Q_k^H) = \|s_k\|_2^2 \|q_k\|_2^2 = (N-k)^2, since q_k and s_k are the vectors formed from the matrix A_k, which has exactly (N-k) elements equal to one and the rest zero. Also, the traces of the matrices Q_k and Q_k^2 are equal to zero, since A_k always has zeros along its diagonal. Since \Phi_k is the sum of two rank-one matrices, its rank is at most two. Let \lambda_1 and \lambda_2 denote the two non-zero eigenvalues of the matrix \Phi_k. Then, using the above relations, we have:

\lambda_1 + \lambda_2 = Tr(Q_k + Q_k^H) = Tr(Q_k) + Tr(Q_k^H) = 0    (26)

\lambda_1^2 + \lambda_2^2 = Tr\left((Q_k + Q_k^H)^2\right) = 2\,Tr(Q_k Q_k^H) = 2(N-k)^2    (27)

Using (26) and (27) we get:

\lambda_1 \lambda_2 = \frac{1}{2}\left[(\lambda_1 + \lambda_2)^2 - (\lambda_1^2 + \lambda_2^2)\right] = -(N-k)^2    (28)

Hence, we can find the maximum eigenvalue by solving the following characteristic equation of (Q_k + Q_k^H):

\lambda^2 - (N-k)^2 = 0    (29)

whose solutions are \lambda = \pm(N-k); choosing the maximum gives \lambda_{max}(\Phi_k) = N - k. □
X = xx H ,
Using Lemma 3.2 and exploiting the relation vec^H(X)\,vec(X) = (x^H x)^2 = N^2, the surrogate function \tilde{g}_k(X|X^t) in (25) can be rewritten as:

\tilde{g}_k(X|X^t) = -vec^H(X^t)\,\Phi_k\,vec(X^t) + 2Re\left(vec^H(X^t)\,\Phi_k\,vec(X)\right) - 2(N-k)\,Re\left(vec^H(X^t)\,vec(X)\right) + 2N^2(N-k).    (30)

Now, re-substituting X = xx^H, the surrogate function in terms of x, i.e., \tilde{g}_k(x|x^t), can be obtained:

\tilde{g}_k(x|x^t) = -2|(x^t)^H A_k x^t|^2 + 2(x^H B_k x) - 2(N-k)(x^H x^t (x^t)^H x) + 2N^2(N-k)    (31)

= -2|(x^t)^H A_k x^t|^2 + 2(x^H \hat{B}_k x) + 2\lambda_{max}(B_k) - 2(N-k)(x^H x^t (x^t)^H x) + 2N^2(N-k)    (32)

where B_k = A_k\left((x^t)^H A_k^H x^t\right) + A_k^H\left((x^t)^H A_k x^t\right) = A_k r^*(k) + A_k^H r(k), and note that in (32) we have introduced the matrix \hat{B}_k = B_k - \lambda_{max}(B_k) I_N, the need for which will be explained shortly. The surrogate function in (31) is dominated by the convex quadratic term in x, 2(x^H B_k x), and the resultant surrogate minimization problem (under the unimodular constraint) would be intractable. To circumvent this, we have added and subtracted \lambda_{max}(B_k) in (32), which makes the surrogate function concave in x, so that it can be majorized another time using the following Lemma 3.3. The final surrogate function is linear in x, and the resultant surrogate minimization is tractable.

Lemma 3.3. Given any x = x^t, a concave function f(x) can be upper bounded as:

f(x) \leq f(x^t) + \nabla f(x^t)^H (x - x^t)    (33)

The upper bound for f(x) is linear in x.

Proof. Since f(x) is concave, linearizing it around x^t using the first order Taylor series gives the above inequality. □


Calculating the maximum eigenvalue of B_k, of size N × N, can be computationally expensive for large N; to deal with this issue, the following Lemma provides a computationally efficient way to obtain an upper bound on \lambda_{max}(B_k).

Lemma 3.4. Let B be an N × N Hermitian Toeplitz matrix defined by \{b_k\}_{k=0}^{N-1} as:

B = \begin{pmatrix} b_0 & b_1^* & \cdots & b_{N-1}^* \\ b_1 & b_0 & \cdots & b_{N-2}^* \\ \vdots & \vdots & \ddots & \vdots \\ b_{N-1} & b_{N-2} & \cdots & b_0 \end{pmatrix}

Then \lambda_{max}(B) can be upper bounded as:

\lambda_{max}(B) \leq \frac{1}{2}\left( \max_{1 \leq i \leq N} \tilde{z}_{2i} + \max_{1 \leq i \leq N} \tilde{z}_{2i-1} \right) = \lambda_{ub}(B)    (34)

where \tilde{z} = Fb, b = [b_0, b_1, \cdots, b_{N-1}, 0, b_{N-1}^*, \cdots, b_1^*]^T and F is the 2N × 2N FFT matrix.

Proof. See [29]. □
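The bound (34) comes from embedding B in a 2N × 2N circulant; the following sketch (ours) computes λ_ub(B) with one FFT and compares it against the true maximum eigenvalue:

```python
import numpy as np
from scipy.linalg import toeplitz

# Numerical illustration of Lemma 3.4: an FFT-computable upper bound on the
# maximum eigenvalue of a random Hermitian Toeplitz matrix.
rng = np.random.default_rng(2)
N = 32
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b[0] = b[0].real                     # diagonal of a Hermitian matrix is real
B = toeplitz(b)                      # first column b, first row conj(b)

# 2N-point circulant embedding, cf. (34): z = F [b, 0, conj(b_{N-1..1})]
b_ext = np.concatenate([b, [0.0], np.conj(b[-1:0:-1])])
z = np.fft.fft(b_ext).real           # circulant eigenvalues (real-valued)
lam_ub = 0.5 * (z[1::2].max() + z[0::2].max())

lam_max = np.linalg.eigvalsh(B).max()
print(lam_max, lam_ub)               # lam_max <= lam_ub always holds
assert lam_max <= lam_ub + 1e-9
```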
where dkr = Re(dk ) and dkj = Im(dk ). Let z = [xr , x j ] T and d˜k =
[dkr , dkj ] T . Then the above problem becomes:
Using Lemma 3.3 and Lemma 3.4 we again majorize the surro-
gate function g̃k x|xt in (31) to get the following new surrogate min α
z, α
function:  
gk (x|xt ) = −2|( subject to 4 z T d˜k + tk ≤ α , k = 1, .., ( N − 1),
x ) Ak x |
t H t 2 (39)
  
+2 −(xt ) H B̃ k xt + 2Re (xt ) H B̃ k x
(35) z2i + z2i + N ≤ 1, i = 1, 2, · · · , N
+2λub ( B k ) − 2( N − k) −1 + 2Re x H xt
+2N 2 ( N − k), which is equivalent to
 
1 min max 4 z T d˜k + tk
where B̃ k = B k − (λub ( B k ) I N ), λub ( B k ) = max (z̃k )2i + z k
 2 1≤ i ≤ N  (40)
max (z̃k )2i −1 , where z̃k = Fbk , bk = [0k×1 , r (k), 0 N −k×1 , 0, subject to z2i + z2i + N ≤ 1, i = 1, 2, · · · , N
1≤ i ≤ N
0 N −k×1 , r ∗ (k), 0k×1 ] T and F is 2N × 2N FFT matrix. Here we would Rewriting the above maximization problem as a maximization
like to note that since gk (x|xt ) is a tighter surrogate for g̃k x|xt , problem over a simplex variable:
it can be viewed as a direct surrogate to f̃ k (x). Therefore, at any  
iteration, given xt , the final surrogate minimization problem looks max 4 z T d˜k + tk
k
like: −1   

N

min max H
4 Re(x dk ) + tk = max pk 4z T d˜k + tk (41)
x k p≥0,1 T p=1
(36) k =1
subject to | x i | = 1, i = 1, 2, · · · , N = max p t + 4z T D̃p
T
p≥0,1 T p=1
where dk = B̃ k xt − ( N − k)xt and tk = −2 |(xt ) H A k xt |2 −
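Putting these pieces together, the sketch below (ours; a naive per-lag loop rather than the paper's optimized flop count, and it returns the complex d_k rather than the stacked real vector d̃_k) computes r(k) by FFT and then the quantities λ_ub(B_k), d_k and t_k entering (36):

```python
import numpy as np

def slope_quantities(x):
    """r(k) via FFT, then lam_ub(B_k) (Lemma 3.4), d_k and t_k of (36)."""
    N = len(x)
    r = np.fft.ifft(np.abs(np.fft.fft(x, 2 * N)) ** 2)[:N]   # r(0..N-1)
    d, t = [], []
    for k in range(1, N):
        # B_k = A_k r*(k) + A_k^H r(k) is Hermitian Toeplitz with r(k) in
        # position k of its first column; embed it in a 2N circulant.
        b_ext = np.zeros(2 * N, dtype=complex)
        b_ext[k], b_ext[2 * N - k] = r[k], np.conj(r[k])
        z = np.fft.fft(b_ext).real
        lam_ub = 0.5 * (z[1::2].max() + z[0::2].max())
        # A_k x and A_k^H x are just shifts of x (sparse Toeplitz products).
        Akx = np.concatenate([x[k:], np.zeros(k)])          # (A_k x)_i = x_{i+k}
        AkHx = np.concatenate([np.zeros(k), x[:N - k]])     # (A_k^H x)_i = x_{i-k}
        d.append(Akx * np.conj(r[k]) + AkHx * r[k] - (lam_ub + (N - k)) * x)
        t.append(-6 * abs(r[k]) ** 2 + 4 * lam_ub + 4 * N ** 2 * (N - k))
    return np.array(d), np.array(t)

x = np.exp(1j * 2 * np.pi * np.random.default_rng(3).uniform(size=16))
d, t = slope_quantities(x)
print(d.shape, t.shape)        # (N-1, N) and (N-1,)
```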
where t = [t 1 , t 2 · · · t N −1 ] T and D̃ = [d˜1 , d˜2 , · · · , d̃( N −1) ]. Hence the
2((xt ) H B k xt ) + 4λub ( B k ) + 4N 2 ( N − k). Substituting for B k =
Ak r ∗ (k) + AkH r (k), we get dk = Ak r ∗ (k) + AkH r (k) − λub ( B k ) I N xt − problem in (40) becomes:
( N − k)xt = Ak xt r ∗ (k) + AkH xt r (k) − (λub ( B k )I N ) xt − ( N − k)xt . min max p T t + 4z T D̃p
Since Ak is a sparse Toeplitz matrix, the matrix vector product z p≥0,1 T p=1
Ak xt can be calculated efficiently by keeping only the necessary  (42)
entries in xt . Also, x t H
B k xt is nothing but 2|rk |2 and hence
subject to z2i + z2i + N ≤ 1, i = 1, 2, · · · , N
tk = −6|rk | + 4λub ( B k ) + 4N 2 ( N − k). So to calculate dk and tk
2
The objective function in the above problem is bilinear in p and
one needs to calculate the autocorrelation function rk which can z and the constraints are compact convex sets, then by minimax
be calculated efficiently using Fast Fourier Transform (FFT). We theorem [32], we can swap min max to max min without altering
discuss the per iteration complexity of SLOPE later in this section
the solution:
where we would give number of flops needed per iteration to im-
plement SLOPE. max min p T t + 4z T D̃p
p≥0,1 T p=1 z
The surrogate minimization problem in (36) is nonconvex in x
 (43)
due to the presence of the equality constraint. However, one can
subject to z2i + z2i + N ≤ 1, i = 1, 2, · · · , N
relax the equality constraint with the inequality constraint as the
optimal solution of the relaxed problem would lie only on the which is equivalent to
boundary of the relaxed constraint set [30]. The relaxed problem
max h(p) (44)
in epigraph form can be given as: p≥0,1 T p=1


where h(p) is given by:

h(p) = \min_z \; p^T t + 4 z^T \tilde{D} p
subject to \sqrt{z_i^2 + z_{i+N}^2} \leq 1, \quad i = 1,2,\cdots,N.    (45)

We solve the maximization problem in (44) using the Mirror Descent Algorithm (MDA), which we discuss in the next subsection. Once the optimal p is found using MDA, z can be calculated by solving the problem in (45), whose solution is given by [z_i, z_{i+N}]^T = -\tilde{a}_i / \|\tilde{a}_i\|_2, where \tilde{a}_i = [a_i, a_{i+N}]^T for i = 1,2,\cdots,N and a = \tilde{D} p. From z, x^t can easily be recovered. The pseudocode of the proposed algorithm is shown in Table 1.

Table 1: Pseudocode of SLOPE.

Input: Sequence length N.
Initialize: Set t = 0. Initialize x^0.
Repeat:
1) Compute the following for k = 1, 2, \cdots, N-1:
   Compute r(k) using the FFT and form b_k = [0_{k \times 1}, r(k), 0_{(N-k-1) \times 1}, 0, 0_{(N-k-1) \times 1}, r^*(k), 0_{(k-1) \times 1}]^T.
   Compute \tilde{z}_k = F b_k, where F is the 2N × 2N FFT matrix.
   Calculate \lambda_{ub}(B_k) = \frac{1}{2}\left( \max_{1 \leq i \leq N} (\tilde{z}_k)_{2i} + \max_{1 \leq i \leq N} (\tilde{z}_k)_{2i-1} \right).
   Compute d_k = A_k x^t r^*(k) + A_k^H x^t r(k) - \lambda_{ub}(B_k) x^t - (N-k) x^t and \tilde{d}_k = [Re(d_k), Im(d_k)]^T.
   Compute t_k = -6|r(k)|^2 + 4\lambda_{ub}(B_k) + 4N^2(N-k).
2) Compute a = \tilde{D} p, where \tilde{D} = [\tilde{d}_1, \tilde{d}_2, \cdots, \tilde{d}_{N-1}] and p is obtained by solving the maximization problem in (44) using the MDA algorithm described in Subsection 3.2.
3) Compute [z_i, z_{i+N}]^T = -\tilde{a}_i / \|\tilde{a}_i\|_2, where \tilde{a}_i = [a_i, a_{i+N}]^T for i = 1,2,\cdots,N.
4) Compute x_i^{t+1} = z_i + j z_{i+N} for i = 1,2,\cdots,N.
t \leftarrow t + 1, until convergence.

3.2. Mirror descent algorithm

MDA [33] is a simple iterative subgradient projection method applicable to problems with non-differentiable objectives such as (45); it has the following update step:

p^{m+1} = \arg\min_{p \geq 0, 1^T p = 1} (h^m)^T p + \frac{1}{\gamma_m} B_\psi(p, p^m)    (46)

where h^m \in \partial h(p^m) is a subgradient of h(p), \gamma_m > 0 is the step size given by \gamma_m = O(1)/\sqrt{m} with O(1) some constant, and B_\psi(p, p^m) is the Bregman-like distance generated by \psi, given by B_\psi(p, p^m) = \psi(p) - \psi(p^m) - \nabla^T \psi(p^m)(p - p^m). Since the constraint set in the maximization problem is the unit simplex, we choose \psi(p) as mentioned in [33]:

\psi(p) = \begin{cases} \sum_i p_i \log p_i, & p \in P \\ +\infty, & \text{otherwise} \end{cases}    (47)

where P = \{p \in R^{N-1} \mid 1^T p = 1, p \geq 0\}. Hence, the update step in (46) simplifies to:

p^{m+1} = \frac{p^m \odot \exp(\gamma_m h^m)}{1^T \left(p^m \odot \exp(\gamma_m h^m)\right)}    (48)

The subgradient h^m is given as:

h^m = 4 \tilde{D}^T y^m + t    (49)

where

y^m = \arg\min_y \; \left(4 \tilde{D} p^m\right)^T y
subject to \sqrt{y_i^2 + y_{i+N}^2} \leq 1, \quad i = 1,2,\cdots,N    (50)

whose solution is given by [y_i^m, y_{i+N}^m]^T = -\tilde{a}_i^m / \|\tilde{a}_i^m\|_2, where \tilde{a}_i^m = [a_i^m, a_{i+N}^m]^T for i = 1,2,\cdots,N and a^m = \tilde{D} p^m. The pseudocode of the MDA algorithm is given in Table 2. The MDA algorithm, which forms the inner loop of SLOPE, is terminated when the improvement between iterations falls below a predefined threshold (say, for example, 10^{-6}) or when the number of iterations reaches a fixed value (say, 4000 iterations).

Table 2: Pseudocode of the MDA algorithm.

Input: \tilde{D}, t and O(1).
Initialize: Set m = 0. Initialize p^0 \in P.
Repeat:
1) Compute a^m = \tilde{D} p^m and \tilde{a}_i^m = [a_i^m, a_{i+N}^m]^T for i = 1,2,\cdots,N.
2) Compute [y_i^m, y_{i+N}^m]^T = -\tilde{a}_i^m / \|\tilde{a}_i^m\|_2 for i = 1,2,\cdots,N.
3) Compute the subgradient h^m = 4 \tilde{D}^T y^m + t.
4) Compute p^{m+1} = \frac{p^m \odot \exp(\gamma_m h^m)}{1^T \left(p^m \odot \exp(\gamma_m h^m)\right)}.
m \leftarrow m + 1, until convergence.
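The inner loop of Table 2 is simply entropic mirror descent (exponentiated gradient) on the simplex; the following sketch (ours, with an arbitrary step constant and random test data in place of the actual D̃ and t) implements it:

```python
import numpy as np

def mda(D_tilde, t, c=0.1, iters=2000):
    """Mirror descent of Table 2, maximizing h(p) in (44) over the simplex."""
    twoN, K = D_tilde.shape            # D~ is 2N x (N-1)
    N = twoN // 2
    p = np.full(K, 1.0 / K)            # p^0: uniform point of the simplex
    for m in range(1, iters + 1):
        a = D_tilde @ p                                   # a^m = D~ p^m
        pairs = np.stack([a[:N], a[N:]])                  # [a_i, a_{i+N}]
        norms = np.maximum(np.linalg.norm(pairs, axis=0), 1e-12)
        y_pairs = -pairs / norms                          # minimizer of (50)
        y = np.concatenate([y_pairs[0], y_pairs[1]])
        h_sub = 4.0 * D_tilde.T @ y + t                   # subgradient (49)
        step = (c / np.sqrt(m)) * h_sub                   # gamma_m = O(1)/sqrt(m)
        w = p * np.exp(step - step.max())                 # (48), overflow-safe
        p = w / w.sum()
    return p

rng = np.random.default_rng(0)
N = 16
p = mda(rng.standard_normal((2 * N, N - 1)), rng.standard_normal(N - 1))
print(p.sum(), p.min() >= 0)           # the iterates stay on the simplex
```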
in some applications, one may have to deal with energy and the
3.3. Convergence proof and computational complexity of SLOPE

Since the proposed algorithm is developed within the MM framework, the sequence of points {x^t} generated by the algorithm monotonically decreases the objective function f_{PSL}(x) in (8). Also, as the PSL metric is the ℓ_∞ norm of r(k), it is bounded below by zero. Hence, it is guaranteed that the sequence {f_{PSL}(x^t)} converges to a finite value.

We now show that the sequence {x^t} converges to a stationary point of the problem in (8). From the monotonic property of MM we have:

f_{PSL}(x^2) \leq f_{PSL}(x^1) \leq f_{PSL}(x^0)    (51)

Assume that there is a subsequence x^{r_j} converging to a limit point \tilde{q}. Then from (13), (14) and (51) we get:

g(x^{r_{j+1}}|x^{r_{j+1}}) = f_{PSL}(x^{r_{j+1}}) \leq f_{PSL}(x^{r_j + 1}) \leq g(x^{r_j + 1}|x^{r_j}) \leq g(x|x^{r_j})    (52)

where g(·) is the surrogate function defined in (35). Then, letting j \to \infty, we get:

g(\tilde{q}|\tilde{q}) \leq g(x|\tilde{q})    (53)

which implies g'(\tilde{q}|\tilde{q}) \geq 0. Since the first order behavior of the surrogate function is the same as that of f_{PSL}(x) ([34]), g'(\tilde{q}|\tilde{q}) \geq 0 implies f'_{PSL}(\tilde{q}) \geq 0. Hence, \tilde{q} is a stationary point of f_{PSL}(x), and therefore the proposed algorithm converges to a stationary point of the problem in (8).

Before we end this subsection, we discuss the computational complexity of SLOPE. As can be seen from the algorithmic development in the last subsection, SLOPE is iterative in nature and, more importantly, uses MDA to solve the surrogate minimization problem, which is itself iterative. So, SLOPE has a double loop of iterations. The inner loop (which implements the MDA) requires far fewer computations than the outer loop; in fact, it requires computing two matrix-vector products (\tilde{D}^T y^m and \tilde{D} p^m), which require only O(N) computations. In every iteration of the outer loop, the computations are mainly dominated by the calculation of the quantities b_k, \tilde{z}_k, \lambda_{ub}(B_k), d_k and t_k, and one requires O(3N \log(N)) flops to implement them.

3.4. Energy, PAPR and spectral constraints

Apart from designing sequences with unimodular constraints, in some applications one may have to deal with energy and PAPR constraints. In this subsection, we discuss the modifications needed in the SLOPE method to incorporate these constraints. We show the modifications only for the PAPR constraint on the sequence x, as the energy constraint is a special case of the PAPR constraint. The PAPR constraint can be expressed as:


\|x\|_2 = \sqrt{N}, \quad \|x\|_\infty \leq \sqrt{\rho / N}    (54)

where the parameter \rho determines the ratio between the peak and the average power of a sequence, and it can vary between N and N^2. Suppose one chooses \rho = N^2; then it can easily be seen that the PAPR constraint degenerates into the energy constraint (\|x\|_2 = \sqrt{N}). Similarly, if \rho = N, the PAPR constraint boils down to the unimodular constraint. For different choices of \rho, the core steps of the SLOPE algorithm remain the same, and there are some changes only in the MDA algorithm, which are described in the following:

• When \rho = N^2, the h(p) in (44) becomes h(p) = 4\|Dp\|_2 + p^T t, where D = [d_1, d_2, \ldots, d_{N-1}], and once MDA is run till convergence, the corresponding sequence x^t can be obtained as Dp^t / \|Dp^t\|_2.
• When \rho = N, there is no change in the steps of SLOPE (including MDA), as the constraint set is unimodular here.
• When N < \rho < N^2, the steps to obtain the closed form solution for x^t after the MDA convergence can be obtained as in Algorithm 2 of [7].

In some applications, one may be interested in designing sequences with good PSL whose spectrum also has a null over some frequency bands. SLOPE can easily be extended to handle such spectral constraints. Suppose we consider the following problem:

\min_x \max_k \; |r(k)|^2 + \lambda \|Wx\|_2^2
subject to |x_i| = 1, \quad i = 1,2,\cdots,N    (55)

where W denotes a matrix made by choosing only specific rows of an N × N Fourier matrix; the rows chosen correspond to the frequencies of the sequence that we want suppressed. The penalty factor \lambda is chosen such that the chosen frequencies are suppressed in the sequence designed by solving the problem in (55). The penalty term x^H W^H W x is convex in x, and using Lemma 3.1 we majorize it by the surrogate function 2Re\left((x^t)^H \left(W^H W - \lambda_{max}(W^H W) I_N\right) x\right) at any given x = x^t. As the surrogate function for the spectral penalty term is linear in x, it can be absorbed into the d_k's of the surrogate function of SLOPE in (36), and all the other steps can then be repeated to solve (55). We note that one needs to calculate \lambda_{max}(W^H W) and the matrix-matrix product W^H W only once; they can be stored and reused across iterations.
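The sketch below (ours) shows how W and the one-time quantities can be formed; the band and penalty factor are those used later in the simulations, and the final absorption of the linear term into each d_k is indicated only up to the bookkeeping constants of (36):

```python
import numpy as np

# Build W from the rows of the N x N DFT matrix over a stopband, and form
# the linearized spectral-penalty direction of Lemma 3.1.
N = 200
F = np.fft.fft(np.eye(N))                      # N x N Fourier matrix
freqs = 2 * np.pi * np.arange(N) / N
stop = (freqs >= 0.3142) & (freqs <= 1.2566)   # stopband [f_l, f_u] (rad/s)
W = F[stop, :]

WHW = W.conj().T @ W                           # computed once, then reused
lam_max = np.linalg.eigvalsh(WHW).max()
lam_pen = 0.5                                  # penalty factor lambda

x_t = np.exp(1j * 2 * np.pi * np.random.default_rng(5).uniform(size=N))
# Linear surrogate direction for lam_pen * x^H W^H W x around x_t; this
# vector is added into every d_k of (36) (up to the constants of (36)).
d_spec = lam_pen * (WHW - lam_max * np.eye(N)) @ x_t
print(W.shape, d_spec.shape)
```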
3.5. Accelerated SLOPE

To accelerate the convergence speed of the proposed algorithm, we use the following technique: let p and z^t be the optimal solutions obtained by solving the problems in (44) and (45), respectively, at the t-th iteration. From z^t we recover the optimal solution x, which we denote as \hat{x}^t, and we use the difference \hat{x}^t - x^t as the ascent direction, performing the line search [35] shown in step 2 of Table 3 to get the optimal step size \beta. This acceleration scheme can be proven to converge to a stationary point of the problem in (8) using an analysis similar to that in Subsection 3.3.

Table 3: Pseudocode of accelerated SLOPE.

Input: Sequence length N.
Initialize: Set t = 0. Initialize x^0.
Repeat:
1) Solve the problem in (44) to get the optimal solution p using the MDA algorithm as discussed in the previous subsection, and compute \hat{x}^t.
2) Line search to get the optimal step size \beta:
   choose \alpha (> 1)
   \beta = 1
   x_{temp} = \frac{x^t + \alpha\beta(\hat{x}^t - x^t)}{|x^t + \alpha\beta(\hat{x}^t - x^t)|}
   while \max_k f_{PSL}(x_{temp}) \leq \max_k f_{PSL}(\hat{x}^t)
      \beta = \alpha\beta
      x_{temp} = \frac{x^t + \alpha\beta(\hat{x}^t - x^t)}{|x^t + \alpha\beta(\hat{x}^t - x^t)|}
   end while
3) x^{t+1} = \frac{x^t + \beta(\hat{x}^t - x^t)}{|x^t + \beta(\hat{x}^t - x^t)|}
t \leftarrow t + 1, until convergence.

4. Numerical simulations and channel estimation applications

4.1. Comparison of SLOPE with state-of-the-art methods

In the first part of this section we compare the peak sidelobe levels of the sequences generated by SLOPE (whose convergence was accelerated using the scheme discussed in Subsection 3.5; i.e., in the simulations, what we refer to as SLOPE corresponds to the accelerated SLOPE algorithm) with the state-of-the-art algorithms. In particular, we compare the proposed algorithm with the MM-PSL-adaptive algorithm [22] and the CPM algorithm [23]. Also, since the algorithm in [25] which minimizes the approximated PSL metric (with the choice of w_k = 1 in the unified metric) is just the same as the MM-PSL-adaptive algorithm, we do not include it in the comparison. All the algorithms were implemented in MATLAB on a PC with a 2.40 GHz processor and 16 GB RAM.

1. In this simulation, we fix the sequence lengths N equal to 30 and 300 and compare the PSL values of the sequences generated by SLOPE with those of the MM-PSL-adaptive algorithm and the CPM algorithm. All the algorithms were initialized with the same random sequence whose elements are generated using the following equation:

x_i = \exp(j 2\pi \phi_i), \quad i = 1,2,\cdots,N    (56)

where \phi_i is randomly generated from a uniform distribution on [0, 1]. SLOPE and the CPM algorithm were made to run until the following condition was met:

\frac{\left|\max_k f_{PSL}(x^{t+1}) - \max_k f_{PSL}(x^t)\right|}{\max_k f_{PSL}(x^t)} \leq 10^{-10}    (57)

or until the maximum number of iterations, set equal to 5 × 10^4, was reached. The MM-PSL algorithm was made to run until the condition in (57) was satisfied or until p = 2^13 was reached. Fig. 1 shows the objective value vs. the iteration (with 5 Monte Carlo runs superimposed on the same plot) for SLOPE and the state-of-the-art algorithms. We found that the CPM algorithm converges very quickly when compared to the MM-PSL and SLOPE algorithms; hence, for the sake of readability, we have extended the plot of CPM in Fig. 1.


From Fig. 1 it can be seen that the MM-PSL algorithm stopped early, before meeting the convergence criterion, when compared to the other algorithms. This could be for two reasons. Firstly, unlike the SLOPE and CPM algorithms, which directly minimize the PSL metric, the MM-PSL algorithm only minimizes the ℓ_p norm of the autocorrelation function; hence, the MM-PSL algorithm minimizes an approximated PSL metric compared to SLOPE and CPM. Secondly, in the MM-PSL algorithm one needs to compute the term

a_k = \frac{1 + (p-1)\left(\frac{|r_{k+1}|}{\|r_{2:N}\|_p}\right)^p - p\left(\frac{|r_{k+1}|}{\|r_{2:N}\|_p}\right)^{p-1}}{\left(\|r_{2:N}\|_p - |r_{k+1}|\right)^2}, \quad k = 1,\cdots,N-1

shown in Table 4 of the manuscript [22]. Note that for a large value of p, such as p = 2^13, the ℓ_p norm of the vector r will be equal to the maximum absolute value of one of the terms in r. Hence, for some value of k, the term |r_{k+1}|/\|r_{2:N}\|_p (where r_{k+1} represents one of the terms in r) will be equal to one, and therefore the numerator of one of the a_k's will be equal to zero. Also, the denominator of the corresponding a_k will be zero, since for a large value of p we have \|r_{2:N}\|_p = |r_{k+1}|. Therefore, for a large value of p, one of the a_k's will evaluate to NaN. Also, note that the objective function in the MM-PSL-adaptive algorithm involves taking the ℓ_p norm of the autocorrelation sidelobes, i.e., \sum_{k=1}^{N-1} |r_k|^p. For a moderate value of r_k and a large value of p, such as r_k = 1.2 and p = 2^12, the value |r_k|^p will be equal to Inf. For these reasons, the MM-PSL algorithm becomes numerically unstable for large values of p, which forces the algorithm to stop even before possible convergence to a lower PSL value.
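These two failure modes are easy to reproduce numerically; the toy snippet below (ours, using the a_k expression as reconstructed above) shows the overflow and the 0/0 behavior:

```python
import numpy as np

# Tiny numeric reproduction of the two failure modes just discussed.
with np.errstate(over="ignore", invalid="ignore"):
    # 1) |r_k|^p overflows for quite ordinary sidelobe values:
    print(np.float64(1.2) ** 2.0 ** 12)       # -> inf

    # 2) for very large p the l_p norm collapses onto max|r_k| in floating
    # point, so the corresponding a_k evaluates to 0/0:
    r = np.array([1.2, 0.8, 0.3])
    p = 2.0 ** 13
    norm_p = np.abs(r).max()                  # == ||r||_p for huge p
    ratio = np.abs(r[0]) / norm_p             # == 1 at the peak lag
    num = 1 + (p - 1) * ratio ** p - p * ratio ** (p - 1)
    den = (norm_p - np.abs(r[0])) ** 2
    print(num / den)                          # -> nan (0/0)
```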
Also, from Fig. 1 it can be seen that the state-of-the-art algorithms converge to a larger PSL value when compared to the PSL value of SLOPE, even for this choice of threshold value and maximum iteration number. Hence, even for a small threshold and a large maximum iteration number, the proposed algorithm performs better than the state-of-the-art algorithms in terms of the obtained PSL value.

Fig. 1. PSL value vs. iteration of SLOPE, the MM-PSL-adaptive algorithm and the CPM algorithm for sequence lengths 30 and 300. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)

2. In this simulation we vary the size of N and compare the PSL values of SLOPE and the state-of-the-art algorithms. All the algorithms were initialized with a random unimodular sequence. The MM-PSL-adaptive algorithm was made to run until p = 2^13 was reached or until the condition in (57) was satisfied with the threshold set to 10^{-4}. SLOPE and the CPM algorithm were made to run until the condition in (57) was satisfied with the threshold set to 10^{-4} or until the maximum number of iterations, set equal to 5 × 10^3, was met. The size of N was varied from 500 to 5000 in steps of 500. For each value of N, we performed 10 Monte Carlo runs and averaged over the runs to calculate the PSL value. From Fig. 2 it can be seen that the proposed algorithm has the lowest PSL value compared to the state-of-the-art algorithms, irrespective of the value of N.

Fig. 2. PSL vs. N for SLOPE, CPM and MM-PSL.

3. We now vary the sequence length N and compare the performance of the algorithms with respect to different metrics: the PSL value, the run time, and the number of iterations required by the algorithms to converge. The MM-PSL algorithm was made to run for different values of p (2^2, 2^3, ···, 2^13). The maximum number of iterations was set equal to 5 × 10^4, and the algorithms were made to run until the condition in (57) was met with the threshold equal to 10^{-10}. All the algorithms were initialized with the same random sequence as defined in (56). The size of N was varied from 100 to 500 in steps of 100. For each value of N, we performed 50 Monte Carlo runs and averaged over the runs to calculate the PSL value, the average run time in seconds, and the average number of iterations required by the algorithms to converge. Fig. 3 and Tables I-III compare the performance of the algorithms for varying sequence length with respect to PSL value, number of iterations and run time. From Fig. 3.b, Fig. 3.c and Tables II and III it can be seen that SLOPE takes a larger number of iterations and more time to converge when compared to the other algorithms. However, this increase in time is complemented by the superior performance of the SLOPE algorithm in terms of achieving a smaller PSL value for all values of N when compared to CPM and MM-PSL, as shown in Fig. 3.a and Table I.

4. In this simulation, we generate sequences of lengths N = 49 and N = 100 using different initialization sequences, namely a Frank sequence, a Golomb sequence, and a random sequence.
iterations required by the algorithms to converge. Fig. 3 and Table I quence, Golomb sequence, and a random sequence. Similar to the


Fig. 3. Comparison of SLOPE, MM-PSL-adaptive, and the CPM algorithm with respect to three different metrics (PSL value, CPU time, and number of iterations) for different values of sequence length N.

Table I
Comparison of the PSL value of SLOPE with the other state-of-the-art iterative algorithms.

Algorithm | N = 100 | N = 200 | N = 300 | N = 400 | N = 500
SLOPE     | 5.103   | 8.114   | 9.721   | 11.311  | 12.801
MM-PSL    | 16.401  | 26.893  | 35.6398 | 43.3885 | 50.7007
CPM       | 9.116   | 16.063  | 21.053  | 24.983  | 29.72

Table II
Comparison of the average run time (in seconds) of SLOPE with the other state-of-the-art iterative algorithms.

Algorithm | N = 100 | N = 200 | N = 300 | N = 400 | N = 500
SLOPE     | 119.94  | 520.87  | 924.03  | 1398.5  | 1634
MM-PSL    | 11.98   | 30.11   | 49.35   | 66.52   | 74
CPM       | 2.44    | 4.95    | 8.27    | 12.36   | 25.68

Table III
Comparison of the average number of iterations of SLOPE with the other state-of-the-art iterative algorithms.

Algorithm | N = 100 | N = 200 | N = 300 | N = 400 | N = 500
SLOPE     | 41612   | 40830   | 46713   | 39322   | 45512
MM-PSL    | 28460   | 26922   | 26922   | 26922   | 26922
CPM       | 6       | 5       | 5       | 5       | 5

Similar to the previous simulation, the MM-PSL-adaptive algorithm was made to run until p = 2^13 was reached or until the condition in (57) was satisfied. The proposed algorithm and the CPM algorithm were made to run until the condition in (57) was satisfied or until the maximum number of iterations, set equal to 5 × 10^4, was met. Fig. 4.a, Fig. 5.a and Fig. 6.a compare the correlation levels of the sequences of length N = 49 when initialized with a random unimodular, Frank and Golomb sequence, respectively. Fig. 4.b, Fig. 5.b and Fig. 6.b compare the correlation levels of the sequences of length N = 100 when initialized with a random unimodular, Frank and Golomb sequence, respectively. From Fig. 4 it can be seen that the proposed SLOPE algorithm generates sequences with almost equal autocorrelation sidelobes when compared to the state-of-the-art algorithms; this was expected, as SLOPE minimizes the PSL directly, unlike the MM-PSL and CPM methods. From Fig. 5 and Fig. 6 it can be seen that the proposed algorithm has a lower PSL for k close to 0 and N - 1 when compared to the state-of-the-art algorithms.

Fig. 4. Comparison of the correlation level of the sequences of lengths N = 49 and N = 100 when the proposed algorithm, MM-PSL-adaptive and the CPM algorithm are initialized with a random unimodular sequence.

Fig. 5. Comparison of the correlation level of the sequences of lengths N = 49 and N = 100 when the SLOPE algorithm, MM-PSL-adaptive and the CPM algorithm are initialized with a Frank sequence.

Fig. 6. Comparison of the correlation level of the sequences of lengths N = 49 and N = 100 when the SLOPE algorithm, MM-PSL-adaptive and the CPM algorithm are initialized with a Golomb sequence.

5. In this simulation we design a sequence of length N = 200 by minimizing the PSL; on top of this, we also want the frequencies in the band [f_l = 0.3142, f_u = 1.2566] (rad/s) to be suppressed. We follow the idea described in Subsection 3.4 (with λ = 0.5) and run SLOPE. Fig. 7 shows the autocorrelation and the normalized power spectral density of the sequences designed by running SLOPE with and without spectral constraints, using the same random initialization sequence. It can be seen from the figure that, with spectral constraints, the SLOPE-generated sequence takes negligible power in the band [f_l = 0.3142, f_u = 1.2566] (rad/s), at the cost of a higher PSL when compared with the PSL of the sequence generated by running SLOPE without spectral constraints.




Fig. 7. Autocorrelation and normalized power spectral density plots of sequences generated with and without spectral constraints. The spectral constraints are applied over the band [f_l = 0.3142, f_u = 1.2566] (rad/s).

4.2. FIR channel estimation application

In this subsection, we evaluate the performance of the sequences generated by SLOPE and the other methods in the context of channel estimation. We consider a channel whose impulse response is finite (FIR), and our main goal here is to estimate the channel impulse response h_i (the total number of taps M of the channel is assumed to be known). Suppose we transmit an N length pilot sequence x_i at the transmitter side; the signal received at the receiver will look like:

y_j = \sum_{i=0}^{M-1} h_i x_{j-i} + e_j, \quad j = 1,\cdots,M+N-1    (58)

where e_j denotes the noise, which is assumed to be white Gaussian with zero mean and variance \sigma^2. In matrix-vector notation, the above equation (58) can be compactly written as:

y = \tilde{X} h + e    (59)

where

\tilde{X} = \begin{pmatrix} x_1 & & 0 \\ \vdots & \ddots & \\ x_N & \ddots & x_1 \\ 0 & \ddots & \vdots \\ 0 & \cdots & x_N \end{pmatrix}    (60)

y = [y_1, \cdots, y_{M+N-1}]^T, h = [h_0, \cdots, h_{M-1}]^T and e = [e_1, \cdots, e_{M+N-1}]^T. If we employ the matched filter at the receiver, then the channel estimates are given by:

\hat{h} = \tilde{X}^H y    (61)

Fig. 8. MMSE vs. N for channel estimation using sequences generated via SLOPE, CPM and MM-PSL.

In the numerical simulations, we generated three different sequences (generated by SLOPE, CPM and MM-PSL), each of length N, and generated channel taps of length M = 30 (uniformly distributed between [0, 5]). With this setting, we generated data sets (corresponding to the different sequences) according to (59), with the noise in the data sets taken to be white Gaussian. In the first simulation, we varied the value of N from 100 to 1000 and, for each value of N, performed 100 Monte Carlo runs and estimated the channel taps using the matched filter in (61). We then calculated the average mean square error (MMSE) in estimating the channel taps. The SNR in this experiment is taken to be 0 dB. It can be seen from Fig. 8 that, as N increases, the MMSE of all the methods decreases, and SLOPE exhibits lower errors when compared to the other two methods. In the second simulation, for a fixed value of N = 100, we varied the SNR from -5 dB to 20 dB in steps of 5 dB and calculated the mean square error in estimating the channel taps; from Fig. 9, one can observe that as the noise power decreases (or the SNR increases) the MMSE of all the methods decreases; the sequences generated via SLOPE give more accurate channel estimates at all SNRs than the sequences generated via CPM and MM-PSL.

Fig. 9. MMSE vs. noise power for channel estimation using sequences generated via SLOPE, CPM and MM-PSL.
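The experiment above can be sketched as follows (our illustration: we use a random unimodular pilot in place of a SLOPE-designed one, and we add a 1/N scaling to the matched-filter output of (61) so the estimate is on the scale of h, since X̃^H X̃ ≈ N I for a low-sidelobe unimodular pilot):

```python
import numpy as np
from scipy.linalg import toeplitz

# Build the convolution matrix X~ of (60), generate data via (59), and apply
# the matched filter of (61) with a 1/N normalization (our assumption).
rng = np.random.default_rng(6)
N, M, snr_db = 100, 30, 0.0

x = np.exp(1j * 2 * np.pi * rng.uniform(size=N))     # pilot sequence
col = np.concatenate([x, np.zeros(M - 1)])           # first column of X~
X_tilde = toeplitz(col, np.zeros(M))                 # (M+N-1) x M

h = rng.uniform(0.0, 5.0, size=M)                    # channel taps, U[0, 5]
sigma = np.sqrt(10 ** (-snr_db / 10))                # unit-power pilot
e = sigma * (rng.standard_normal(M + N - 1)
             + 1j * rng.standard_normal(M + N - 1)) / np.sqrt(2)
y = X_tilde @ h + e                                  # received samples, (59)

h_hat = (X_tilde.conj().T @ y) / N                   # matched filter, cf. (61)
mse = np.mean(np.abs(h_hat - h) ** 2)
print(f"MSE of matched-filter estimate: {mse:.4f}")
```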
5. Conclusion

In this paper we propose an iterative algorithm, SLOPE, which designs unimodular sequences by directly minimizing the peak sidelobe level metric. The proposed algorithm is based on the principle of MM and can be implemented efficiently using the FFT. We also show that the proposed algorithm can be modified to include energy, spectral and PAPR constraints. We show through computer simulations that the proposed algorithm performs better than the state-of-the-art algorithms in terms of the peak sidelobe level of the autocorrelation function, and we also evaluate the performance of the sequences designed via SLOPE in the context of a channel estimation application.


CRediT authorship contribution statement

R. Jyothi: Formal analysis, Investigation, Software, Writing – original draft. Prabhu Babu: Conceptualization, Methodology, Supervision, Writing – review & editing. Mohammad Alaee-Kerahroodi: Methodology, Writing – review & editing.

Declaration of competing interest

The authors declare that they do not have any conflict of interest.

References

[1] M. Rupf, J.L. Massey, Optimum sequence multisets for synchronous code-division multiple-access channels, IEEE Trans. Inf. Theory 40 (4) (1994) 1261–1266.
[2] J. Li, P. Stoica, X. Zheng, Signal synthesis and receiver design for MIMO radar imaging, IEEE Trans. Signal Process. 56 (8) (2008) 3959–3968.
[3] F.F. Kretschmer, K. Gerlach, Low sidelobe radar waveforms derived from orthogonal matrices, IEEE Trans. Aerosp. Electron. Syst. 27 (1) (1991) 92–102.
[4] P.M. Woodward, Probability and Information Theory, with Applications to Radar, International Series of Monographs on Electronics and Instrumentation, vol. 3, Elsevier, 2014.
[5] H. He, J. Li, P. Stoica, Waveform Design for Active Sensing Systems: A Computational Approach, Cambridge University Press, 2012.
[6] M. Soltanalian, M.M. Naghsh, P. Stoica, A fast algorithm for designing complementary sets of sequences, Signal Process. 93 (7) (2013) 2096–2102.
[7] J.A. Tropp, I.S. Dhillon, R.W. Heath, T. Strohmer, Designing structured tight frames via an alternating projection method, IEEE Trans. Inf. Theory 51 (1) (2005) 188–209.
[8] R. Barker, Group synchronizing of binary digital systems, Commun. Theory (1953) 273–287.
[9] S. Golomb, R. Scholtz, Generalized Barker sequences, IEEE Trans. Inf. Theory 11 (4) (1965) 533–537.
[10] N. Zhang, S.W. Golomb, Sixty-phase generalized Barker sequences, IEEE Trans. Inf. Theory 35 (4) (1989) 911–912.
[11] P. Borwein, R. Ferguson, Polyphase sequences with low autocorrelation, IEEE Trans. Inf. Theory 51 (4) (2005) 1564–1567.
[12] C.J. Nunn, G.E. Coxson, Polyphase pulse compression codes with optimal peak and integrated sidelobes, IEEE Trans. Aerosp. Electron. Syst. 45 (2) (2009) 775–781.
[13] D. Chu, Polyphase codes with good periodic correlation properties (corresp.), IEEE Trans. Inf. Theory 18 (4) (1972) 531–532.
[14] R. Frank, Polyphase codes with good nonperiodic correlation properties, IEEE Trans. Inf. Theory 9 (1) (1963) 43–45.
[15] P. Stoica, H. He, J. Li, New algorithms for designing unimodular sequences with good correlation properties, IEEE Trans. Signal Process. 57 (4) (2009) 1415–1425.
[16] J. Song, P. Babu, D.P. Palomar, Optimization methods for designing sequences with low autocorrelation sidelobes, IEEE Trans. Signal Process. 63 (15) (2015) 3998–4009.
[17] J. Song, P. Babu, D.P. Palomar, Sequence set design with good correlation properties via majorization-minimization, IEEE Trans. Signal Process. 64 (11) (2016) 2866–2879.
[18] Y. Li, S.A. Vorobyov, Fast algorithms for designing unimodular waveform(s) with good correlation properties, IEEE Trans. Signal Process. 66 (5) (2017) 1197–1212.
[19] J. Liang, H.C. So, J. Li, A. Farina, Unimodular sequence design based on alternating direction method of multipliers, IEEE Trans. Signal Process. (2016) 5367–5381.
[20] Y. Wang, J. Wang, Designing unimodular sequences with optimized auto/cross-correlation properties via consensus-ADMM/PDMM approaches, arXiv preprint, arXiv:1907.06227, 2019.
[21] M. Soltanalian, P. Stoica, Computational design of sequences with good correlation properties, IEEE Trans. Signal Process. 60 (5) (2012) 2180–2193.
[22] J. Song, P. Babu, D.P. Palomar, Sequence design to minimize the weighted integrated and peak sidelobe levels, IEEE Trans. Signal Process. 64 (8) (2015) 2051–2064.
[23] M.A. Kerahroodi, A. Aubry, A. De Maio, M.M. Naghsh, M. Modarres-Hashemi, A coordinate-descent framework to design low PSL/ISL sequences, IEEE Trans. Signal Process. 65 (22) (2017) 5942–5956.
[24] H. Esmaeili-Najafabadi, M. Ataei, M.F. Sabahi, Designing sequence with minimum PSL using Chebyshev distance and its application for chaotic MIMO radar waveform design, IEEE Trans. Signal Process. 65 (3) (2016) 690–704.
[25] L. Zhao, J. Song, P. Babu, D.P. Palomar, A unified framework for low autocorrelation sequence design via majorization-minimization, IEEE Trans. Signal Process. 65 (2) (2016) 438–453.
[26] J. De Leeuw, W.J. Heiser, Convergence of correction matrix algorithms for multidimensional scaling, Geometric Representations of Relational Data (1977) 735–752.
[27] D.R. Hunter, K. Lange, A tutorial on MM algorithms, Am. Stat. 58 (1) (2004) 30–37.
[28] Y. Sun, P. Babu, D.P. Palomar, Majorization-minimization algorithms in signal processing, communications, and machine learning, IEEE Trans. Signal Process. 65 (3) (2016) 794–816.
[29] P.J.S. Ferreira, Localization of the eigenvalues of Toeplitz matrices using additive decomposition, embedding in circulants, and the Fourier transform, Matrix 100 (1994) 2.
[30] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[31] M. Grant, S. Boyd, CVX: Matlab software for disciplined convex programming, version 2.1, 2014.
[32] J. v. Neumann, Zur Theorie der Gesellschaftsspiele, Math. Ann. 100 (1) (1928) 295–320.
[33] A. Beck, M. Teboulle, Mirror descent and nonlinear projected subgradient methods for convex optimization, Oper. Res. Lett. 31 (3) (2003) 167–175.
[34] M. Razaviyayn, M. Hong, Z.-Q. Luo, A unified convergence analysis of block successive minimization methods for nonsmooth optimization, SIAM J. Optim. 23 (2) (2013) 1126–1153.
[35] T. Lipp, S. Boyd, Variations and extension of the convex-concave procedure, Optim. Eng. 17 (2) (2016) 263–287.

R. Jyothi is currently working toward the Ph.D. degree with the Centre for Applied Research in Electronics, Indian Institute of Technology Delhi, New Delhi, India. Her research interests include signal processing and optimization algorithms.

Prabhu Babu received the Ph.D. degree in Electrical Engineering from Uppsala University, Uppsala, Sweden, in 2012. From 2013 to 2016, he was a Postdoctoral Fellow with the Hong Kong University of Science and Technology. He is currently with the Centre for Applied Research in Electronics, Indian Institute of Technology Delhi, New Delhi, India.

Mohammad Alaee-Kerahroodi received the Ph.D. degree in telecommunication engineering from the Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran. In 2017, he joined SIGCOM, SnT, the University of Luxembourg, where he is currently working on innovative radar signal processing solutions for automotive MIMO radar systems as well as pursuing academic research in waveform design and signal processing. In addition to the research activities, he is also in charge of radar lab activities and prototyping at SnT. He has more than 12 years of practical experience in different radar systems, including ground surveillance, air surveillance, marine, and weather radar systems.
