
**Jai-Jin Lim and Kang G. Shin**

Real-Time Computing Laboratory, Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109–2122, USA. Email: {jjinlim,kgshin}@eecs.umich.edu

Abstract— New energy-efficient linear forecasting methods are proposed for various sensor network applications, including in-network data aggregation and mining. The proposed methods are designed to minimize the number of trend changes for a given application-specified forecast quality metric. They also self-adjust the model parameters, the slope and the intercept, based on the forecast errors observed via measurements. As a result, they incur O(1) space and time overheads, a critical advantage for resource-limited wireless sensors. An extensive simulation study based on real-world and synthetic time-series data shows that the proposed methods reduce the number of trend changes by 20%–50% relative to existing well-known methods for a given forecast quality metric; that is, they are more predictive than the others under the same forecast quality metric.

I. INTRODUCTION

Forecasting, or prediction, is of fundamental importance to all sensor network applications and services, including network management, data aggregation and mining, object tracking, and environmental surveillance. It should therefore be designed as a middleware component that is predictive, energy-efficient, and self-manageable: (1) to be predictive, a forecast method should predict as many future measurements as possible with the required forecast quality; (2) to be energy-efficient (a networked version of being predictive when the prediction logic is shared across networked sensors), a forecast method should reduce the number of measurements that must be transmitted through the network for given forecast quality requirements; (3) to be self-manageable, a forecast method should self-adjust its model parameters based on the forecast errors observed during actual measurements.

A wide variety of prediction algorithms is available in the literature. For example, predictors based on the Kalman and Extended Kalman Filters [1] (KF/EKF) have been widely used by many researchers. The selection of a prediction algorithm, however, should be based on the characteristics of the application domain and the problem at hand. For example, the authors of [2] use double exponential smoothing [3] to build a linear forecast method for predictive tracking of a user's position and orientation. Their results showed that, compared against KF/EKF-based predictors, the constructed predictor runs much faster with equivalent prediction performance and a simpler implementation. In this paper, we focus our attention on linear forecasting, as it

is not only popular (due mainly to its simple implementation and intuitive predictions), but has also been widely used to characterize time-series data for many data mining and feature extraction algorithms [4]–[6], which operate on a series of piece-wise linear representations (PLRs) extracted from the underlying time-series data. The main contributions of this paper with respect to linear forecasting are as follows.

• We evaluate the performance of existing well-known linear forecast methods in terms of the number of trend changes for a given forecast quality requirement. Specifically, we analyze the applicability of existing methods in an application context that trades forecast accuracy for the lifetime of sensors.

• We propose new linear forecast methods that outperform the existing methods with respect to the above performance metric, making them more energy-efficient or predictive than the others. Moreover, the proposed methods are designed for online use and self-adjust the model parameters of a linear trend based on the forecast errors observed via actual measurements.

The paper is organized as follows. Section II discusses the related work. Section III describes the motivation behind the development of our algorithms and states the associated problem. Section IV presents our linear forecast methods, as well as existing linear forecast methods, to solve the problem. In Section V, we describe an extensive simulation study on both real-world and synthetic time-series data. Finally, the paper concludes with Section VI.

II. RELATED WORK

Distributed in-network aggregation and approximate monitoring in sensor networks. To reduce the number of transmissions involved in distributed in-network aggregation, the authors of [7] rely on an aggregation tree rooted at an aggregator.
On the other hand, the authors of [8], [9] tackle the same problem in a different way, called approximate monitoring, by assuming a new class of error-tolerant applications that can tolerate some degree of error in the aggregated results. The authors of [10] extend the latter by defining a way of mapping the user-specified aggregation error bound to the per-group quality constraint for hierarchically-clustered networks with more than one level.

0-7803-9466-6/05/$20.00 ©2005 IEEE

MASS 2005

Similar work [11] dynamically redistributes the aggregation error bound based on local statistics, through which the node that would benefit most from the redistribution is decided. In most cases, however, the aggregated result is affected considerably by packet loss over a lossy wireless channel, and so are all the above-mentioned approaches. To remedy this problem, the authors of [12] proposed a new aggregation tree construction algorithm that is robust against packet loss over a wireless channel, and designed energy-efficient computation for some classes of aggregation such as SUM and AVG. Delivery of the linear trend information across networked sensors in our approach is also susceptible to such loss. While the result in [12] can be applied to ensure robust delivery of the linear trend information in the network, a solution guaranteeing robust trend delivery and recovery from any packet loss will be fully developed in future work.

Data mining and piece-wise linear representation. The authors of [4], [13], [14] studied the use of piece-wise linear representation as the underlying representation of time-series data to allow fast and accurate classification, clustering, and relevance feedback. The authors of [5] undertook an extensive review and empirical comparison of a variety of algorithms for constructing a piece-wise linear representation from time-series data stored in database systems. They also proposed an on-line bottom-up piece-wise linear segmentation technique, called SWAB (Sliding Window And Bottom-up), that produces high-quality approximations. Comparisons [6] of several time-series representations¹ over many real-world datasets show that there is little difference among these approaches in terms of the root mean square error. The authors of [6] therefore claimed that the choice of an appropriate time-series representation should be made based on other features, such as flexibility and incremental calculation, rather than approximation fidelity.

III. MOTIVATION AND PROBLEM

A. Motivation

Our desire for an energy-efficient self-adapting linear forecast method originates from the need to design a unified in-network data aggregation and mining service. Beyond the in-network data aggregation service [7], [15] used to prolong the lifetime of sensor networks, one may need an energy-efficient in-network data mining service, which extends the in-network data aggregation service in such a way that data mining queries are issued and processed within the same network. For example, the following networked versions of data mining queries [16] can be issued and processed in the network.

(1) Distributed pattern identification: given a query time-series and a similarity measure, find the most similar or dissimilar time-series and any relevant areas in the network.
(2) Summarization: given a time-series of measurements, create and report its approximate but essential features.
(3) Anomaly detection: given a time-series and some model of normal behavior, find segments showing anomalies and any relevant areas in the network.

Note that both the in-network aggregation service and the in-network data mining service need to collect continuous measurements, typically at periodic intervals, about various network states such as the temperature of some region or the position/orientation of moving targets. Thus, a natural question arises about the representation of the network state, whose data type is, in most cases, numeric in sensor networks. Aggregation results like AVG, SUM, MIN, and MAX hide key information on gradual trends or patterns in the measurements. PLR, used by many data mining applications, not only preserves such patterns but can also reconstruct the original time-series within some error bound. As claimed in [6], PLR is therefore appropriate as a basic representation of the network state for a unified in-network aggregation and data mining service.

Using PLR as the network state representation has the following main advantages. First, it is intuitive and easy to calculate. Second, it allows distributed incremental computation, so that a new linear segment may be built from existing linear segments [4]–[6] without resorting to the original data; for example, the merging operation in [4], the bottom-up algorithm or SWAB in [5], and results such as Theorems 3.2 and 3.3 in [18] or Theorems 1 and 2 in [6] enable a new linear segment to be constructed from existing segments under some error metric. Distributed incremental computation allows initial linear segments to be collected across the network without collecting all the raw measurements; the thus-collected linear segments can later be used to build a new linear segment as needed² at resource-abundant sensors, or to produce approximate aggregation results as needed. Third, PLR can also be used, through prediction, to compute traditional aggregation queries as approximate results. Fourth, it is one of the most frequently-used representations supporting various data mining operations [4]–[6]. If SWAB were used in the network, it would need to collect all the raw measurements to obtain the initial linear segments; to best serve our purpose without any modification, our online linear forecast method can be used to replace the bottom half of the algorithm that produces initial linear segments, with no delay.

¹ Such representations include Discrete Fourier Transformation (DFT), Discrete Wavelet Transformation (DWT), Singular Value Decomposition (SVD), Piecewise Aggregate Approximation (PAA), Piecewise Constant Approximation, and Piecewise Linear Representation (PLR), proposed in [5], [17], [18] and references thereof.
² Much work has been reported in the literature allowing a new linear segment to be built from existing linear segments [4]–[6].

Let X and X̂ denote two n-vectors of the underlying time-series data and of the time-series estimated by the corresponding linear fit, with elements X(i) and X̂(i), i = 1, ..., n, respectively. Depending on the forecast quality measure, the segmentation problem can be defined, for example, to produce the best representation such that the maximum error of any segment does not exceed a user-specified threshold [5].

L∞ metric. This quality metric, defined as max_{1≤i≤n} |X(i) − X̂(i)|, specifies that the maximum forecast error of any segment should not exceed a user-specified threshold. It is probably the most popular metric in various applications, such as data mining (see Table 5 in [5]) and data-archiving applications that compress time-series data [19]. A similar concept of bounded error-tolerance is also used in approximate monitoring and aggregation applications [8], [9]. Thus, the L∞ metric can serve both the in-network data aggregation and in-network data mining services simultaneously. Using L∞, the segmentation is governed by

  L∞(X, X̂) = max_{1≤i≤n} |X(i) − X̂(i)| ≤ ε

where ε is a user-specified threshold.

C∞ metric. This quality metric, defined as Σ_{i=1}^{n} (X(i) − X̂(i)), specifies that the cumulative residual error of any segment should not exceed a user-specified threshold. It is in fact a CUSUM (cumulative sum) metric, one of the most frequently-used metrics for quality control in environmental and manufacturing process management [22], where it is typically controlled from both below and above, or a dynamic confidence band can be defined [21]. This metric is even used in a continuous patient monitoring application in an intensive care unit, where the cumulative residual errors are bounded [20]. Such areas will become major sensor network applications, as explored by the CodeBlue project [23]. Using the C∞ metric, the segmentation is governed by

  |C∞(X, X̂)| = |Σ_{i=1}^{n} (X(i) − X̂(i))| ≤ ε

where ε is a user-specified threshold.

In general, the specification of the forecast quality measure depends largely on application requirements. Although these examples are not exhaustive, they are frequently adopted by many data mining and aggregation applications that deal with time-series data; in particular, the first two specifications are so widely used that we incorporate them as our forecast quality measures.

Figure 1 illustrates the application of a piece-wise linear forecast method to real-world data with a maximum permissible forecast error of 10. Solid circles denote forecast estimates, empty circles denote actual measurements, and the bars at the bottom represent trend-change points. As can be seen from the figure, the segmentation of the actual time-series data is governed by the forecast quality metric ε: a linear segment can grow unbounded as long as it satisfies the forecast quality measure, but it may degenerate to a single point if it predicts badly or if the underlying time-series shows random fluctuations, as with some forecasts in the sampling interval between 40 and 65, which are unexpected outliers. Depending on the application, such random noise can be ignored with some loss of information about the original time-series data.

Fig. 1. Illustration of piece-wise linear segmentation with a maximum permissible forecast error ε = 10.

B. Problem Statement

The problem we want to solve is formally stated as follows.

Problem 1: Design an energy-efficient self-adaptive online linear forecast method: given an infinite time-series xi, i = 1, 2, ..., and an application-specified forecast quality threshold ε, minimize the number of trend changes under the given quality measure. The method produces a sequence of piece-wise linear segments, where each piece-wise linear fit to the underlying original time-series data xT, xT+1, ..., xT+lT−1 is represented by x̂T(T + τ), τ = 0, ..., lT − 1, such that x̂T(T + τ) = â1(T) + τ b̂2(T) and

  f(<xT, ..., xT+lT−1>, <x̂T, ..., x̂T+lT−1>) ≤ ε

where x̂T+τ is an estimate of the actual measurement xT+τ from the linear fit x̂T(T + τ) built at time T, and f is the application-specified forecast quality measure governing the piece-wise linear segmentation. Note that lT, the length of a linear segment, is not a decision parameter but is decided automatically by the segmentation.

Since the method is intended to be used as a middleware component of distributed wireless sensors, the solution should be energy-efficient, adaptive, lightweight, and online:
• Energy-efficient: the solution should minimize the number of trend changes.
• Adaptive: the solution should self-adjust the model parameters of a linear trend, the slope and the intercept, according to the observed forecast errors.
• Lightweight: the solution should incur O(1) space and time overheads in extracting linear trend information.
• Online: there should be no time lag between the availability of a linear trend fit and the underlying time-series data; a new linear trend is available as soon as the current linear trend violates the forecast quality measure.
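To make the two quality checks concrete, the following is a minimal Python sketch (the paper gives only the formulas; the function names are ours) of segment-level validators for the L∞ and C∞ metrics defined above:

```python
def linf_ok(x, x_hat, eps):
    # L-infinity metric: the maximum absolute forecast error over the
    # segment must not exceed the user-specified threshold eps.
    return max(abs(a - b) for a, b in zip(x, x_hat)) <= eps

def cinf_ok(x, x_hat, eps):
    # C-infinity (CUSUM) metric: the magnitude of the cumulative
    # residual error over the segment must not exceed eps.
    return abs(sum(a - b for a, b in zip(x, x_hat))) <= eps
```

Note that residuals of opposite sign cancel under C∞ but not under L∞, which is why a segment can satisfy the C∞ constraint while violating the L∞ constraint for the same ε.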

IV. DESIGN OF ENERGY-EFFICIENT SELF-ADAPTING ONLINE LINEAR FORECAST

A. Preliminary

Least-Square-Error based Linear (LSEL) forecast. An intuitive approach to extracting linear trend information is to apply the linear regression method, which identifies the model parameters from the past W measurements. A W-period moving linear regression with O(W) space and time requirements is built as follows. Suppose the current linear trend is no longer valid at time t ≡ te. Then the past W measurements xi, i ∈ [tb, te], W ≡ te − tb + 1, are used to extract the parameters of a new linear trend. The intercept â0 and the slope b̂2 are calculated as follows [3]:

  slope: b̂2(t) = Σ_{i=tb}^{te} (i − t̄)(xi − x̄) / νt
  intercept: â0(t) = x̄ − b̂2(t) t̄

where t̄ = Σ_{i=tb}^{te} i / W, x̄ = Σ_{i=tb}^{te} xi / W, and νt = Σ_{i=tb}^{te} (i − t̄)². The intercept at the origin shifted to time t, â1(t), is given by â0(t) + t b̂2(t). Thus, the τ-step-ahead linear forecast x̂t(τ) of future measurements after t is given by â1(t) + τ b̂2(t) = (â0(t) + t b̂2(t)) + τ b̂2(t), τ = 0, 1, .... The constructed linear trend holds as long as it satisfies the quality constraint specified by the L∞ or C∞ metric; otherwise, a new trend is derived by re-applying the linear regression method. Although the thus-constructed linear trend is guaranteed to minimize the sum of the squared errors over the past measurements, it is unclear whether it can still produce a good predictive linear fit to future measurements.

Non-seasonal Holt-Winters Linear (NHWL) forecast. A commonly-used linear trend forecast algorithm with O(1) space and time overheads is NHWL [24]. It has been widely applied to detect aberrant network behavior [21], [25] for its simplicity. It uses exponential smoothing to extract the model parameters of a linear trend, with separate smoothing parameters α, β (0 < α, β < 1) for the intercept and slope updates at time t:

  intercept: â1(t) = α xt + (1 − α)(â1(t − 1) + b̂2(t − 1))
  slope: b̂2(t) = β (â1(t) − â1(t − 1)) + (1 − β) b̂2(t − 1)

where the determination of the smoothing parameters is left to the user or application. Then the τ-step-ahead linear forecast x̂t(τ) of future measurements after t is given by â1(t) + τ b̂2(t), τ = 0, 1, ....

Double-Exponential-Smoothing based Linear (DESL) forecast. Another O(1) space and time linear forecast algorithm that uses exponential smoothing is DESL [3]. Given a smoothing constant α, DESL requires that the following exponential smoothing statistics St and St[2] be updated at every sampling period [3]:

  St = α xt + (1 − α) St−1
  St[2] = α St + (1 − α) St−1[2]

where the notation St[2] implies double exponential smoothing, not the square of a single smoothed statistic. Then, at any time t, the following relations are known [3]:

  intercept: â1(t) = 2 St − St[2]
  slope: b̂2(t) = (α / (1 − α)) (St − St[2])

Equivalently [3]:

  slope: b̂2(t) = α (St − St−1) + (1 − α) b̂2(t − 1)
  intercept: â1(t) = St + ((1 − α) / α) b̂2(t)

Note that in the equivalent form the slope is computed first and is then used to calculate the intercept. The τ-step-ahead linear forecast x̂t(τ) of future measurements after t is again given by â1(t) + τ b̂2(t), τ = 0, 1, .... DESL estimates the slope by calculating the smoothed difference between two successive smoothed inputs and, unlike LSEL, which minimizes the sum of the squared errors, it is shown in [3] that DESL minimizes the weighted sum of the squared errors. Moreover, it has only a single smoothing parameter, α.

B. The Proposed Linear Forecast Methods

Directly Smoothed Slope based Linear (DSSL) forecast. Suppose that the current linear trend was built at time to with intercept â1(to). Then, at any time t after to in the sampling period, a slope estimate st is defined by

  st = (xt − â1(to)) / (t − to).

Recall that in NHWL the estimate of the slope is simply the smoothed difference between two successive estimates of the intercept; that is, the difference between two successive estimates in NHWL or DESL can be considered an estimate of the instantaneous slope between two successive measurements in the sampling period. DSSL extends the interval of this instantaneous slope calculation from two successive sampling measurements to two successive linear segments. It has two major differences from the above two algorithms. First, DSSL estimates the slope by anchoring its base to the intercept of the current linear trend. Second, DSSL applies the same update procedures for the slope and the intercept as in NHWL, except for using st rather than the successive difference of two intercept estimates:

  intercept: â1(t) = α xt + (1 − α)(â1(t − 1) + b̂2(t − 1))
  slope: b̂2(t) = β st + (1 − β) b̂2(t − 1)

Then the τ-step-ahead linear forecast x̂t(τ) of future measurements after t is given by â1(t) + τ b̂2(t), τ = 0, 1, .... In some sense, DSSL is logical and intuitive, but it is not based on sound criteria like the least square errors or the weighted least square errors.

Directly Averaged Slope based Linear (DASL) forecast. DASL uses another incremental relationship based on st:

  intercept: â1(t) = xt
  slope: b̂2(t) = (1 / (t − to)) Σ_{i=to+1}^{t} si = b̂2(t − 1) + (st − b̂2(t − 1)) / (t − to)

Then the τ-step-ahead linear forecast x̂t(τ) of future measurements after t is given by â1(t) + τ b̂2(t), τ = 0, 1, ....
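As a concrete illustration, the following Python sketch (naming is ours; the paper gives only the recurrences above) implements one update step of NHWL, DESL, and the proposed DSSL, each returning the new intercept and slope estimates:

```python
def nhwl_step(x_t, a_prev, b_prev, alpha, beta):
    # NHWL: smooth the intercept against the previous linear
    # extrapolation, then smooth the slope from the difference of
    # two successive intercept estimates.
    a_t = alpha * x_t + (1 - alpha) * (a_prev + b_prev)
    b_t = beta * (a_t - a_prev) + (1 - beta) * b_prev
    return a_t, b_t

def desl_step(x_t, s_prev, s2_prev, alpha):
    # DESL: update the single and double exponential smoothing
    # statistics, then recover the intercept and slope.
    s_t = alpha * x_t + (1 - alpha) * s_prev
    s2_t = alpha * s_t + (1 - alpha) * s2_prev
    a_t = 2 * s_t - s2_t
    b_t = (alpha / (1 - alpha)) * (s_t - s2_t)
    return a_t, b_t, s_t, s2_t

def dssl_step(x_t, t, t_o, a_o, a_prev, b_prev, alpha, beta):
    # DSSL: the slope estimate s_t is anchored at the intercept a_o of
    # the current linear trend built at time t_o, then smoothed as in
    # NHWL in place of the successive intercept difference.
    s_t = (x_t - a_o) / (t - t_o)
    a_t = alpha * x_t + (1 - alpha) * (a_prev + b_prev)
    b_t = beta * s_t + (1 - beta) * b_prev
    return a_t, b_t

def forecast(a_t, b_t, tau):
    # tau-step-ahead linear forecast after time t.
    return a_t + tau * b_t
```

With α = β = 1 the smoothing degenerates to using the latest observation directly, which makes the recurrences easy to check by hand.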

C. Intercept Correction under the Quality Metric

All the algorithms described above are susceptible to violating the L∞ metric constraint, especially when the new intercept estimate is above or below the actual measurement by more than the specified L∞ constraint ε. To avoid this problem, we set the new intercept estimate to the current measurement xt, implying that a new linear trend always starts with the actual measurement that caused the violation of the specified forecast quality metric. Notice that any linear trend fit is valid, and equivalent to any other, as long as it satisfies the user-specified L∞ or C∞ metric. Nonetheless, we adhere to the same policy even when using the C∞ metric, in order to keep the algorithms simple.

V. EVALUATION

We now evaluate the proposed linear forecast methods on various real-world and synthetic time-series data. Some of the data used in our evaluation were obtained from the following sources.

• The SensorScope project [26], an indoor wireless sensor network testbed at the EPFL (École Polytechnique Fédérale de Lausanne) campus. The network currently consists of around 20 TinyOS mica2 and mica2dot motes equipped with a variety of sensor boards such as light, temperature, and acoustic sensors. A total of 100,000 temperature and light readings were collected from the SensorScope project during the period of Nov. 25 through Dec. 31, 2004.

• The tsunami research program at PMEL (Pacific Marine Environmental Laboratory) of NOAA (National Oceanic and Atmospheric Administration) [27]. The program focuses on improving tsunami warning and mitigation, and seeks to mitigate tsunami hazards to Hawaii, California, Oregon, Washington, and Alaska. For the water-level samples, we use the archived measurement data [28] from several sites participating in the program.

In addition, synthetic random-walk time-series data are generated by the relation x1 = 20, xt+1 = xt + U(−5, 5). Figure 2 shows snapshots of 10,000 readings of both the real-world and synthetic data. Table I summarizes the statistical properties (mean, std, range, msd, and stdsd) of the total of 100,000 samples per category used in our evaluation. The mean, standard deviation (std), and range are defined as usual; the msd statistic, short for the mean successive difference, is defined as Σ_{i=2}^{n} |xi − xi−1| / (n − 1); and the stdsd statistic is the standard deviation of the successive difference between samples.

The 100,000 samples in each category are grouped into 200 non-overlapping sub-groups of 500 samples each. Using these 500 samples per group, we evaluate each algorithm on the performance metrics below, and the results are averaged over the 200 independent runs.

• Communication overhead: the number of trend changes under the user-specified L∞ or C∞ metric, calculated as the fraction of sampling points that trigger a linear trend change. Since every trend change must be communicated, this performance measure is directly related to the energy consumed by communication, a major source of energy consumption in wireless sensor networks.

• The mean absolute deviation: the accuracy of the predicted values as compared to the actual values. A linear trend construction algorithm minimizing the mean absolute deviation is preferred in that it produces a better linear fit to the underlying time series.

In plotting the performance results, all results are normalized by those of NHWL; we use this relative performance since we are interested in deciding which method is a better fit to the underlying time series. The tolerance bound ε, in units of the mean absolute deviation, is used when evaluating the relative mean absolute deviation of each method.

A common problem with any exponential smoothing system is that it is difficult to determine an appropriate smoothing parameter under which the system is expected to yield better performance. To circumvent this difficulty, in specifying the smoothing parameters for the forecast methods other than LSEL and DASL, we use the relationship between an exponential smoothing system with
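The synthetic data generator and the msd statistic described above can be sketched as follows (the seeding and function names are ours):

```python
import random

def random_walk(n, seed=None):
    # Synthetic random walk per Section V: x_1 = 20, x_{t+1} = x_t + U(-5, 5).
    rng = random.Random(seed)
    xs = [20.0]
    for _ in range(n - 1):
        xs.append(xs[-1] + rng.uniform(-5.0, 5.0))
    return xs

def msd(xs):
    # Mean successive difference: sum_{i=2}^{n} |x_i - x_{i-1}| / (n - 1).
    return sum(abs(b - a) for a, b in zip(xs, xs[1:])) / (len(xs) - 1)
```

Since the tolerance bound ε is specified in units of msd, a per-dataset msd of this kind fixes the absolute error budget that each integer value of ε corresponds to.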

Fig. 2. Snapshots of the time series (raw) data used in our evaluation: (a) Temperature, (b) Light, (c) Water-level, (d) Random-walk.

Any forecast method is equivalent as long as it satisfies the user-specified L∞ or C∞ metric; thus, any method described in the paper can be used as a valid forecast method. However, it is still worth investigating how well each method fits the underlying time series, in terms of the number of trend changes or the number of linear segments produced. The tolerance bound ε in the L∞ metric is specified in units of integer multiples of the mean absolute deviation (msd). Note also that an exponential smoothing parameter α can be related to a W-period moving-window system: making the average age of the data in both systems equal gives the following result [3]: α = 2/(W + 1).

A. Communication overhead

Figure 3 shows the relative communication overhead of all the forecast methods under the L∞ metric, normalized with respect to NHWL. The results are obtained with the parameters W = 2 (LSEL), α = 2/(W + 1) = 0.67 (NHWL, DSSL), and β = 0.67 (DESL, DSSL). Similar results are also observed with the C∞ metric, as shown in Figure 4. We leave out the evaluation result of any method whose relative performance with respect to NHWL exceeds 150%; in particular, LSEL is not plotted in Figure 3(a) and Figure 3(c), since its relative performance is almost always beyond 150% in those two cases.

From both figures, we can draw the following conclusions. First, DSSL shows the best performance in almost all the cases evaluated; the proposed DSSL and DASL save the communication overhead by up to 50% as compared to NHWL or DESL. For some data set, DASL performs as good as, and even better than, DSSL. Second, NHWL and DESL show almost the same performance, due to their similarity in the slope estimation. Third, the forecast method based on the linear regression (LSEL) shows the worst performance in all the cases evaluated, challenging its adequacy as a good forecast method in our context. Fourth, the overhead shown in both Figure 3 and Figure 4 appears to increase as the tolerance bound ε gets larger; this does not mean that the actual absolute overhead increases proportionally, but only reflects the relative performance among the forecast methods.

Recall that a short-term memory in DSSL or NHWL means that the forecast model is more responsive to recent forecast errors; this will be shown later in the sensitivity analysis. Overall, the evaluation of the communication overhead shows that the proposed DSSL outperforms the other methods in almost all the data sets evaluated, resulting in energy savings of up to 20%∼50% and making it promising as an energy-efficient linear forecast method in wireless sensor networks.

B. The mean absolute deviation

Figure 5 and Figure 6 show the evaluation results measuring the msd of each method with respect to the input tolerance bound ε, under the L∞ and the C∞ metric, respectively. As can be seen in both figures, all the forecast methods show similar features, except for the Light readings. Under the L∞ metric, the msd's stay between 30%∼40%, whereas the deviation stays below 20% under the C∞ metric. In general, the more often a trend change occurs, the closer the forecast is to the underlying time series, and the smaller the deviation; such a difference between the two forecast quality measures thus relates back to the communication overhead results in Figure 3 and Figure 4. Specifically, under the L∞ metric, DSSL performs fairly well as compared to the best method in each data set, contributing up to about 20% energy savings as compared to NHWL.

C. Sensitivity to the smoothing parameters

Finally, we perform the sensitivity analysis of DSSL with respect to the smoothing parameters α and β. Note that in the DSSL method, α is used as the intercept smoothing parameter, whereas β is used as the slope smoothing parameter. Figure 7 and Figure 8 show the evaluation results under the L∞ metric: Figure 7 is obtained with the same single smoothing parameter (α = β), while Figure 8, which evaluates the sensitivity of the slope smoothing only, uses a constant α and a varying β. Another evaluation was conducted for the C∞ metric, but its result is omitted in the paper due to both the space limit and the similar features; the same conclusion would be drawn.

When experimented with the same α and β, DSSL turns out to be better than NHWL no matter what combination of the α and β smoothing parameters is chosen. When experimented with a constant α and a varying β, a short-term memory is better in all the cases. Notice that a very short-term memory, i.e., one weighing more recent measurements more heavily, is used for the forecasting evaluation in both figures; indeed, the result in Figure 7 shows that a short-term memory is sufficient to obtain a good forecast method in most cases, while a long-term memory does not overturn the conclusion drawn above. It is therefore concluded that DSSL with a short memory outperforms, most of the time, any other method investigated in the paper, regardless of the quality metric.
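The W-to-α mapping used in the sensitivity analysis (α = 2/(W + 1) for W = 2, 4, 6, 8) follows from equating the average age of the data in the two systems: a W-period moving average keeps its last W samples with equal weight, for an average data age of (W − 1)/2, while exponential smoothing with parameter α weights the sample of age k by α(1 − α)^k, for an average age of (1 − α)/α [3]. A small numerical check (illustrative code, not from the paper):

```python
def moving_average_age(W):
    # A W-period moving average weights the last W samples equally,
    # so the average age of its data is (0 + 1 + ... + (W-1)) / W = (W-1)/2.
    return sum(range(W)) / W

def exponential_smoothing_age(alpha, terms=10_000):
    # Exponential smoothing weights the sample of age k by alpha*(1-alpha)^k;
    # summing k times that weight converges to the average age, (1-alpha)/alpha.
    return sum(k * alpha * (1 - alpha) ** k for k in range(terms))

for W in (2, 4, 6, 8):
    alpha = 2 / (W + 1)  # the equivalence cited from [3]
    assert abs(moving_average_age(W) - exponential_smoothing_age(alpha)) < 1e-6
```

For W = 2 this gives α = 0.67, the short-memory setting used throughout Figures 3–6.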

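The communication overhead counted in Figures 3 and 4 has a simple operational reading: a node keeps a smoothed linear model locally and ships a new (intercept, slope) pair to the sink only when the sink's extrapolation misses the latest reading by more than the tolerance; every shipment is one trend change. The sketch below assumes a generic Holt-style double-exponential smoother (α smoothing the intercept, β the slope, as in DSSL); the function and variable names are illustrative, not the paper's implementation:

```python
def count_trend_changes(series, alpha, beta, eps):
    # Node-side double-exponential (Holt-style) smoothing: alpha smooths
    # the level (intercept), beta smooths the slope.
    level, slope = series[0], 0.0
    # Linear trend last shipped to the sink; the sink extrapolates it.
    sent_level, sent_slope, sent_at = level, slope, 0
    changes = 1  # the initial trend counts as one transmission
    for t in range(1, len(series)):
        x = series[t]
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + slope)
        slope = beta * (level - prev_level) + (1 - beta) * slope
        # Sink-side forecast from the last transmitted trend.
        forecast = sent_level + sent_slope * (t - sent_at)
        if abs(x - forecast) > eps:  # L-infinity-style tolerance violated
            sent_level, sent_slope, sent_at = level, slope, t
            changes += 1
    return changes

# A constant series never violates the tolerance after the first send;
# a linear ramp needs only a couple of sends once the slope is learned.
assert count_trend_changes([1.0] * 50, alpha=0.67, beta=0.67, eps=0.5) == 1
```

Fewer trend changes means fewer radio transmissions, which is the quantity the communication-overhead figures normalize against NHWL.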
DSSL).67 (NHWL. DSSL). DSSL).67 (DSEL.w=6 DSSL / NHWL. α = 2/(W + 1) = 0. 3.67 (DESL. 4.67 (NHWL.w=6 DSSL / NHWL. 150 150 150 150 communication overhead (%) communication overhead (%) communication overhead (%) communication overhead (%) 100 100 100 100 50 DESL NHWL LSEL DSSL DASL 50 DESL NHWL LSEL DSSL DASL 50 DESL NHWL LSEL DSSL DASL 50 DESL NHWL LSEL DSSL DASL 0 0 2 4 6 tolerance error (ε) 8 10 0 0 2 4 6 tolerance error (ε) 8 10 0 0 2 4 6 tolerance error (ε) 8 10 0 0 2 4 6 tolerance error (ε) 8 10 (a) Temperature (b) Light (c) Water-level (d) Random-walk Fig. and β = 0. 5. the speciﬁed tolerance error ε (in unit of the integer times msd) under L∞ metric with parameters of W = 2 (LSEL).w=2 DSSL / NHWL. with respect to the tolerance error) vs.67. The DSSL communication overhead relative to NHWL vs.w=4 DSSL / NHWL. the speciﬁed tolerance error ε (in unit of the integer times msd) under the C∞ metric.w=4 DSSL / NHWL.w=8 communication overhead (%) communication overhead (%) communication overhead (%) 100 80 60 40 20 0 0 DSSL / NHWL. The mean absolute deviation (%. with respect to the NHWL) vs. α = 2/(W + 1) = 0. DSSL).w=2 DSSL / NHWL.w=2 DSSL / NHWL. 0. the speciﬁed tolerance error ε (in unit of the integer times msd) under C∞ metric with parameters of W = 2 (LSEL). NHWL.67 (NHWL. and β = 0. 6.150 150 150 150 communication overhead (%) communication overhead (%) communication overhead (%) communication overhead (%) 100 100 100 100 50 DESL NHWL LSEL DSSL DASL 50 DESL NHWL LSEL DSSL DASL 50 DESL NHWL LSEL DSSL DASL 50 DESL NHWL LSEL DSSL DASL 0 0 2 4 6 tolerance error (ε) 8 10 0 0 2 4 6 tolerance error (ε) 8 10 0 0 2 4 6 tolerance error (ε) 8 10 0 0 2 4 6 tolerance error (ε) 8 10 (a) Temperature (b) Light (c) Water-level (d) Random-walk Fig. Communication overhead (%. DSSL).w=8 2 4 6 tolerance error (ε) 8 10 2 4 6 tolerance error (ε) 8 10 2 4 6 tolerance error (ε) 8 10 2 4 6 tolerance error (ε) 8 10 (a) Temperature (b) Light (c) Water-level (d) Random-walk Fig. 4. . 
0.40.28. NHWL. 7. the speciﬁed tolerance error ε (in unit of the integer times msd) under L∞ metric with parameters of W = 2. NHWL.w=4 DSSL / NHWL. with parameters of W = 2 (LSEL). 0. 120 120 120 120 communication overhead (%) 100 80 60 40 20 0 0 DSSL / NHWL.67 (DSEL. α = 2/(W + 1) = 0. and β = 0.w=8 100 80 60 40 20 0 0 DSSL / NHWL. 8 and α = β = 2/(W + 1) = 0.67 (DSEL. Communication overhead (%.67 (NHWL. NHWL. the speciﬁed tolerance error ε (in unit of the integer times msd) under the L∞ metric.w=6 DSSL / NHWL. with respect to the NHWL) vs.w=4 DSSL / NHWL. DSSL).w=6 DSSL / NHWL. DSSL).w=2 DSSL / NHWL. with respect to the tolerance error) vs.22. 50 50 50 50 mean absolute deviation (%) 40 30 20 10 0 40 30 20 10 0 40 30 20 10 0 mean absolute deviation (%) mean absolute deviation (%) mean absolute deviation (%) DESL NHWL LSEL DSSL DASL DESL NHWL LSEL DSSL DASL DESL NHWL LSEL DSSL DASL 40 30 20 10 0 DESL NHWL LSEL DSSL DASL 0 2 4 6 tolerance error (ε) 8 10 0 2 4 6 tolerance error (ε) 8 10 0 2 4 6 tolerance error (ε) 8 10 0 2 4 6 tolerance error (ε) 8 10 (a) Temperature (b) Light (c) Water-level (d) Random-walk Fig. 6. with parameters of W = 2 (LSEL). DSSL). The mean absolute deviation (%. α = 2/(W + 1) = 0.w=8 100 80 60 40 20 0 0 DSSL / NHWL. and β = 0. 50 50 50 50 mean absolute deviation (%) 40 30 20 10 0 40 30 20 10 0 40 30 20 10 0 mean absolute deviation (%) mean absolute deviation (%) mean absolute deviation (%) DESL NHWL LSEL DSSL DASL DESL NHWL LSEL DSSL DASL DESL NHWL LSEL DSSL DASL 40 30 20 10 0 DESL NHWL LSEL DSSL DASL 0 2 4 6 tolerance error (ε) 8 10 0 2 4 6 tolerance error (ε) 8 10 0 2 4 6 tolerance error (ε) 8 10 0 2 4 6 tolerance error (ε) 8 10 (a) Temperature (b) Light (c) Water-level (d) Random-walk Fig.
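The deviation metric plotted in Figures 5 and 6 can be reproduced as the mean absolute gap between the actual readings and the sink-side forecast, expressed as a percentage of the tolerance bound ε. A minimal sketch with toy numbers (names are illustrative):

```python
def msd_percent(actual, forecast, eps):
    # Mean absolute deviation between readings and forecast,
    # reported as a percentage of the tolerance bound eps.
    mad = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return 100.0 * mad / eps

# Toy example: forecasts staying within +-1 of the readings, eps = 4.
actual   = [10.0, 11.0, 12.5, 13.0, 15.0]
forecast = [10.0, 11.5, 12.0, 13.5, 14.0]
print(msd_percent(actual, forecast, eps=4.0))  # 12.5
```

A tighter-tracking forecast lowers this percentage, at the cost of more frequent trend transmissions.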

Fig. 8. The DSSL communication overhead relative to NHWL vs. the specified tolerance error ε (in units of integer multiples of the msd) under the L∞ metric, with parameters of α = 0.67 and β = 2/(W + 1) = 0.67, 0.40, 0.28, 0.22. (a) Temperature, (b) Light, (c) Water-level, (d) Random-walk.

VI. CONCLUSION

In this paper, we proposed energy-efficient, self-adapting, online linear forecast methods for various wireless sensor network applications. The proposed methods self-adjust the model parameters using the forecast errors observed via experimental measurements, and minimize the number of trend changes for a given forecast quality metric. Moreover, the computation involved in the self-adjustment incurs O(1) space and time overheads, which is critically important for resource-limited wireless sensors. An extensive simulation study based on both real-world and synthetic random-walk time-series data shows that the proposed methods reduce the number of trend changes by 20∼50% compared to the other existing methods. It is also empirically shown that forecasting based on linear regression is not suitable, both qualitatively and quantitatively, whereas forecasting based on smoothing techniques can be a general solution for various sensor network applications. In addition, DASL, performing as good as DSSL with a short memory, can be chosen as an alternative to DSSL without worrying about the determination of smoothing parameters.

We plan to augment the TinyDB, a de-facto standard data aggregation application in the TinyOS-based sensor networks, with our forecast capability. The augmented TinyDB is then expected to support both in-network aggregation and data mining with a more unified framework. We will also design a robust linear forecast method to combat measurement noises. Finally, we need to develop a new definition of a robust forecast quality metric for various sensor network applications, especially for data aggregation and mining.

REFERENCES

[1] P. Brockwell and R. Davis, "Introduction to Time Series and Forecasting," Springer, 1996.
[2] J. Brutlag, "Aberrant behavior detection in time series data for network monitoring," in Proceedings of the 14th USENIX Systems Administration Conference (LISA), 2000.
[3] D. Montgomery, L. Johnson, and J. Gardiner, "Forecasting and Time Series Analysis," 2nd Edition, McGraw-Hill Book Company, 1990.
[4] E. Keogh and P. Smyth, "A probabilistic approach to fast pattern matching in time series databases," in Third International Conference on Knowledge Discovery and Data Mining, 1997.
[5] E. Keogh and M. Pazzani, "An enhanced representation of time series which allows fast and accurate classification, clustering and relevance feedback," in Fourth International Conference on Knowledge Discovery and Data Mining (KDD'98), 1998.
[6] T. Palpanas, M. Vlachos, E. Keogh, D. Gunopulos, and W. Truppel, "Online amnesic approximation of streaming time series," in International Conference on Data Engineering, March 2004.
[7] S. Madden, M. Franklin, J. Hellerstein, and W. Hong, "TAG: a tiny aggregation service for ad-hoc sensor networks," in Symposium on Operating System Design and Implementation, Dec. 2002.
[8] C. Olston, B. T. Loo, and J. Widom, "Adaptive precision setting for cached approximate values," in Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data (SIGMOD '01), 2001.
[9] C. Olston, J. Jiang, and J. Widom, "Adaptive filters for continuous queries over distributed data streams," in Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data (SIGMOD '03), 2003.
[10] X. Yu, S. Mehrotra, and N. Venkatasubramanian, "Approximate monitoring in wireless sensor networks," http://www.ics.uci.edu/~xyu/pub/aggXYU.pdf.
[11] A. Deligiannakis, Y. Kotidis, and N. Roussopoulos, "Hierarchical in-network data aggregation with quality guarantees," in Proceedings of the 9th International Conference on Extending DataBase Technology (EDBT), March 2004.
[12] J. Zhao, R. Govindan, and D. Estrin, "Computing aggregates for monitoring wireless sensor networks," in Proceedings of the IEEE SNPA'03, 2003.
[13] E. Keogh, "Fast similarity search in the presence of longitudinal scaling in time series databases," in ICTAI, 1997.
[14] E. Keogh, S. Chu, D. Hart, and M. Pazzani, "An online algorithm for segmenting time series," in Proceedings of the IEEE International Conference on Data Mining (ICDM'01), 2001.
[15] Y. Yao and J. Gehrke, "Query processing in sensor networks," in 1st Biennial Conference on Innovative Data Systems Research (CIDR), 2003.
[16] J. Lin, E. Keogh, S. Lonardi, and B. Chiu, "A symbolic representation of time series, with implications for streaming algorithms," in Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD '03), 2003.
[17] E. Keogh, K. Chakrabarti, M. Pazzani, and S. Mehrotra, "Locally adaptive dimensionality reduction for indexing large time series databases," in Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data (SIGMOD '01), 2001.
[18] Y. Chen, G. Dong, J. Han, B. Wah, and J. Wang, "Multi-dimensional regression analysis of time-series data streams," in Proc. of the 28th VLDB Conference, 2002.
[19] I. Lazaridis and S. Mehrotra, "Capturing sensor-generated time series with quality guarantees," in International Conference on Data Engineering, 2003.
[20] S. Charbonnier, G. Becq, and L. Biot, "On-line segmentation algorithm for continuously monitored data in intensive care units," IEEE Transactions on Biomedical Engineering, 2004.
[21] J. LaViola, "Double exponential smoothing: an alternative to Kalman filter-based predictive tracking," in International Immersive Projection Technologies Workshop, 2003.
[22] D. Hawkins and D. Olwell, "Cumulative Sum Charts and Charting for Quality Improvement," Springer, 1997.
[23] CodeBlue, "Wireless sensor networks for medical care," http://www.eecs.harvard.edu/~mdw/proj/codeblue/.
[24] P. Zarchan and H. Musoff, "Fundamentals of Kalman Filtering: A Practical Approach," AIAA (American Institute of Aeronautics and Astronautics).
[25] B. Krishnamurthy, S. Sen, Y. Zhang, and Y. Chen, "Sketch-based change detection: methods, evaluation, and applications," in Proceedings of the 3rd ACM SIGCOMM Conference on Internet Measurement (IMC '03), Oct. 2003.
[26] EPFL, "The SensorScope project," http://sensorscope.epfl.ch/.
[27] NOAA, "The tsunami research program," http://www.pmel.noaa.gov/tsunami.
[28] ——, "Tsunami events and data," http://www.pmel.noaa.gov /ftp /tsunami /database /t20030925hokkaido /nos.
