
Module Code & Module Title

CS6P05NI Final Year Project

Assessment Weightage & Type


10% FYP Project Artifact

Semester
2020 Spring

Student Name: Prasis Poudel


London Met ID: 18029178
College ID: np01cp4a180322
Internal Supervisor: Mr. Roshan Tandukar
External Supervisor: Mr. Amulya Lohani
Assignment Due Date: 26/4/2021
Assignment Submission Date: 26/4/2021

I confirm that I understand my coursework needs to be submitted online via Google Classroom under the
relevant module page before the deadline in order for my assignment to be accepted and marked. I am
fully aware that late submissions will be treated as non-submission and a mark of zero will be awarded.
All the research evidence is provided within this section as screenshots of PDF documents, generated after converting all the evidence to PDF format.

Kalman filtered GPS accelerometer-based accident detection and location system: A low-cost approach

Article in Current Science · June 2014

4 authors, including: Md. Syedul Amin, Mamun Bin Ibne Reaz (Universiti Kebangsaan Malaysia) and Mohammad Arif Sobhan Bhuiyan (Xiamen University Malaysia).

Uploaded by Mohammad Arif Sobhan Bhuiyan on 25 June 2014.


RESEARCH COMMUNICATIONS

Kalman filtered GPS accelerometer-based accident detection and location system: a low-cost approach

Md. Syedul Amin, Mamun Bin Ibne Reaz, Mohammad Arif Sobhan Bhuiyan and Salwa Sheikh Nasir

Department of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, 43600 UKM, Bangi, Selangor, Malaysia

*For correspondence. (e-mail: syedul8585@yahoo.com)

A low-cost accident detection system utilizing cheap ADXL345 accelerometers and a GPS receiver is proposed in this communication. The accident detection algorithm was developed based on sudden deceleration. The double integration of the acceleration and the heading from the tilt angles of the accelerometers were used to determine the location. A Kalman filter was utilized to correct the accumulated double-integration errors with the trusted GPS data. The field tests demonstrated the correct functioning of the accident detection algorithm and location. The proposed low-cost system can save many lives by automated accident detection and accurate location even during GPS outage.

Keywords: Accelerometer, accident detection, GPS receivers, Kalman filter.

THE automobile is one of the greatest inventions that has become an essential part of our daily activities. However, it can bring about disaster and can even kill us through accidents. Vehicle accident is a vital problem in today's world1. Nearly 1.3 million people die in road crashes each year, which is 3,287 deaths per day, leaving 20–50 million injured or disabled2. Despite various efforts by different agencies against careless driving, accidents keep taking place. About 6% of accident fatalities could have been eliminated if the accident information had been divulged at the right time3. This can be achieved by an efficient automatic accident detection and notification system with the correct location of the accident.

Accident detection is a widely researched topic. Real-time traffic accident prediction relates accident occurrences from various detectors such as induction loops, infrared detectors, cameras, etc. However, these are restricted by the number of sensors, algorithms, traffic flow, weather, etc. A driver-initiated incident detection system has more advantages than a manual incident detection system. However, during an accident, the driver may not be in a physical condition to report the accident manually. Chuan-Zhi et al.4 proposed a freeway incident detection system utilizing the car air bag sensor and accelerometer, GPS to locate the place of the accident and GSM to send the accident location. However, car air bag sensors are not installed in all cars and the location, at times, may not be known due to GPS outage.

A smartphone-based accident detection system has been proposed by White et al.5; it relies on expensive smartphones and suffers from the possibility of false alarms. An impact sensor-based accident detection and wireless module-based reporting system proposed by Megalingam et al.6 largely depends on the huge installation of wireless receivers at short intervals. A solely GPS-based accident detection and location system was proposed by Amin et al.7. The speed capability of GPS was used for accident detection, and the accident location was reported from the GPS position through GSM. However, the proposed system suffers from the conventional limitations of the GPS. Sando et al.8 also reported the limitations of GPS for locating a crashed vehicle.

The GPS receiver determines position by the direct line-of-sight signal from the satellites. Blockage by any obstruction affects both the amplitude and phase of the received satellite signals and needs reacquisition of satellite signals9–11. Moreover, the update rate of GPS is also not sufficient to determine the acceleration from the speed data for the purpose of accident detection. An accelerometer, on the other hand, provides instantaneous acceleration with a higher update rate. Besides, it can also provide orientation information12. The rapid development of semiconductor manufacturing technologies enabled the development of MEMS-based accelerometers. However, the accuracy of MEMS-based accelerometers is affected by accumulated bias, scale factor, drift, noise, etc., which vary with the price13. The limitation of instantaneous acceleration and outage of GPS can be overcome by the higher update rate of acceleration and orientation of the accelerometer. On the other hand, the inherited inaccuracy of the low-cost accelerometer can be overcome by the accurate information of the GPS.

This communication proposes a low-cost, efficient accident detection and location system utilizing cheap MEMS 3-axes accelerometers and GPS. The loosely coupled integration of accelerometer and GPS with a Kalman filter provides proper acceleration information to detect accidents accurately and the orientation of the vehicle for its exact location during GPS outage.

The GPS receiver determines the position by calculating the geometric intersection of the ranges of known coordinates of the satellites and the receiver using eq. (1) below14. At least four pseudorange measurements are required to estimate the position, as eq. (1) has four unknowns (x, y, z and b). In low-cost GPS receivers, an inexpensive quartz crystal oscillator is used as a timer. As such, the receiver clock drifts from true GPS time. So eq. (1) includes a significant user time bias:

\rho^{(k)} = \sqrt{(x^{(k)} - x)^2 + (y^{(k)} - y)^2 + (z^{(k)} - z)^2} + b + \varepsilon^{(k)},   (1)
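The paper does not give an implementation, but eq. (1) is conventionally solved by iterative linearization. The sketch below is a minimal Gauss–Newton pseudorange fix in pure Python, written for illustration only; the satellite geometry, receiver position and clock bias in the example are invented, and the function names are my own.

```python
import math

def solve_linear(A, y):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gps_position_fix(sats, rhos, guess=(0.0, 0.0, 0.0, 0.0), iters=10):
    """Gauss-Newton solution of eq. (1): rho_k = |sat_k - p| + b."""
    x, y, z, b = guess
    for _ in range(iters):
        H, res = [], []
        for (sx, sy, sz), rho in zip(sats, rhos):
            r = math.sqrt((sx - x) ** 2 + (sy - y) ** 2 + (sz - z) ** 2)
            # Jacobian row: derivative of the predicted pseudorange w.r.t. (x, y, z, b)
            H.append([(x - sx) / r, (y - sy) / r, (z - sz) / r, 1.0])
            res.append(rho - (r + b))
        # normal equations: (H^T H) dx = H^T res
        HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(4)]
               for i in range(4)]
        Htr = [sum(H[k][i] * res[k] for k in range(len(H))) for i in range(4)]
        dx = solve_linear(HtH, Htr)
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b
```

With at least four satellites in non-degenerate geometry, the iteration converges in a few steps because the problem is nearly linear at GPS ranges.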

1548 CURRENT SCIENCE, VOL. 106, NO. 11, 10 JUNE 2014



where x^{(k)}, y^{(k)} and z^{(k)} are the coordinates of the satellite, x, y and z are the user coordinates, b is the bias of the user clock and \varepsilon is the error source.

The receiver estimates velocity utilizing the Doppler shift between the satellites and the receiver. The Doppler shift can be determined with the known satellite velocity by differentiating the pseudorange measurement. The pseudorange rate can be calculated using eq. (2). With a minimum of four pseudorange rate measurements, the velocity is determined using least squares estimation. The acceleration (a) can be easily calculated from eq. (3), where dv is the change of velocity in time dt. However, the update rate of low-cost GPS receivers is normally limited. As such, the instantaneous acceleration cannot be found from the GPS. Moreover, the acceleration is contaminated if the velocity is less than 1.5 m/s (ref. 12).

\dot{\rho}^{(k)} = (v^{(k)} - v) \cdot 1^{(k)} + \dot{b} + \varepsilon^{(k)},   (2)

a = \frac{dv}{dt},   (3)

where v^{(k)} is the satellite velocity vector, v is the user velocity vector, \dot{b} is the rate of change of the receiver clock, 1^{(k)} is the line-of-sight unit vector of the user-to-satellite and \varepsilon is the error source.

A digital accelerometer provides acceleration information over the inter-integrated circuit (I2C) or serial peripheral interface (SPI) protocol. The accelerometer reading needs to be scaled to G values. The scale is calculated by dividing the total range by the number of bits, as shown in eq. (4), where a_r is the total range and a_b is the number of bits. The acceleration in G can then be determined by multiplying the raw digital values by the scale. From the acceleration, the position can be found by eq. (5), where the present position is the summation of the previous position, the integral of velocity and the double integration of the acceleration. During a constant velocity, the acceleration will be zero. Thus, the third term in eq. (5) becomes zero. In this situation, the vehicle moves with a constant velocity and the position is the summation of the previous position and the integral of the constant velocity, as given in eq. (6).

\mathrm{Scale} = \frac{a_r}{a_b},   (4)

x_t = x_{t-1} + \int_{t-1}^{t} \dot{x}\,dt + \int_{t-1}^{t} \left( \int_{t-1}^{t} \ddot{x}\,dt \right) dt,   (5)

x_t = x_{t-1} + \int_{t-1}^{t} \dot{x}\,dt.   (6)

The accelerometer provides acceleration due to movement and also acceleration due to gravity. By measuring the static acceleration caused by gravity, the tilt angle can be calculated. The tilt angle can provide the heading information of the vehicle. With three-axes accelerometers, the tilt angle can be determined using eq. (7) for an axis, where X, Y and Z are the accelerations in the x, y and z axes.

A_x = \arctan\left( \frac{X}{\sqrt{Y^2 + Z^2}} \right).   (7)

The accelerometer measurements need to be aligned with GPS measurements for the GPS and accelerometer fusion. Thus, the acceleration readings need to be transformed to the earth frame by a fixed rotation matrix. The vehicle position is expressed in the x, y and z axes of the geodetic coordinate frame. GPS uses a set of fundamental parameters referred to by the World Geodetic System 1984 (WGS 84), where the ECEF (earth centred earth fixed) coordinate frame is used for the GPS14. The transformation from LLA (\phi, \lambda, h) to ECEF (x, y, z) is given in eq. (8), where N is called the normal, h the distance from the surface to the z-axis along the ellipsoid normal and e the first numerical eccentricity of the ellipsoid.

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} (N + h)\cos\phi\cos\lambda \\ (N + h)\cos\phi\sin\lambda \\ (N(1 - e^2) + h)\sin\phi \end{bmatrix}.   (8)

The NED (north east down) frame is a right-handed coordinate system fixed to the vehicle which points to the true north, east and downward directions. The body coordinate frame is located at the centre of gravity (CG) of the vehicle, with x pointing towards the nose of the vehicle, y pointing towards the right door of the vehicle and z pointing down to comply with the right-hand rule. The different coordinate frames are shown in Figure 1. Equation (9) shows the primary rotation that brings the ECEF frame to coincide with the NED frame, and the reverse is given in eq. (10). The NED to body frame coordinate rotation is given in eq. (11).

C_e^n = \begin{bmatrix} -\sin\phi\cos\lambda & -\sin\phi\sin\lambda & \cos\phi \\ -\sin\lambda & \cos\lambda & 0 \\ -\cos\phi\cos\lambda & -\cos\phi\sin\lambda & -\sin\phi \end{bmatrix},   (9)

C_n^e = \begin{bmatrix} -\sin\phi\cos\lambda & -\sin\lambda & -\cos\phi\cos\lambda \\ -\sin\phi\sin\lambda & \cos\lambda & -\cos\phi\sin\lambda \\ \cos\phi & 0 & -\sin\phi \end{bmatrix},   (10)

where \phi is latitude and \lambda is longitude in the geodetic frame.

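Equations (7) and (8) can also be sketched directly. This is an illustrative implementation, not the paper's: the WGS-84 constants are the standard published values (not stated in the paper), and atan2 is used instead of arctan so the sign of the tilt is preserved.

```python
import math

def tilt_angle(ax, ay, az):
    """Eq. (7): tilt of the x axis from the static gravity components."""
    return math.atan2(ax, math.sqrt(ay * ay + az * az))

# Standard WGS-84 ellipsoid constants (assumed values, not from the paper)
WGS84_A = 6378137.0            # semi-major axis a (m)
WGS84_E2 = 6.69437999014e-3    # first eccentricity squared e^2

def lla_to_ecef(lat_deg, lon_deg, h):
    """Eq. (8): geodetic (phi, lambda, h) -> ECEF (x, y, z) in metres."""
    phi = math.radians(lat_deg)
    lam = math.radians(lon_deg)
    # N: prime-vertical radius of curvature ("the normal")
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(phi) ** 2)
    x = (n + h) * math.cos(phi) * math.cos(lam)
    y = (n + h) * math.cos(phi) * math.sin(lam)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(phi)
    return x, y, z
```

At the equator and prime meridian, eq. (8) collapses to a point on the x-axis at one semi-major axis from the earth's centre.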

 c c c s  s s c s s  c s c  where the matrix A relates the state at time k – 1 to the


  state at time k, matrix B relates the control input to
Cbn   c s c c  s s s  s c  c s s , (11)
  s  the state, matrix H relates the state to the measurement,
 s c c c  and variables v k and wk are the measurement noise and the
process noise respectively.
where sin is denoted by s and cos is denoted by c. The Kalman filter consists of five essential stages that
Despite the accuracy of the GPS, it cannot provide repeat over a given time-interval. The current state esti-
location information during an outage. On the other hand, mate is used to compute the predicted value of the state at
accelerometer can provide continuous acceleration which the next time interval by eq. (14). In the next stage, cur-
can be used to calculate the position. However, initializa- rent state covariance estimate is used to compute the pre-
tion with a valid GPS position is essential for the acceler- dicted value of the state covariance using eq. (15) at the
ometer to correctly determine accelerometer-derived next time-interval. The filter gain is computed using eq.
position. But position and orientation of the accelero- (16). With the state estimate update, the posteriori state
meter are contaminated by the double integration errors update is executed using eq. (17). In the final stage, the
of the low-cost accelerometer. These limitations can be state covariance estimate is updated using eq. (18). The
overcome by fusing both the sensor data with an appro- covariance and filter gain computation update can be
priate filter. Kalman filter, particle filter, neural network, done offline and the real-time Kalman filter implementa-
etc. are the popular filters for data fusion. However, tion is reduced to state estimation only.
Kalman filter is chosen in the proposed system due to its
simplicity, computational efficiency and easier imple- xˆk  Axˆk 1  Buk 1 , (14)
mentation compared to the other filters15. In spite of the
predicted fused position from the Kalman filter, to restrict
T
the error growth of the accelerometer-derived position, a Pk  E ek ek  , (15)
valid GPS signal is needed from time to time.  
Kalman filter is a recursive solution to discrete linear
filtering which provides an optimal, unbiased method for
estimation. Here, the state of a system14 at time-step k is K k  Pk H T ( HPk H T  R) 1 , (16)
described by eq. (12) and updated with a measurement by
eq. (13) xˆk  xˆk  K (Z k  Hxˆk ), (17)
xk  Axk 1  Buk 1  wk 1, (12)
T
Pk  E ek ek  , (18)
 
zk  Hxk  vk , (13)

where xˆk is the priori state estimate at step k, A an n  n


matrix relating state at k – 1 time to k, xˆk 1 the posteriori
state estimate at time-step k – 1, B the n  1 matrix relat-
ing the control input to state x, u a control input which is
optional, Pk the priori estimate error covariance, and
Pk–1 the posteriori estimate error covariance.
The HI-204III GPS receiver (Haicom Electronics Cor-
poration) is a low-cost device which provides satellite
positioning data with a continuous tracking of all satel-
lites in view. Its 4000 search bins and 20 parallel chan-
nels provide quick satellite signal acquisition and it takes
less than 40 sec in cold start and 8 sec in hot start. With
–159 dBm tracking sensitivity, it offers good navigation
performance in urban areas and limited sky view with
1 Hz update rate.
The low-cost ADXL345 is an MEMS-based, small,
thin, low power, three-axes accelerometer with high reso-
lution (13-bit), which can provide measurement up to
16 G. Its high resolution (4 mg/LSB) enables measure-
ment of inclination changes less than 1.0. An ATmega328
Figure 1. Geodetic, earth centred earth fixed (ECEF) and north east microprocessor is used at 57,600 bps to capture the
down (NED) coordinate system. ADXL345 data at 8 g range in I2 C digital interface.

Raw accelerometer data were scaled to determine the acceleration data in G value using eq. (4). The scale was found to be 0.015625 with an acceleration range of ±8 G. The data were captured at 10 bits. The raw value was multiplied by the G value (9.80665 m/s²) after the scaling. The three orthogonal accelerometer axes measure the longitudinal, latitudinal and gravitational acceleration respectively. The accelerometer data are rotated from the body frame to the NED frame using eq. (10). Data derived from the GPS lack instantaneous acceleration, which is important in determining a sudden deceleration due to an accident. But the accelerometer can provide instantaneous acceleration with a high update rate, which is the prime need for accident detection. As such, for the purpose of sudden deceleration detection, only the accelerometer data are considered. But the low-cost accelerometer data contain a lot of noise. Denoising is done using the Allan variance method, a low-pass filter and a window of discrimination.

An efficient accident detection model is vital in detecting a vehicle accident. A threshold-based filtering of deceleration is used to predict the occurrence of an accident. The accident detection scenario is activated when the vehicle travels above 23 kph. Speed below this is not likely to cause any fatal accident situation in a frontal crash, and the limit reduces false alarms of an accident situation16.

A vehicle decelerates when the brake is applied. The proposed system monitors the deceleration data from the accelerometer continuously. For the frontal crash, any deceleration of more than 5 Gs is considered as an accident situation17. Once a deceleration of more than 5 Gs is detected, the system checks the velocity. In a frontal crash, the vehicle is likely to be stopped completely. If the velocity is found below 5 kph, then the system confirms it to be an accident. The 5 kph velocity is chosen considering the conventional GPS and accelerometer errors. The velocity check after the deceleration threshold reduces the chances of false alarm. Once the accident situation is detected, the system raises an alarm for the location detection module. The flowchart of the accident detection is shown in Figure 2.

Besides the detection of the accident, the accelerometer can provide velocity information by integrating the acceleration once and the position information by double integration. The heading change can be found from the tilt angles of the accelerometers. However, the low-cost ADXL345 accelerometer suffers from noises and errors which include bias, random walk, quantization noise and jitter. This causes an error in the position which grows quadratically with time. The raw accelerometer data were collected for 24 h to determine the quantization noise, random walk and bias instability of the accelerometer by fitting a curve to the resulting plot by the Allan variance method18. A window of discrimination between valid and invalid readings was used to remove the jitter around the stationary value using the errors from the Allan variance method. If the reading falls within these limits, then it is considered as noise and the acceleration is set to zero. The accelerometer also gives noisy data which come as a spike of sudden unusual acceleration for one or two samples. These are removed with a low-pass filter.

The loosely coupled GPS and accelerometer integration system with Kalman filter depends on the form of the input data to the filter, which can be either three-dimensional velocity/position differences or pseudorange/rate differences. The three-dimensional velocity/position difference method is easier to implement due to its simple structure19. Thus, the difference of GPS-derived and accelerometer-generated velocity/position is taken as the input to the Kalman filter. The proposed Kalman filter integration of the GPS and accelerometer with its input and output is shown in Figure 3.

Figure 2. Accident detection algorithm.

Figure 3. Proposed GPS and accelerometer integration by Kalman filter.
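The detection logic described above (and flowcharted in Figure 2) reduces to three comparisons. This is an illustrative re-expression of that logic in Python, with the thresholds taken from the text; the function and parameter names are my own.

```python
ACTIVATION_KPH = 23.0  # detector armed only above this travelling speed
CRASH_DECEL_G = 5.0    # frontal-crash deceleration threshold
STOP_KPH = 5.0         # vehicle considered stopped below this

def is_accident(speed_before_kph, peak_decel_g, speed_after_kph):
    """Threshold-then-velocity check of the proposed algorithm."""
    if speed_before_kph <= ACTIVATION_KPH:
        return False  # too slow for a fatal frontal crash; suppress false alarms
    if peak_decel_g <= CRASH_DECEL_G:
        return False  # ordinary braking, not a crash
    # a frontal crash should leave the vehicle effectively stopped
    return speed_after_kph < STOP_KPH
```

Hard braking that exceeds the deceleration threshold but leaves the vehicle moving is rejected by the final velocity check, which is exactly how the field test later distinguishes a near miss from a simulated accident.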


Figure 4. Accident detection based on deceleration.

The measurement error is an important parameter for the Kalman filter. As such, the measurement errors of the GPS and accelerometer need to be determined. The GPS velocity error is required to find the velocity measurement variance for the observation equation. The standard deviation of the velocity measurement error (\sigma_v) can be found using eq. (19). The static velocity error (\sigma_{static}) was found to be 0.6 m/s by keeping the Haicom 204III GPS receiver static in a place with good line-of-sight for 3 h. The aggressive accelerations and decelerations for a vehicle20 were considered to be 6 m/s². With the 1 Hz update rate of the receiver, the dynamic velocity error (\sigma_{dynamic}) was found to be 6 m/s. A higher update rate GPS can reduce this error with an increment in the receiver price. Using eq. (19), the standard deviation (\sigma_v) of the velocity error is found to be 6.6 m/s.

\sigma_v = \sigma_{static} + \sigma_{dynamic}.   (19)

The acceleration from the longitudinal-axis accelerometer was integrated once to obtain the longitudinal velocity and integrated again to determine the displacement using eq. (5). The lateral accelerometer was used to determine the heading change using eq. (7). With the heading information, the east and north distances were calculated and the vehicle positions were determined using eqs (20) and (21):

lat = lat + (n_dist × degrees/m),   (20)

long = long + (e_dist × degrees/m).   (21)

After determining the measurement errors and position information for the GPS and the accelerometer, the data were fed to a Kalman filter using Matlab. The GPS data were treated as reliable because they do not contain long-term accuracy errors. The accelerometer errors were corrected using the GPS data. Whenever the GPS velocity information was more than 1.5 m/s, the position vector was updated by eq. (22). If the GPS velocity was not reliable (velocity < 1.5 m/s), then the position information was updated with the Kalman filter-derived accelerometer information using eq. (23), where the superscript A denotes the accelerometer and G the GPS.

X(t) = X(t-1) + V_G \Delta t,   (22)

x_t = x_{t-1} + (\dot{x}_{t-1}^{G} \Delta t) + \left( \tfrac{1}{2} \ddot{x}_t^{A} (\Delta t)^2 \right) + \left( \tfrac{1}{2} \ddot{x}_{t-1}^{G} (\Delta t)^2 \right) - \left( \tfrac{1}{2} \ddot{x}_{t-1}^{A} (\Delta t)^2 \right).   (23)

A test vehicle was used to evaluate the proposed system. The GPS was installed on the dashboard of the test vehicle for a better line-of-sight. The ADXL345 accelerometer was rigidly fixed at the middle of the vehicle. GPS and accelerometer data were recorded while driving on the road of Jalan Temuan, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia. The vehicle was driven at various speeds to obtain high accelerations. The vehicle was abruptly brought to a dead stop by applying the brake to attain sudden decelerations. During driving, the GPS was intentionally covered a few times to deprive it of the line-of-sight. It was even intentionally covered before the last deceleration to test the correctness of the location information derived from the accelerometer. After the vehicle came to a dead stop, the GPS was uncovered to allow it to
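The error budget of eq. (19) and the reliability switch between eqs (22) and (23) can be sketched as below. This is a simplification written for illustration: only the leading GPS-velocity and accelerometer terms of eq. (23) are kept (the paper's equation carries further correction terms), and the 1.5 m/s cut-off is the GPS velocity-reliability limit cited in the text (ref. 12).

```python
GPS_TRUST_MPS = 1.5  # below this the GPS velocity is considered unreliable

def velocity_sigma(sigma_static, sigma_dynamic):
    """Eq. (19): total velocity measurement error budget."""
    return sigma_static + sigma_dynamic

def update_position(x_prev, v_gps, a_accel, dt):
    """Reliability switch between eq. (22) and a simplified eq. (23)."""
    if abs(v_gps) >= GPS_TRUST_MPS:
        # GPS velocity trusted: advance the position with it (eq. 22)
        return x_prev + v_gps * dt
    # GPS unreliable: dead-reckon with Kalman-corrected accelerometer data
    return x_prev + v_gps * dt + 0.5 * a_accel * dt * dt
```

With the paper's values, velocity_sigma(0.6, 6.0) reproduces the quoted 6.6 m/s.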

reacquire the satellites so that the validity of the accelerometer position could be evaluated.

The collected data were post-processed using MATLAB software. The ADXL345 acceleration data recorded for the X-axis (direction of the vehicle movement) were plotted to detect the acceleration and deceleration, as shown in Figure 4. In the proposed accident detection algorithm, a deceleration above 5 Gs is considered as an accident situation. But achieving 5 Gs is impossible without a real accident. As such, the threshold was reduced to 0.9 G to test the accident detection. Figure 4 shows that the vehicle achieved 0.9 G deceleration at around the 6000th sample time, but it started accelerating again. As such, although the 0.9 G threshold was reached, the accident was not declared, as the vehicle started moving at more than 5 kph after achieving the deceleration. But at around the 7000th sample time, the vehicle again achieved a deceleration of more than 0.9 G. This time, the vehicle was completely stopped and thus it was treated as an accident. As such, the system could detect sudden deceleration and differentiate a false accident situation.

After an accident is detected, the system determines the location of the vehicle. The GPS coverage was intentionally interrupted several times during the test drive, as mentioned above. Especially, before the vehicle came to a dead stop with high deceleration, the GPS acquisition had been intentionally blocked for a longer time. The Kalman filtered accelerometer positions compensated for these gaps. Figure 5 shows the GPS outage positions with the filled-up Kalman filtered accelerometer positions. The GPS was allowed to reacquire the satellite position after the vehicle came to a dead stop in the simulated accident situation. The GPS-based position and the Kalman filter-based accelerometer position almost coincided, as seen in Figure 5. The GPS positions and the Kalman filtered accelerometer positions were plotted in the open-source Quantum Geographic Information System (QGIS; Figure 6). The plotting on the QGIS also showed the correct positions of the vehicle during the GPS outage, which indicated the correct functioning of the proposed system.

Thus, a low-cost accident detection and location system has been developed in this study utilizing only off-the-shelf, low-cost accelerometers and a GPS receiver. The instantaneous and higher-rate output of the accelerometer was used to detect an accident due to sudden deceleration. The accident location module was developed by double integrating the erroneous data of the accelerometer with the inherently intermittent data of the GPS using the Kalman filter. The field test showed correct accident detection and location during a GPS outage. The low-cost system will be suitable for all vehicles and will also save many lives by the automated detection of accident and location.

Figure 5. Kalman filtered accelerometer positions with GPS outage.

Figure 6. GPS and Kalman filtered accelerometer positions on Quantum Geographic Information System.

1. Surgailis, T., Valinevicius, A., Markevicius, V., Navikas, D. and Andriukaitis, D., Avoiding forward car collision using stereo vision system. Electron. Electr. Eng., 2012, 18(8), 37–40.
2. Annual Global Road Crash Statistics, 2013; http://www.asirt.org/KnowBeforeYouGo/RoadSafetyFacts/RoadCrashStatistics/tabid/213/Default.aspx (accessed on 20 March 2013).
3. Rauscher, S. et al., Enhanced automatic collision notification system – improved rescue care due to injury prediction – first field experience. In Proceedings of the 21st International Technical Conference on the Enhanced Safety of Vehicles, Stuttgart, Germany, 2009.
4. Chuan-zhi, L., Ru-fu, H. and Hong-wu, Y. E., Method of freeway incident detection using wireless positioning. In IEEE International Conference on Automation and Logistics, Qingdao, China, 2008.
5. White, J., Thompson, C., Turner, H., Dougherty, B. and Schmidt, D. C., WreckWatch: automatic traffic accident detection and notification with smartphones. Mobile Networks Appl., 2011, 16(3), 285–303.
6. Megalingam, R. K., Nair, R. N. and Prakhya, S. M., Wireless vehicular accident detection and reporting system. In 2nd International Conference on Mechanical and Electrical Technology, Singapore, 2010.

7. Amin, M. S., Jalil, J. and Reaz, M. B. I., Accident detection and reporting system using GPS, GPRS and GSM technology. In International Conference on Informatics, Electronics & Vision, Dhaka, Bangladesh, 2012.
8. Sando, T., Mussa, R., Sobanjo, J. and Spainhour, L., GPS usability in crash location. In ITE 2004 Annual Meeting and Exhibit, Lake Buena Vista, Florida, USA, 2004.
9. Sharma, S., Dashora, N., Galav, P. and Pandey, R., Cycle slip detection, correction and phase leveling of RINEX formatted GPS observables. Curr. Sci., 2011, 100(2), 205–212.
10. Song, X., Raghavan, V. and Yoshida, D., Matching of vehicle GPS traces with urban road networks. Curr. Sci., 2010, 98(12), 1592–1598.
11. Rao, G. S., Error analysis of satellite-based global navigation system over the low-latitude region. Curr. Sci., 2007, 93(7), 927–931.
12. Croyle, S. R., Spencer, L. E. and Sittaro, E. R., Vehicle navigation system and method using multiple axes accelerometer, Google Patents, 2001.
13. Goel, M., Electret sensors, filters and MEMS devices: new challenges in materials research. Curr. Sci., 2003, 85(4), 443–453.
14. Chaves, S. M., Using Kalman filtering to improve a low-cost GPS-based collision warning system for vehicle convoys, The Pennsylvania State University, USA, 2010.
15. Won, S.-h. P., Melek, W. W. and Golnaraghi, F., A Kalman/particle filter-based position and orientation estimation method using a position sensor/inertial measurement unit hybrid system. IEEE Trans. Ind. Electron., 2010, 57(5), 1787–1798.
16. Chan, C.-Y., A treatise on crash sensing for automotive air bag systems. IEEE/ASME Trans. Mechatron., 2002, 7(2), 220–234.
17. Zaldivar, J., Calafate, C. T., Cano, J. C. and Manzoni, P., Providing accident detection in vehicular networks through OBD-II devices and Android-based smartphones. In IEEE 36th Conference on Local Computer Networks, Bonn, Germany, 2011.
18. El-Sheimy, N., Hou, H. and Niu, X., Analysis and modeling of inertial sensors using Allan variance. IEEE Trans. Instrum. Meas., 2008, 57(1), 140–149.
19. Park, S. and Tan, C.-W., GPS-aided gyroscope-free inertial navigation systems, 2002.
20. Park, S. and Tan, C.-W., GPS-aided gyroscope-free inertial navigation systems. California PATH Research Report UCB-ITS-PRR-2002-22, Institute of Transportation Studies, University of California, Berkeley, 2002.

Received 17 April 2013; revised accepted 11 April 2014

Simulation study on the photoacoustics of cells with endocytosed gold nanoparticles

Ratan K. Saha*, Madhusudan Roy and Alokmay Datta

Surface Physics and Material Science Division, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Kolkata 700 064, India

*For correspondence. (e-mail: ratank.saha@saha.ac.in)

The effect of endocytosis of gold nanoparticles (AuNPs) on the photoacoustic (PA) signal is examined using computer simulations. It is assumed that the endocytosed AuNPs significantly enhance cellular optical absorption but do not alter thermophysical parameters. The PA signals were computed employing a theoretical model at various cell and intracellular NP concentrations for 532 nm illumination. It was found that the PA amplitude increased linearly in both cases. The simulation results, when the contributions from both coherent and incoherent components are included, demonstrate good agreement with published experimental results.

Keywords: Computer simulations, endocytosis, gold nanoparticles, photoacoustic signals.

NANOPARTICLES (NPs) are of profound interest in a variety of biological and biomedical studies. Several biomedical imaging modalities use different nanoscale structures such as gold (Au) nanospheres, Au nanorods, silver nanosystems and carbon (C) nanotubes as contrast enhancers1,2. AuNPs are found to be the most suitable for various applications because of their (i) simple and fast preparation procedure, (ii) tunable light scattering and absorption properties, (iii) ability to bind with target-specific ligands (through surface roughness manipulation) and (iv) lack of toxicity1.

The photoacoustic (PA) imaging technique has also extensively used different metallic and non-metallic NPs as contrast agents to improve its sensitivity3–6. In PA imaging, a short nanosecond pulsed laser is used to irradiate a tissue sample and this induces a pressure transient due to thermoelastic expansion7–9. Such a wide-band pressure transient is detected employing an ultrasonic transducer. A raster scan of a 2D region is generally performed to capture PA signals, which are then utilized to generate the corresponding grey-scale image. The PA image retains the optical contrast of the imaging region, and the central frequency of the ultrasonic detector defines the resolution of the image. This technique has been widely used to gather anatomical and functional information of various small animal organs at depths beyond the optical penetration depth. The administration of NPs allows the PA technique to form images of deep tissue regions with enhanced contrast, and this in turn enables it to provide in vivo images. Moreover, the capability of the PA technique can be extended to image specific cells or molecules by appropriate surface functionalization of NPs so that they would bind with those cells or molecules and induce the PA effect. Various metallic NPs have been employed for visualizing different functional and cellular/molecular processes10,11. Studies have also demonstrated that C nanotubes conjugated with cyclic Arg–Gly–Asp (RGD) peptides can serve as contrast agents for PA imaging of tumours12.

Effort has been made to calculate theoretically the PA pressure emitted by a NP surrounded by a fluid medium13. Chen et al.13 computed PA pressure from bare and silica-coated NPs immersed in various solvents and by comparing calculated and measured values revealed that the surrounding medium greatly influences the
1554 CURRENT SCIENCE, VOL. 106, NO. 11, 10 JUNE 2014
Smartphone Based Automatic Incident Detection Algorithm and Crash Notification System for All-Terrain Vehicle Drivers

Using Smartphones to Automatically Notify Emergency Contacts of Accidents

Master's thesis in Systems, Control and Mechatronics

Gabriel Matuszczyk
Rasmus Åberg

Department of Signals and Systems
Chalmers University of Technology
Gothenburg, Sweden 2016
Master’s thesis 2016:73

Smartphone Based Automatic Incident Detection Algorithm and Crash Notification System for All-Terrain Vehicle Drivers
Using Smartphones to Automatically Notify Emergency Contacts of Accidents
Gabriel Matuszczyk
Rasmus Åberg

© Gabriel Matuszczyk and Rasmus Åberg, 2016.

Supervisor: Leif Sandsjö, University of Borås/MedTech West
Examiner: Stefan Candefjord, Department of Signals and Systems/MedTech West

Master's Thesis 2016:73
Department of Signals and Systems
Division of Automation and Mechatronics
Chalmers University of Technology
SE-412 96 Gothenburg
Telephone +46 31 772 1000

Typeset in LaTeX
Gothenburg, Sweden 2016

Acknowledgements

As our work has progressed during this spring, certain people, without personal
gain, have provided invaluable help. Kjell and Kristina Åberg, Tomasz and Josefa
Matuszczyk, and Andréa Bergqvist helped us log data we’ve used throughout this
project. They have given us a dataset with more variation.

We would like to thank the volunteer ATV drivers who, out of interest in our field,
provided a previous project with a large dataset used also in our research.

Our examiner Stefan Candefjord and supervisor Leif Sandsjö have given us guid-
ance and feedback from day one until the very end, inspiring us to find new solu-
tions to difficult problems, while allowing us to work autonomously.

Gabriel Matuszczyk and Rasmus Åberg, Gothenburg, June 2016

Abstract

All-Terrain Vehicle (ATV) drivers face a different sort of danger than that posed by most other means of travel. An ATV is mainly designed to travel through forests and unpaved areas. It is a versatile vehicle often used by only one person at a time; thus, if an accident occurs far out in the wilderness, help is hard to come by, especially if the driver is incapacitated. As a result of the prevalence of GPS-enabled smartphones, an application for them implementing an accurate Incident Detection Algorithm (IDA) could save even an unconscious driver, via an automatic message to an In Case of Emergency (ICE) contact. This thesis investigates the possibility of designing such an application, as well as the feasibility of running it in real time on a smartphone.

A dataset containing 55 hours of logged normal ATV driving motion data was available, of which approximately 21 hours were of high quality. Two logs were omitted due to their content of abnormally erratic motion data. Machine learning methods (specifically One-Class Support Vector Machines (OC-SVM)) were used in order to create an IDA that can satisfactorily identify several types of accidents. Motion data was collected containing 20 abnormalities, in the form of a test person falling and rolling in several directions, in order to simulate a number of crash scenarios. Together with Accident Confirmation Criteria (ACC) to cancel false positives, and a decision tree within the IDA, few false alarms are raised, while alarms do occur for all incidents simulated in the crash dataset. Overall, the trained OC-SVM classified normal driving with a precision of 99.39% and correctly identified all 20 simulated accidents; thus, the OC-SVM obtained an estimated F1-score of 99.69%.

A design for a smartphone application to enable this automatic alarm is proposed in the form of a flow chart. Investigations of the required functionality support the claim that a smartphone is capable of running the IDA in real time and with low battery consumption.

With the limited amount of normal driving data, and only simulated crash data, further investigations must be performed in order to ensure no overfitting has taken place. The next step in the development would be for a test group consisting of regular ATV drivers to evaluate the performance of the IDA in real-life situations. It is the authors' opinion that, with additional trials and tweaking of parameters, a well-functioning smartphone application could be released to the public and potentially serve as a life saver, perhaps even for other vehicles, in cases where the driver is otherwise helpless.

Keywords: Machine learning, Support vector machines (SVM), Incident detection algorithm (IDA), All-terrain vehicle (ATV), Smartphone, eSafety.

Contents

Glossary

1 Introduction
1.1 Background
1.2 Purpose
1.3 Aim
1.4 Scope and Limitations
1.5 Feasibility Studies of Machine Learning Methods and Smartphone Performance

2 Theory
2.1 Machine Learning Methods for Classification
2.2 Evaluation of Algorithm Performance
2.3 Large-Margin Binary Classification Using Support Vector Machines
2.3.1 Understanding the Power of Support Vector Machines
2.3.2 Understanding The Kernel Trick
2.4 One-Class Support Vector Machines for Data Description and Anomaly Detection
2.4.1 Using Samples From One Class to Distinguish Between Two
2.4.2 Tuning the OC-SVM Decision Boundary Using Regularisation
2.5 Using the Smartphone Application LogYard for Data Collection
2.6 Medical Considerations
2.6.1 Patient Response Resulting From Accidents Involving Trauma to the Brain
2.6.2 Movability Following a Serious Accident
2.7 Smartphone Application
2.8 Smartphone Application Development: Apple iOS
2.8.1 Apple iOS and Swift Programming
2.8.2 Using Apple's Accelerate Framework For Optimised Digital Signal Processing
2.8.3 Built-in Activity Classification Algorithms in iOS
2.9 Smartphone Application Development: Google's Android
2.9.1 Theory Behind API-level
2.9.2 Built-in Activity Classification Algorithms in Android

3 Method
3.1 Previously Available Dataset
3.2 Data Evaluation and Classification
3.2.1 Removal of Accidentally Logged Walking Data: Preprocessing of Data
3.2.2 Removal of Accidentally Logged Walking Data: Removal of Data
3.2.3 Using the Matlab Function fitcsvm to Train an OC-SVM
3.2.4 Training an OC-SVM Classifier on Normal Driving Data
3.2.5 Collection of Simulated Crash Data
3.2.6 Collection of Potentially Problematic Data
3.2.7 Summarisation of Data
3.3 Notification of Emergency Services in Case of Accident
3.4 Smartphone Application Development: General Idea of Program Operations
3.5 User Smartphone Interaction and Undesired Triggering of Alarms
3.6 Smartphone Application Development: Apple iOS
3.6.1 Sensor, Activity Detection and Calculation Performance Testing Application
3.6.2 Automatic ICE Notification in Apple iOS
3.7 Smartphone Application Development: Google's Android
3.7.1 Choosing of API-level
3.7.2 Built-in Activity Classification Algorithms in Android
3.7.3 Automatic ICE Notification in Android

4 Results
4.1 The Incident Detection Algorithm
4.2 IDA Performance During Walking
4.3 IDA Performance During Mounting/Dismounting an ATV
4.4 IDA Performance During Wheelie Accidents
4.5 IDA Performance During Sudden Stops and Roll Over Accidents
4.6 IDA Performance During Normal Driving
4.6.1 Inspecting Datasets with the Highest Levels of False Alarms
4.6.2 Estimated F1-Score for the OC-SVM During Normal Driving
4.7 IDA Performance Summarised

5 Discussion
5.1 The Three Classification Methods Used in the IDA
5.2 Cellular and Data Coverage in Rural Areas
5.3 Notification of Emergency Services in Case of Accident via Text Message
5.4 IDA Performance
5.4.1 Walking
5.4.2 Mounting/Dismounting an ATV
5.4.3 Simulated Accidents
5.4.4 Normal Driving
5.5 Ethical & Privacy Considerations
5.6 Self-fulfilling Prophecies & Psychological Considerations

6 Conclusion

7 Future Development Suggestions
Glossary

ACC - Accident Confirmation Criteria: to prevent false alarms, several confirmation criteria have to be fulfilled in order for an accident to be taken seriously. These criteria are referred to as ACC throughout this report.

Algorithm: evaluations of an input are carried out and, depending on the input's content, a certain output is reached. An algorithm follows predefined rules and steps to determine the output and can often run autonomously on a computer.

Android: the operating system developed by Google for smartphones, implemented by several different smartphone manufacturers.

ATV - All-Terrain Vehicle: also known as a quadricycle, four-wheeler or quad bike. A vehicle with a high centre of gravity and high ground clearance, the latter making it easier to drive in rough terrain.

Feature extraction: the process of selecting parts of the data to use for training a classifier, in order to decrease the number of variables (features) the classifier must consider. For example, if several parts of the data are correlated, it may be possible to merge them into one feature in order to increase classifier performance while retaining classification accuracy.

GPS - Global Positioning System: with a receiver (built into most smartphones), signals from GPS satellites can be used to obtain coordinates for longitude and latitude.

ICE - In Case of Emergency: a common term identifying whom to contact, for example in a person's contact list, if that person has been involved in an accident or similar.

IDA - Incident Detection Algorithm: used to describe the whole decision-making process: analyse data, decide whether an anomaly (incident) has occurred, trigger an alarm if so, and cancel the alarm if new data indicates it was a false alarm. Worth noting is that the IDA does not notify an ICE contact; it simply determines whether an incident has occurred, and whether the incident should trigger an alarm or be classified as a false alarm.

IDE - Integrated Development Environment: a computer program used to develop other software programs, often including libraries, a debugger, a compiler, etc.

Incident: while the IDA is running, a lot of data is collected from different sensors; during normal driving all sensor values should lie within a normal driving space. If a sensor value exits this space, an anomaly occurs; these anomalies are the incidents that require further investigation by the IDA.

iOS: the operating system used by, and developed for, Apple's smartphone model, the iPhone.

OC-SVM - One-Class Support Vector Machine: like the SVM, a machine learning method; however, it uses only one class to define the boundary in space.

RC - Rejection Criteria: similar to ACC, but instead of confirming an accident, they confirm false alarms. If a rejection criterion is fulfilled within a certain time limit, the incident is discarded as a false alarm.

SDK - Software Development Kit: a set of tools for developing software programs for specific hardware and/or operating systems, often available in the IDE and chosen before a new project is started.

SVM - Support Vector Machine: a machine learning method which uses samples of each class to create a decision boundary between the classes, maximising the distance between the outermost samples of each class and the boundary, often in a higher dimension than that of the samples.

User smartphone interaction: refers to the common case where the smartphone is interacted with in such a way that the motion sensors detect erratic movements, such as the driver placing the phone in a pocket or altering its position in a pocket. These scenarios create sensor readings which can be quite similar to incident sensor readings.

VRU - Vulnerable Road User: a group of people using roads by means of travel that offer little to no protection; most commonly thought of are pedestrians and cyclists.
1 Introduction

Vulnerable Road Users (VRUs) include almost everyone not travelling by car, truck or bus on common roads; most commonly thought of are people travelling on foot or biking on or near the road. Two other vulnerable road user groups are motorcyclists and All-Terrain Vehicle (ATV) drivers. Compared to the protective capacity of modern cars, all subgroups of VRUs are at a significant disadvantage. Lately, the focus of traffic safety has shifted from passively protecting car users in a crash to actively avoiding crashes altogether, and also to developing new safety equipment for VRUs. One such piece of safety equipment is Hövding, a modern helmet for cyclists that inflates when an accident is imminent. It is essentially an airbag for cyclists which envelops the head and generally provides better protection than conventional helmets [1].

ATV drivers face another problem than just being vulnerable compared to car drivers; their vehicle is by design very versatile and is used in unpaved areas for both work and pleasure. If an accident occurs in a seldom travelled area (such as a forest), the driver has to manage the situation by him- or herself, since the likelihood of a passerby appearing is generally small. Even assuming that the driver is able to contact emergency services on his or her own, relating the location can be problematic, since forests rarely possess any road signs or discernible landmarks.

Machine learning algorithms, despite their relatively high computational performance requirements, have greatly increased in popularity, fuelled by increases in computer performance and decreases in hardware prices. New areas where these algorithms may work well are constantly being explored. This report documents precisely such an exploration: a machine learning algorithm is trained on smartphone sensor data to detect anomalies, in order to detect incidents while driving ATVs. The algorithm, running within a smartphone application, should trigger a message (preferably containing GPS coordinates) to an emergency contact if an anomaly (i.e. an incident) is detected.
1.1 Background

The Specialty Vehicle Institute of America (SVIA) defines an ATV as a motorised off-highway vehicle designed to travel on four low-pressure tires, with a seat designed to be straddled by the operator and handlebars for steering control; furthermore, the SVIA defines two subgroups, Type I and Type II, where Type I ATVs are designed to carry only the driver and Type II ATVs can carry both the driver and one passenger, seated behind the driver [2].

Table 1 presents accident data for Swedish roads in 2015 [3]. ATVs may be registered under different categories depending on their use and performance, or not registered at all if they are used in an enclosed area. Of the three possible categories in Table 1 (Motorcycle, Moped rider and Other) that ATVs fall under, two have, by percent, the highest death and severe accident rates. For all road user groups the goal is of course to avoid accidents altogether and keep the severity at a minimum if unavoidable, meaning improvement is needed.
Table 1: Statistics of accident severity for different groups of road users in Sweden during 2015 [3].

Severity   Car     Motorcycle   Moped rider   Cyclist   Pedestrian   Other   Total
Light      12850   663          756           1604      1153         172     17198
Severe     1534    248          110           241       271          41      2445
Deadly     159     44           5             17        28           6       259
Total      14543   955          871           1862      1452         219     19902

Percent of accidents per severity level and road user group [%]
Light      88.36   69.42        86.80         86.14     79.41        78.54   86.41
Severe     10.55   25.97        12.63         12.94     18.66        18.72   12.29
Deadly     1.09    4.61         0.57          0.91      1.93         2.74    1.30
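The percentage rows in the table follow directly from the counts; a short sketch (counts transcribed from Table 1 for two of the groups) reproduces them:

```python
# Recompute the per-group severity percentages in Table 1 as an arithmetic
# check; counts are transcribed from the table (two groups shown).
counts = {
    "Car":        {"Light": 12850, "Severe": 1534, "Deadly": 159},
    "Motorcycle": {"Light": 663,   "Severe": 248,  "Deadly": 44},
}

def severity_percent(group):
    """Percentage of accidents at each severity level within one road user group."""
    total = sum(group.values())
    return {sev: round(100 * n / total, 2) for sev, n in group.items()}

print(severity_percent(counts["Motorcycle"]))  # reproduces 69.42 / 25.97 / 4.61
```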

From a report published by the Swedish Transport Administration [4], some interesting data can be found. During the years 2000-2003, around 3000 new ATVs were registered per year; for 2011-2012 this number had risen to 11000, and at the beginning of 2013 just over 91000 ATVs were registered (this figure does not include unregistered ATVs or ATVs registered as tractors, so the total number of ATVs in use is likely higher). Accident data presented in the same report estimates that 7000 people visited emergency rooms between 2007 and 2010 for ATV-related injuries. Other data for the period 2001-2012 is presented in Table 2. For the on-road row in the table, it is worth mentioning that 90 % of fatal accidents were single-vehicle accidents and that in 20 % of the cases where the ATV overturned, the deceased driver was still beneath the ATV when found.

Table 2: Deadly ATV accidents during 2001-2012 and how many involved an overturned ATV [4].

Location   Killed   Overturned ATV [%]
On road    42       70
Off road   27       60

Due to the high rate of single-vehicle accidents, combined with drivers often travelling alone, an automatic safety system could help lower the death toll. What this automatic system should be able to do is detect incidents using an algorithm, without any direct input from the user. Commonly this kind of system is known as an Incident Detection Algorithm, or IDA; IDAs use collected data (from, for example, accelerometer sensors) to automatically evaluate whether an incident has occurred. Smartphones have become common gadgets, and their increasing quality regarding sensor readings and computational power could make them a very attractive platform to run such an IDA on, both for developers and for users.

Smartphones have become everyday objects: between 76 and 89 percent of Swedes aged 16-54 have utilised a smartphone outside of their homes [5]. Since smartphones are so widespread and apps are easily distributed through official channels, a successful application may be an important contribution towards providing medical assistance to victims sooner.

Similar IDA’s have been produced before, which found that the quality of the built
in sensors in common smartphones is good enough to detect accidents while riding a
bike as well as for horse riders. For the development of the bike IDA [6], simulations
were made with a smartphone on a crash dummy and a bike to collect data, the
dummy was mounted and the bike pushed in order to gain speed and crashed into
objects to simulate an accident. For the horse riders [7], similar simulations were
obtained by means of researchers allowing themselves to be thrown off a mechanical
bull. Data of this type can be impractical or expensive to collect in many other
incident detection applications, such as ATV accidents.

The United States Consumer Product Safety Commission (CPSC) found, after analysing 71 400 ATV-related injuries, that in 68.5% of the cases the driver was the only rider [8]. Thus, if a severe accident happens, the driver is alone and help won't arrive until someone misses the driver or he or she is found by a passerby. Even if the driver is able to contact emergency services, conveying an exact location can be difficult for several reasons. The driver may not know where he or she is, and cellular service may be poor, which would make triangulation hard and can also cause a phone call to cut off. Even with a GPS position, problems may arise, since there are several similar standards for conveying coordinates. A short text-based message including GPS coordinates in one specific standard (GPS generally has much better coverage [9] than cellular services [10]) could be delivered in its entirety to an In Case of Emergency (ICE) contact and provide useful information for a search and rescue team. In short, there is a need for an IDA which can autonomously detect an incident and quickly relay all necessary information to another person; the problem, and a potential application of an IDA to solve it, is illustrated in Figure 1.
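A sketch of what such a message could look like, assuming signed decimal degrees (WGS 84) as the single coordinate standard; the wording, the recipient handling and the map link are illustrative choices, not the design specified in this thesis:

```python
def ice_message(lat, lon, name="the driver"):
    """Compose a short distress text with an unambiguous position.

    Coordinates are formatted as signed decimal degrees (WGS 84) with five
    decimals, roughly metre-level resolution. The phrasing and the map URL
    are illustrative assumptions, not the thesis's message format.
    """
    pos = f"{lat:.5f},{lon:.5f}"
    return (f"Automatic alert: {name} may have crashed an ATV at {pos} "
            f"(lat,lon WGS 84). Map: https://maps.google.com/?q={pos}")

msg = ice_message(57.70887, 11.97456)
```

Keeping the whole message under 160 characters means it fits in a single SMS, so it arrives intact even where data coverage is absent.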

Figure 1: A dangerous situation, for which the risk could be reduced by use of an
autonomous IDA to facilitate expeditious search and rescue. The series of images
depict an ATV driver during a normal drive, who is suddenly subjected to a crash;
however, the driver’s smartphone is running the proposed IDA which detects the
crash and notifies an ICE contact.

1.2 Purpose

The primary long-term purpose of the project is to save lives, and to minimise the effects of injuries in ATV-related accidents, by offering a means of obtaining fast emergency medical attention in cases where the driver cannot summon such aid him- or herself. Another purpose is to evaluate what is actually possible to identify using the sensors in today's smartphones, how well the captured information represents the real world, and whether it is possible to filter out disturbances well enough that the obtained information is useful. General research on sensor accuracy and fields of use could uncover opportunities not previously thought of.

1.3 Aim

The primary aims are:

1. to create an IDA which can identify accidents and crashes,
2. to optimise the IDA in order to obtain a high F1-score, and
3. to release it as an application through official distribution channels, such as the Google and Apple online application stores.

No hardware other than the smartphone's stock sensors should be used as input for the IDA. The application should create some sort of distress signal, such as a text message, in case an incident or crash has occurred.

1.4 Scope and Limitations

Some uncertainty surrounds the collected volunteer data (see Section 3.2). In some cases volunteers seem to have forgotten to stop logging after dismounting the ATV, and inadvertently logged other data, such as walking. An evaluation will be conducted and as much of the incorrectly logged data as possible will be removed. Although there is no way of knowing with full certainty that the remaining data contains only ATV driving, it will be assumed that it does.

No real-life evaluation will be conducted, since it would either require the expensive employment of professional stunt men or be unreasonably dangerous for the test person. Simulations of crashes and generally unusual data (such as simply dropping the phone from a low height) will however be collected and evaluated with the IDA, to see how it classifies these scenarios.

1.5 Feasibility Studies of Machine Learning Methods and Smartphone Performance

Machine learning methods have already been used extensively in smartphone applications. Of specific interest to this project, a report from 2012 documents the use of multi-class support vector machines for smartphone-based human activity recognition using accelerometer data [11], and another report from 2014 describes the use of machine learning to filter out smartphone magnetometer disturbances in Pedestrian Dead Reckoning for indoor localisation [12]. Furthermore, two more reports documented the use of machine learning in smartphones: the first, from 2012, for detection of Freeze of Gait prevalence in the everyday lives of patients suffering from advanced Parkinson's Disease, with more than 95 % accuracy and specificity [13]. The second, from 2009, documents the implementation of machine learning in a smartphone application which detects whether an individual falls, as well as said individual's response to the fall, and subsequently sends an SMS to one or more pre-specified social contacts [14]. Many other smartphone-based applications using machine learning have been reported as well, even fish species recognition using computer vision and multi-class support vector machines in smartphones, as a step towards allowing Chinese fish farmers to diagnose fish disease using their smartphones [15]. The work of David M. J. Tax, 2001, shows that it is possible to use SVMs to differentiate between two classes even though only one of the classes can be sampled [16]; something that had already been confirmed by Schölkopf et al. in 1999 [17].
2 Theory

To understand some of the decisions made later in the report, some theoretical background is necessary. This section also includes explanations of some terminology and of differences between methods.

2.1 Machine Learning Methods for Classification

The general idea of machine learning is to use a computer's capability to repeatedly and quickly evaluate (complex) equations and, with the guidance of an optimisation criterion, find the best parameters and/or function for the problem at hand. Machine learning's true strength lies in computers' ability to perform the same calculation with slightly altered values back-to-back without any fault. It can therefore be used in applications where a lot of data exists but no explicit solution is readily apparent.

Machine learning can be used for purposes other than classification [18], but in this thesis it is a case of accident or no accident. Several different methods exist for classification, each with its own strengths and weaknesses depending on the preconditions and goals set by the developer/researcher [19]. In the present study, a lot of data exists from only one class (no real accident data exists; see Section 3.2.5 for simulated crash data), the data consists of twelve sensor values at each sampling instance, and there is no need for fast training. With these preconditions, a Support Vector Machine (SVM) for anomaly detection is a fitting method [20]. Furthermore, a 2003 study examining the possibility of identifying traffic incidents in an arterial network found that SVM classifiers offered a lower misclassification rate, higher correct detection rate, lower false alarm rate and slightly faster detection time than multi-layer feed-forward and probabilistic neural network models [21].
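As a concrete sketch of this one-class setup, the snippet below trains a one-class SVM on synthetic "normal" points and flags a far-away point as an anomaly. It assumes scikit-learn's OneClassSVM as a stand-in for the Matlab fitcsvm workflow used later in the thesis; the data and parameter values are illustrative only.

```python
# Minimal one-class anomaly detection sketch; synthetic data, not the
# thesis's sensor logs, and scikit-learn rather than Matlab's fitcsvm.
from sklearn.svm import OneClassSVM

# "Normal driving": a grid of points near the origin in a 2-D feature space.
normal = [[0.1 * i - 0.5, 0.1 * j - 0.5] for i in range(10) for j in range(10)]

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal)

# predict() returns +1 inside the learned "normal" region and -1 outside it;
# a -1 on live sensor data is what an IDA would treat as an incident.
inlier, outlier = clf.predict([[0.0, 0.0], [10.0, 10.0]])
```

The nu parameter plays the role of the regularisation discussed in Section 2.4.2: it upper-bounds the fraction of training points allowed to fall outside the learned boundary.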

2.2 Evaluation of Algorithm Performance

When evaluating the performance of a binary (i.e. two-class) classification algorithm such as an SVM classifier, it is necessary to consider both the accuracy in detecting datapoints belonging to the target class and the accuracy in detecting datapoints belonging to the outlier class. This is especially important when the numbers of available datapoints from the respective classes differ greatly. For example, if 90 datapoints are available from the target class and 10 datapoints are available from the outlier class, and a classifier classifies all datapoints as belonging to the target class, the resulting accuracy would be 90% even though all outlier datapoints were misclassified. One well-known performance measure for binary classification that exposes this problem is the F1-score.

The F1-score is based on two quantities: precision and recall. The precision of a classifier is the number of correct positive classifications (datapoints of the positive class classified as such) divided by the total number of positive classifications (all datapoints classified as belonging to the positive class). The recall of a classifier is the number of correct positive classifications divided by the total number of datapoints that actually belong to the positive class. The F1-score is the harmonic mean of precision and recall, as defined in Equation (1), and reaches its best value at 1:

    F1 = 2 · (precision · recall) / (precision + recall)    (1)

In the example described in the previous paragraph, take the rare outlier class as the positive class: no outlier datapoints are classified as such, so both precision and recall are 0 %, and the F1-score calculated by Equation (1) is also 0 %, in contrast to the seemingly good 90 % accuracy.
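These measures are straightforward to compute directly; the sketch below checks Equation (1) against both the imbalanced example (with the outlier class taken as the positive class) and the precision and recall values reported in the abstract:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN). Zero-safe for empty cases."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def f1_score(precision, recall):
    """Harmonic mean of precision and recall, Equation (1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The imbalanced example, with the outlier class as the positive class:
# 0 outliers detected (tp=0), 0 false detections (fp=0), 10 missed (fn=10).
p, r = precision_recall(tp=0, fp=0, fn=10)
assert f1_score(p, r) == 0.0  # F1 exposes the failure that 90 % accuracy hides

# With the values reported in the abstract (precision 99.39 %, recall 100 %):
assert abs(f1_score(0.9939, 1.0) - 0.9969) < 5e-4
```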

2.3 Large-Margin Binary Classification Using Support Vec-


tor Machines

In order to understand SVM’s, consider the data set created by Ronald Fisher with
the objective of solving the taxonomic problem of distinguishing iris flower species
Iris Setosa from Iris Versicolor and Iris Virginica by observing four parameters:
sepal length, sepal width, petal length and petal width [22]. While Fisher used
traditional statistical methods and manual calculations as an attempt to find co-
efficients which could be used to classify the species, using SVM’s it is possible to
automatically find a classifier which accurately identifies the species of iris flower.

2.3.1 Understanding the Power of Support Vector Machines

Considering only sepal length and sepal width as parameters, and plotting each observed iris flower as a datapoint based on these two parameters, an SVM will find the straight line which separates the datapoints of the two classes with the largest possible margin of separation. It can readily be seen in Figure 2 that the Iris Setosa datapoints are indeed linearly separable from Iris Versicolor in Fisher's dataset; however, the same cannot be said for the randomly generated data in Figure 3a. An adjustment is necessary to distinguish between classes that are not linearly separable. SVMs handle this by mapping non-linearly separable data into a higher dimension where it is possible to obtain, instead of the optimal separating line, the optimal separating hyperplane [23][24]. From the higher dimension, this hyperplane is mapped back to the original dimension to obtain a non-linear separator, even though only linear methods have been used. This method is called the Kernel trick, where Kernel refers to the function used to map the datapoints to the higher dimension (worth noting is that a non-linear kernel is used during training of the SVM, but once training is finished only linear methods are used to classify data). The non-linear separator shown in Figure 3b is the result of classification using the Kernel trick.
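The linearly separable case of Figure 2 can be reproduced in a few lines. The thesis performs its analysis in Matlab; the sketch below instead uses Python with scikit-learn, which ships Fisher's iris data, and keeps only the two sepal features:

```python
# Sketch (not the thesis's Matlab code): a maximum-margin SVM on
# Fisher's iris data using only sepal length and sepal width.
from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
# Classes 0 and 1 are Setosa and Versicolor; columns 0-1 are the sepal features.
mask = iris.target < 2
X, y = iris.data[mask, :2], iris.target[mask]

linear_svm = SVC(kernel="linear").fit(X, y)
print(linear_svm.score(X, y))    # training accuracy; the classes separate cleanly
print(len(linear_svm.support_))  # number of support vectors defining the margin
```

Swapping `kernel="linear"` for `kernel="rbf"` applies the Kernel trick described above and yields a non-linear boundary for data like that in Figure 3.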

[Figure: scatter plot of Sepal Length (cm) versus Sepal Width (cm) showing Iris Setosa, Iris Versicolor and the support vectors.]

Figure 2: Classification of Iris Setosa versus Iris Versicolor using an SVM classifier. Generated using Fisher's Iris data set.

[Figure: two scatter plots of the randomly generated classes (Class 1, Class 2) with the support vectors highlighted. (a) SVM using Linear Kernel. (b) SVM using Radial Basis Function Kernel.]

Figure 3: Visualisation of the results of using the "Kernel trick" in order to separate linearly inseparable data using an SVM.

2.3.2 Understanding The Kernel Trick

As mentioned in Section 2.3.1, SVMs provide non-linear decision boundaries by mapping non-linearly separable datapoints into a higher dimension where they are linearly separable. A step-by-step visual description of how the Kernel Trick works is displayed in Figure 4. In Figure 4a, two datasets containing two-dimensional datapoints are represented in a Cartesian plane. Clearly, the two datasets can easily be separated by a straight line, as illustrated by the dotted line drawn between them. Figure 4b shows a case where the datapoints contained in each dataset are distributed in such a way that it is not possible to separate them with a straight line. The dotted line drawn in the figure would give rise to misclassifications if used as a decision boundary. This is where the Kernel Trick comes in handy: in Figure 4c, the datapoints of both datasets are mapped into a three-dimensional space using the same non-linear function, termed the "Kernel", and it turns out that it is possible to find a plane in 3D which optimally separates the datasets. Using the same Kernel function to map the optimal separating plane back to the original dimension provides the non-linear separator drawn as a dotted line in Figure 4d. Use of the Kernel Trick is not limited to low-dimensional classification problems such as the ones described in this section; it handles much higher dimensions using the same approach.
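The lifting step of Figure 4c can be made concrete with a classic toy example (our own illustration, not taken from the thesis): two classes on concentric circles are not linearly separable in 2D, but the non-linear map z = x² + y² lifts them onto a paraboloid where a horizontal plane separates them perfectly.

```python
# Illustration of the idea behind the Kernel Trick: lift 2D points into
# 3D with a non-linear map so that a plane can separate the classes.
import math

def lift(point):
    x, y = point
    return (x, y, x * x + y * y)  # non-linear map into 3D

inner = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.0, 3.0)]          # radius 1
outer = [(3 * math.cos(t), 3 * math.sin(t)) for t in (0.5, 1.5, 2.5, 3.5)]  # radius 3

# In 3D, every inner point has z = 1 and every outer point has z = 9,
# so the plane z = 5 separates the classes perfectly.
assert all(lift(p)[2] < 5 for p in inner)
assert all(lift(p)[2] > 5 for p in outer)

# Mapped back to 2D, the plane z = 5 becomes x^2 + y^2 = 5: a circle,
# i.e. a non-linear decision boundary, as in Figure 4d.
```

In practice an SVM never computes the lifted coordinates explicitly; the Kernel function supplies the inner products in the higher-dimensional space directly, which is what makes the trick computationally cheap.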

[Figure: four panels with X, Y (and Z) axes illustrating the mapping. (a) Linearly separable 2D datasets. (b) Non-linearly separable datasets. (c) Mapping of datapoints into higher dimension (3D). (d) Non-linear decision boundary after mapping back to 2D.]

Figure 4: Step-by-step description of how SVMs produce non-linear decision boundaries.

2.4 One-Class Support Vector Machines for Data Description and Anomaly Detection

Binary classification using SVMs usually involves supplying training data from two or more different classes. In Section 2.3, the classes were different species of iris flowers, as well as some randomly generated datapoints, and training data was available from each class. One might, however, consider the case where the goal is to obtain an SVM classifier which can differentiate between two classes, but where there is little or no data describing one of them. An example of this case, as described by Tax and Duin in 2004 [24], is a machine monitoring system in which the current condition of a machine is examined and an alarm is raised when there is an anomaly. Measurements of normal working conditions are usually cheap and easy to obtain; however, obtaining measurements of abnormal operation would require deliberately damaging the machine in every failure mode one wishes to detect. The solution proposed by Tax and Duin is to use data from the well-sampled class, define it as the target class, and thus define any datapoint which does not classify as part of the target class as an outlier, or anomaly [24]. This approach is called Support Vector Data Description (SVDD) or One-Class Support Vector Machines (OC-SVM).

2.4.1 Using Samples From One Class to Distinguish Between Two

Reviewing the Fisher iris problem of Section 2.3, consider the case in which Fisher had only measured a large number of Iris Setosa flowers, but did not have access to any measurements from Iris Versicolor. Using an OC-SVM classifier, it is possible to isolate a region in the feature space and define it as the target class. Any samples which occur outside of this region will be considered outlier data; in this case, the target data will be Iris Setosa and any datapoint outside the learned separator will be classified as Iris Versicolor. This is illustrated in Figure 5. For illustrative purposes, the same method has been used to describe one of the classes of randomly generated data of Figure 3; the results of training a one-class SVM on only the positive examples (represented as "+") are displayed in Figure 6.
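The Figure 5 scenario can be sketched with scikit-learn's `OneClassSVM` (the thesis itself uses Matlab; this Python version is only an illustration of the same idea). Training sees Setosa samples only; Versicolor is encountered for the first time at prediction time:

```python
# Sketch of one-class classification: train only on Setosa sepal
# measurements and treat everything outside the learned region as
# "not Setosa" (here: Versicolor).
from sklearn.datasets import load_iris
from sklearn.svm import OneClassSVM

iris = load_iris()
setosa = iris.data[iris.target == 0, :2]      # training data: target class only
versicolor = iris.data[iris.target == 1, :2]  # never seen during training

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(setosa)

# predict() returns +1 for points inside the boundary, -1 for outliers.
print((clf.predict(setosa) == 1).mean())       # most Setosa samples accepted
print((clf.predict(versicolor) == -1).mean())  # most Versicolor samples rejected
```

The `nu=0.05` value here is an arbitrary illustrative choice; its role as a regularisation parameter is discussed in Section 2.4.2.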

[Figure: scatter plot of Sepal Length (cm) versus Sepal Width (cm) showing Iris Setosa, the one-class separator and Iris Versicolor.]

Figure 5: Using OC-SVM to distinguish between Iris Setosa and Iris Versicolor using only Iris Setosa samples for training. Generated using Fisher's Iris data set.

[Figure: scatter plot showing the known one-class datapoints, the one-class separator and the outliers.]

Figure 6: Using OC-SVM to distinguish between the two randomly generated, linearly inseparable classes.

2.4.2 Tuning the OC-SVM Decision Boundary Using Regularisation

As shown in Figure 6, the decision boundary does not enclose all known datapoints perfectly. This is due to the OC-SVM having been regularised during training. Regularisation is a way of ensuring that the resulting classifier does not merely recognise all datapoints it has previously seen, but also generalises well to new datapoints from the same class. In OC-SVM training, this is performed by adding a regularisation term, denoted by the Greek letter ν (nu), which adjusts the penalty for misclassifications during training. The value of ν lies in the interval from zero (exclusive) to one (inclusive). By setting a small ν, misclassifications are heavily penalised and the OC-SVM will try to find a decision boundary which includes as many of the training datapoints as possible. If there are unlikely or highly unusual datapoints in the set, it may be better for the OC-SVM to ignore these; increasing ν allows some misclassifications to be disregarded and produces a simpler classifier. Figure 7 shows the results of two OC-SVM classifiers. Figure 7a shows a highly overfitted decision boundary resulting from a small ν. On the other hand, Figure 7b shows an underfitted decision boundary, resulting from a ν set too high.
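The effect of ν is easy to observe empirically. In scikit-learn's `OneClassSVM` (used here as a stand-in for the thesis's Matlab training), ν upper-bounds the fraction of training points allowed to fall outside the boundary, so raising ν leaves progressively more of the training data classified as outliers:

```python
# Sketch: how the regularisation parameter nu changes the fraction of
# training data left outside the OC-SVM decision boundary.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # stand-in for one class of training data

fractions = {}
for nu in (0.01, 0.5, 0.9):
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(X)
    fractions[nu] = (clf.predict(X) == -1).mean()
    print(f"nu={nu:.2f}: fraction of training data outside boundary = "
          f"{fractions[nu]:.2f}")
```

A very small ν yields a boundary that tries to enclose nearly every training point (risking the overfitting of Figure 7a), while a ν near one discards much of the training data and yields the underfitted boundary of Figure 7b.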

[Figure: two scatter plots of the known one-class datapoints, the one-class separator and the outliers. (a) ν close to zero. (b) ν close to one.]

Figure 7: OC-SVM classification results with different levels of regularisation.

2.5 Using the Smartphone Application LogYard for Data
Collection

A smartphone application called LogYard [25] was used to perform all data logging
both during the writing of this thesis, and during a prior project [26] which resulted
in data used for this thesis. LogYard works by constantly sampling data from
a smartphone’s 3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer and
GPS location and estimated speed, and saving each sample to the smartphone’s
system storage. The sampling rate is set to 100 samples per second. Data is
stored in the form of Comma-Separated Value (CSV ) files and each log is denoted
by an anonymous numerical user ID. CSV-files can be imported and parsed into
spreadsheet form by both Matlab and most spreadsheet applications; a typical
output of a LogYard logging session is illustrated in Figure 8, where the CSV-file
in question was parsed by the application Numbers [27]. Each file also contains metadata detailing, e.g., the device model, sensor ranges and which unit system was used for the measurements. Data logging is started and stopped manually by the user; thus, the logs invariably contain motion data produced by smartphone interaction at the beginning and end of each log.

Figure 8: The CSV-file output of a LogYard logging session, parsed into spreadsheet
form by the application Numbers.
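Importing such a log into an analysis environment is a one-liner with pandas. The sketch below is hypothetical: LogYard's actual column names are not specified here, so `time`, `acc_x`, etc. are placeholders, and a small inline CSV stands in for a real log file:

```python
# Hypothetical sketch of importing a LogYard-style CSV log with pandas.
# Column names are placeholders, not LogYard's real header.
import io
import pandas as pd

csv_text = io.StringIO(
    "time,acc_x,acc_y,acc_z,gyro_x,gyro_y,gyro_z\n"
    "0.00,0.01,-0.02,9.81,0.001,0.000,-0.002\n"
    "0.01,0.02,-0.01,9.79,0.000,0.001,-0.001\n"
)
log = pd.read_csv(csv_text)      # for a real file: pd.read_csv("log.csv")
print(log.shape)                 # -> (2, 7): two samples, seven columns
print(log["acc_z"].mean())       # average vertical acceleration
```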

2.6 Medical Considerations

Aspects of the human body should be taken into consideration in the development of the algorithm; more specifically, how different accidents will affect the driver's ability to summon help, and what actions he or she can be expected to take.

2.6.1 Patient Response Resulting From Accidents Involving Trauma to the Brain

A study presented in 1998 found that the most common injuries in ATV accidents in the United States were orthopedic: 53.2 % of the drivers were afflicted by at least one such injury. Second most common were head injuries: 40.8 % of all drivers hurt their head in some way (scalp wound, concussion etc.) [28]. Both these types of injuries can leave the driver unable to move or otherwise incapable of contacting emergency services. Concussions are particularly dangerous; their symptoms include dizziness, seizures, trouble walking, weakness, numbness, decreased coordination, confusion and slurred speech [29]. These symptoms may not appear immediately, meaning that the driver may function normally for a short period after the accident but later become dizzy and fall or crash again. A specific danger with this kind of injury is that the driver may cancel the alarm raised by the smartphone application and/or inform the ICE that it was a false alarm, and later lose consciousness.

2.6.2 Movability Following a Serious Accident

Following any accident involving high velocity and sudden stops, broken bones are a common complication [30]. If a driver crashes, several scenarios are possible, some involving broken bones; in the more extreme cases the driver may be unable to walk at all, unable to manipulate hands and fingers, or may suffer a punctured lung or skin. Most of these extreme injuries leave the driver incapable of getting himself or herself the help needed.

2.7 Smartphone Application

Since one aim is to release an application containing the finished IDA (see Section 1.3), an investigation into smartphone application programming is needed as well. Several smartphone Operating Systems (OS) exist; Google's Android and Apple's iOS hold the two largest shares of the smartphone market [31] and are well supported by online resources aimed at novice application programmers, and are therefore chosen.

2.8 Smartphone Application Development: Apple iOS

The development of applications for Apple iOS devices is done using Objective-C or Swift. Available and relevant functionality incorporated in the software development kit (SDK) is mentioned and briefly described in the following sections.

2.8.1 Apple iOS and Swift Programming

Traditionally, Objective-C has been used to develop for the Apple range of operating systems, namely iOS, watchOS, tvOS and macOS; since 2014, however, it is also possible to program for these systems in the Apple-developed language called Swift. Swift builds upon Objective-C and C but has many simplifications which improve readability and make programming easier, in particular for beginners. Swift programming in the Apple-supplied IDE, Xcode, supports the use of so-called Playgrounds, which evaluate each line of code written in real time so that it is possible to troubleshoot blocks of code in an isolated and controlled environment [32]. A Playground example is illustrated in Figure 9. In the right margin of the playground, the resulting output from evaluating each line of code is shown and re-evaluated in real time as the user writes or changes code. While programming in Xcode, an application may be executed on any connected iPhone, which will communicate real-time energy impact, CPU and memory consumption, and disk and network usage to Xcode so that app performance may be analysed [33].

Figure 9: An Apple Xcode Playground for running Swift code.

2.8.2 Using Apple's Accelerate Framework For Optimised Digital Signal Processing

The SDK for Apple's iOS includes an optimised package for Digital Signal Processing (DSP) called vDSP [34]. This package provides mathematical functions for, e.g., vector and matrix algebra, statistical analysis, and frequency analysis using Fast Fourier Transforms, and is part of an Apple-developed and computationally efficient C-based general framework for advanced mathematics known as Accelerate [35].

2.8.3 Built-in Activity Classification Algorithms in iOS

The iOS SDK already includes code for activity classification in a class called CMMotionActivity, designed to identify the following activities: stationary, walking, running, automotive, cycling and unknown. Each activity classification also comes with a three-level confidence value associated with the classification. When an activity is detected, a flag for that specific activity is set and the confidence level for the detection can be accessed. Several flags can be set at once; e.g., if the user is driving a vehicle, the automotive flag will be set, and if said user stops the vehicle but does not get out, both the automotive and stationary flags will be set [36].

2.9 Smartphone Application Development: Google's Android

Android application development is done in Java, and several IDEs supporting Android packages exist. For this application, Google's Android Studio will be used, since it is easy to set up and is developed specifically for Android development.

2.9.1 Theory Behind API-level

Since several different companies produce smartphones with Android as their OS, a system with different API levels is used. When new functionality (such as communication with smartwatches) is introduced with a major update of the Android software, the API level is raised. This is done for mainly two reasons:

1. Manufacturers design and produce phones according to the demands placed on them by the API level they want to fulfil, and can use the API level to determine what hardware is needed. For example, a company's flagship model, containing all new functionality and some predicted for future updates, generally has a higher API level than the company's budget model.

2. Software developers choose which API level they develop their application for, i.e. how old (and consequently, how many) phones they want to be able to run their application. A lower API level will make the application available for more smartphones, but limit the functionality available to the application.

This means that every Android phone has an API level, and every application has one too. A given smartphone can run a given application if the API level of the smartphone is equal to or greater than that of the application. Android users do not, of course, need to make this comparison themselves, but will be notified if they try to install an incompatible application.
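The compatibility rule above reduces to a single comparison, sketched here in Python (the function name and this check are illustrative; it is not an Android API call, and on a real device the platform performs this check during installation):

```python
# Minimal sketch of the Android API-level compatibility rule: a device
# can run an app if its API level is at least the app's minimum level.
def is_compatible(device_api_level: int, app_min_api_level: int) -> bool:
    return device_api_level >= app_min_api_level

print(is_compatible(29, 21))  # newer phone, older app -> True
print(is_compatible(19, 21))  # phone too old for the app -> False
```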

2.9.2 Built-in Activity Classification Algorithms in Android

Available via Google is a method for activity classification which can predict the user's activity. The activities that the method is able to identify are: in vehicle, on bicycle, on foot, walking, running, still, tilting and unknown. When this method is called, different results can be returned; for example, the programmer can choose to obtain only the most likely activity, or all activities with a corresponding confidence. Activities are not mutually exclusive, for several reasons: one being that some are versions of others (walking and running are both versions of on foot), and another that they can be combined in real life as well (e.g. walking around in a bus, which would result in both on foot and in vehicle having high confidence) [37].

3 Method

Two main areas in this report are Matlab programming and smartphone application development. Matlab is used to both train and evaluate an IDA before implementing it in an application. Since the algorithm will run on smartphones, some Matlab functionality can't be used, and how data is handled has to be taken into account when both training and evaluating the algorithm, since training and evaluation are done on collected data in Matlab while the algorithm will run in real time on the smartphones.

3.1 Previously Available Dataset

Prior to the start of this thesis, about 55 hours of normal ATV driving data from 20 different ATV drivers had been collected in another project [26]. Using the smartphone application LogYard, detailed in Section 2.5, data was collected by a group of 20 volunteer drivers using ATVs in their work (e.g. in farming and forestry) or in their spare time. LogYard constantly samples and saves data from available motion sensors, i.e. gyroscopes and accelerometers, as well as orientation and position sensors, i.e. magnetometers and GPS. Data is generally sampled at a rate of about 100 samples per second; however, due to the way smartphone CPU time is divided between applications, the sampling rate may fall well below this level. The previous study found that approximately 8 of the 55 hours had to be omitted due to inconsistencies, leaving 47 hours of data to be analysed. Out of these 47 hours of data logs, about 15.5 hours were found to be affected by file corruption or inconsistencies, leaving 31.5 hours of data logs available in total for this thesis.

3.2 Data Evaluation and Classification

LogYard samples twelve different sensor values, as mentioned in Section 2.5; however, not all sampled sensor values were used for this project. Several different smartphone models have been used to collect normal driving data and, from visually analysing data from different sensors, some conclusions were drawn concerning sensor aptitude for the application:

• Magnetometer sensor quality seems to differ greatly among manufacturers. The quality of the sensor values varies to such a degree that using them as an input to the OC-SVM might cause misclassifications dependent on the smartphone model; they are therefore not used. Also, magnetometers are sensitive to magnetic fields generated even by small nearby electronic devices.

• GPS locations (longitude and latitude coordinates) are important in case an accident has occurred, but otherwise too uncertain. Sometimes the signal is lost completely; other times the estimated position is off by several hundred meters. GPS location is therefore only used to communicate an accident location.

• Estimated speed (derived from the GPS data) is as uncertain as the GPS data itself. Due to the position sometimes being off by hundreds of meters, and the signal sometimes being lost, improbable speed changes occur in the data; thus, the speed estimated from GPS data is not used as an input to the OC-SVM.

• Accelerometer sensors are generally robust and not prone to disturbances; they are therefore quite independent of the smartphone model that the driver has. A lot of information about the drive can be found in the accelerometer values, and they are one of the most important inputs to the OC-SVM.

• Gyroscope sensors, like accelerometers, are robust enough not to depend on the smartphone model; however, they usually produce a bias term in their signal which must be filtered out to obtain accurate measurements. Due to the nature of a normal drive, only certain events can be found within the gyroscope values. However, a roll accident logged by a gyroscope sensor has a quite unique pattern, and the sensor is a valuable input to the OC-SVM.

Some feature extraction from the accelerometer and gyroscope data is done as well; see Section 3.2.4 for more information about the input parameters to the OC-SVM.

Because LogYard starts logging immediately after a user manually enables it in the smartphone application, and continues up until the point when the user manually stops it, unwanted data is unintentionally included in every log. The unwanted data is composed mainly of motion data resulting from smartphone interaction related to starting and stopping the logging, and of motion data resulting from the user mounting or dismounting their ATV. Since neither of these cases is similar to normal driving, it is undesirable to train a normal driving classifier on them.

Labels for the data do not exist, and it is most likely that other activities besides ATV driving have been logged, which has to be handled somehow. A likely activity to be found in the data is walking, since it is necessary to walk to and from the ATV in order to drive it.

3.2.1 Removal of Accidentally Logged Walking Data: Preprocessing
of Data

Since it was desirable to train on largely pure driving data, it was necessary to attempt to remove data that was not driving data, such as data created when drivers forgot to turn off data logging after getting off their vehicles. The absence of labels corresponding to the activities logged in the dataset meant that the identification and removal of irrelevant data was left up to the authors.

To facilitate the removal of such data, different preprocessing methods were tried:

Preprocessing method 1 (not used): Although the data mentioned in Section 3.1 was assumed to be of sufficient quality, some evaluation of the data was also required. With the help of Matlab, data with too low a sampling frequency was removed (varying thresholds were tried in the interval 40 − 60 Hz). Samples where too many sensors, or one sensor for too long a time, were not updating their values were also scrapped (here, too, variations were tried; the best criterion seemed to be all sensors locked up, or one sensor locked up for 100 ms). A minimum window length was also implemented, meaning that a continuous area of good data that was too short was also scrapped. Every logged drive was evaluated and data sections not fulfilling the demands were removed, meaning a single logged drive could be split into several datasets, since most drives contain more than just one window length of continuous data. When a segment of bad data is found, the data before the first bad sample is saved as one interval, and the first good sample after the bad segment becomes the first sample of a new interval.

Preprocessing method 2 (used): 200 seconds of data were removed from both the start and the end of each logged drive, as it was assumed that the likelihood of the drive having started was high after 200 seconds. This data trimming was done for several reasons: to remove abnormal samples originating from the driver turning logging on/off, to remove areas of uncertain activity (such as mounting/dismounting the ATV), and to train on longer datasets. This left 21 hours of data from the 31.5 hours mentioned in Section 3.1, of which around 1.5 hours were used for training and the rest for testing.
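The trimming step amounts to slicing a fixed number of samples off each end of a log. A minimal Python sketch (the sample rate and trim length are taken from the text; the data layout and function name are our own illustration):

```python
# Sketch of preprocessing method 2: drop the first and last 200 seconds
# of each log, assuming the nominal rate of 100 samples per second.
SAMPLE_RATE_HZ = 100
TRIM_SECONDS = 200

def trim_log(samples: list) -> list:
    """Remove TRIM_SECONDS worth of samples from both ends of a log."""
    n = TRIM_SECONDS * SAMPLE_RATE_HZ
    if len(samples) <= 2 * n:
        return []  # log shorter than two trim windows: nothing usable
    return samples[n:-n]

log = list(range(60 * 60 * SAMPLE_RATE_HZ))  # a one-hour log of dummy samples
trimmed = trim_log(log)
print(len(trimmed) / SAMPLE_RATE_HZ / 60)    # minutes remaining after trimming
```

Logs shorter than 400 seconds are discarded entirely, since trimming would leave nothing behind.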

3.2.2 Removal of Accidentally Logged Walking Data: Removal of Data

Removal method 1 (not used): It was assumed that most or all of the irrelevant data would consist of short periods of walking until the drivers realised that logging was still taking place; therefore, an algorithm was developed to identify and classify walking data only. About 80 minutes of walking data was collected using LogYard with varying positioning of the smartphone (trouser pocket, shirt pocket and in a handbag) and subsequently screened to remove sections of the data with low sampling rates. This dataset was known to contain only walking and was used to develop the classification algorithm. However, the classification algorithm proved to remove too much data, leaving only a fraction for training. Examining the results, it seemed that small groups of samples, or single samples, were removed here and there in the datasets, causing other criteria to fail later in the evaluation process.

Removal method 2 (not used): A signal-to-noise ratio (SNR) approach was also tried, based on applying the Fast Fourier Transform (FFT) to the accelerometer data. Lower frequencies were considered signal and the remaining frequencies noise, since walking typically is a low-frequency activity. The magnitudes of all frequencies below a certain threshold were summed and considered the signal, while the remaining magnitudes were also summed but considered noise. To determine whether the current sample was walking or not, the ratio between signal and noise was calculated by dividing the signal by the noise; if this ratio reached a certain level, the sample was considered walking. As with the previous method, too much data seemed to be removed from the normal driving dataset, most likely due to misclassification.
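The SNR idea can be sketched with NumPy's FFT routines. The cutoff frequency and threshold below are illustrative placeholders, not the values used in the thesis, and the comparison is written as signal > threshold · noise to avoid dividing by a near-zero noise sum:

```python
# Sketch of removal method 2: flag a window as "walking" when the
# low-frequency energy of the accelerometer signal dominates the
# high-frequency energy. Cutoff and threshold values are illustrative.
import numpy as np

def snr_is_walking(window, sample_rate=100.0, cutoff_hz=3.0, threshold=2.0):
    window = np.asarray(window, dtype=float)
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    signal = spectrum[freqs <= cutoff_hz].sum()  # low frequencies: "signal"
    noise = spectrum[freqs > cutoff_hz].sum()    # the rest: "noise"
    return bool(signal > threshold * noise)

t = np.arange(0, 2, 1 / 100.0)
slow = np.sin(2 * np.pi * 2 * t)   # 2 Hz: walking-like cadence
fast = np.sin(2 * np.pi * 20 * t)  # 20 Hz: vibration-like content
print(snr_is_walking(slow))        # -> True  (low-frequency dominated)
print(snr_is_walking(fast))        # -> False (high-frequency dominated)
```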

Removal method 3 (used): It was assumed that the preprocessing procedure would suffice, since the volunteer drivers had been expressly told to collect driving data [26]. The parameter ν (see Section 2.4.2) was adjusted in iterations during OC-SVM training. In every iteration, the OC-SVM was used to classify test data to see how much of it was considered normal driving. The method is detailed in Section 3.2.4.

3.2.3 Using the Matlab Function fitcsvm to Train an OC-SVM

In Matlab's Statistics and Machine Learning Toolbox, the function fitcsvm can be used to train an SVM from a set of training observations containing different features, together with a vector of labels for each row (i.e. sample). Training an OC-SVM instead of the default two-class SVM involves labeling all observations equally, i.e. as belonging to the only available class, and tuning parameters exclusively intended for one-class learning. Barring Kernel selection, i.e. selecting and/or modifying the non-linear Kernel function, which is essential in both two-class SVM and OC-SVM training, the main parameter to tune in one-class learning is the level of regularisation in training, here symbolised by the Greek letter ν (nu). A small ν heavily penalises misclassifications during training, i.e. observations which do not fall within the class boundaries, which may lead to overfitting. In turn, a large ν produces simpler solutions with possibly better generalisation abilities; however, this may lead to underfitting, i.e. a trained SVM which does not properly classify new cases (see Section 2.4.2). Fine-tuning the ν parameter is essential for the final SVM to produce satisfactory classifications.

Experimentation with training an OC-SVM on example observation sets generated by a known mathematical function, and subsequently using the trained OC-SVM to classify equivalent observation sets with different levels of added noise, showed several important points. Firstly, training an OC-SVM on raw data produced much higher misclassification rates than training on data which was scaled, either by mean-variance normalisation or by mapping the values of each feature to similar ranges; secondly, large differences in the ranges of features undermined the effect of ν-parameter tuning. The experiments also confirmed that increasing ν at training time resulted in a larger tolerance for noise in the classifications, as well as an increased acceptance of observations generated by different mathematical functions. The Kernel used throughout was the Radial Basis Function (RBF) Kernel, which is commonly used and is advocated as a primary alternative by a 2010 report [38].
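The importance of feature scaling can be demonstrated with a small synthetic experiment, sketched here in Python/scikit-learn rather than Matlab's fitcsvm (the data, seed and ν value are our own illustration). One feature has a range roughly a thousand times larger than the other, so without normalisation the RBF kernel's distance computation is dominated by the large-range feature and outliers in the small-range feature go undetected:

```python
# Sketch: OC-SVM with and without mean-variance normalisation on data
# whose two features have wildly different ranges.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Training data: feature 1 ~ N(0, 1), feature 2 ~ N(0, 1000).
X_train = np.column_stack([rng.normal(0, 1, 1000), rng.normal(0, 1000, 1000)])
# Outliers: shifted by 10 standard deviations in the SMALL-range feature.
X_out = np.column_stack([rng.normal(10, 1, 500), rng.normal(0, 1000, 500)])

raw = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)
scaled = make_pipeline(
    StandardScaler(),  # mean-variance normalisation
    OneClassSVM(kernel="rbf", gamma="scale", nu=0.05),
).fit(X_train)

raw_detect = (raw.predict(X_out) == -1).mean()
scaled_detect = (scaled.predict(X_out) == -1).mean()
print(raw_detect)     # outliers largely missed without scaling
print(scaled_detect)  # detected after normalisation
```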

3.2.4 Training an OC-SVM Classifier on Normal Driving Data

Due to time constraints and the large number of observations available (around 21 hours), an OC-SVM was trained on a small fraction of the total available data (around 1.5 hours). However, in order to avoid training only on one or a few sets of user data, and to avoid training on breakpoint areas between logs, a script was developed which produced an adequate training set. Concisely, the script first loaded all user data, categorised by user and log number. Then, a short period of time was clipped from the start and end of each log, as mentioned in Section 3.2.1. The resulting logs were examined in order to produce feature vectors for each observation, using mean-variance normalisation to obtain adequate value ranges for each feature. Each new observation was added to a larger dataset which would include all observations. Finally, the order of the observations in the large dataset was randomised several times, using a randomisation script built around the Matlab random number generator. A fraction (1.5 hours) of the randomised set was reserved as training data, while the other, much larger fraction (19.5 hours) was saved as a test set to act as observations not previously seen by the OC-SVM.

Several different variations of the method were tested, among others: feature selection, historical sample influence on features, and the number of historical samples to consider. However, the governing parameter at training time was the ν-parameter described in Section 3.2.3. Additional parameters were implemented post-training in order to simplify individual run-time adaptation within the later smartphone application. One such parameter is a feature weighting vector which can tweak the run-time sensitivity to certain scenarios.

In the final OC-SVM solution, the input feature vector was composed of data from the accelerometers and gyroscopes, both current and historical. Accelerometer and gyroscope data was preprocessed separately for each axis of the 3-axis data. An estimated derivative of the gyroscope data is also included in order to let the OC-SVM specifically keep track of quick turns, and a historical measure for each parameter was saved as a new feature which would also go into the OC-SVM feature vector. Five seconds of history is used in an attempt to balance the two goals of catching inconsistencies in the short time leading up to an incident, while not drowning these inconsistencies in large amounts of historical data. A ν value of 0.75 was found to be a rough optimum, in that it produced the lowest false alarm rates for the normal driving data logs while still detecting all simulated accidents described in Section 3.2.5.
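The structure of such a feature vector can be sketched as follows. This is an illustrative Python reconstruction, not the thesis's Matlab code: the exact historical measure used in the thesis is not specified here, so a simple per-axis mean over the five-second history stands in for it, and all names are our own:

```python
# Illustrative sketch of building an OC-SVM feature vector from current
# and historical sensor samples: per-axis accelerometer and gyroscope
# values, an estimated gyroscope derivative, and a summary (here: the
# mean) of the previous five seconds of each signal.
import numpy as np

SAMPLE_RATE_HZ = 100
HISTORY_SECONDS = 5

def feature_vector(acc: np.ndarray, gyro: np.ndarray, i: int) -> np.ndarray:
    """Build a feature vector for sample i from 3-axis acc and gyro logs."""
    h = HISTORY_SECONDS * SAMPLE_RATE_HZ
    gyro_deriv = (gyro[i] - gyro[i - 1]) * SAMPLE_RATE_HZ  # finite-difference derivative
    history = np.concatenate([
        acc[i - h:i].mean(axis=0),   # historical summary per accelerometer axis
        gyro[i - h:i].mean(axis=0),  # historical summary per gyroscope axis
    ])
    return np.concatenate([acc[i], gyro[i], gyro_deriv, history])

rng = np.random.default_rng(2)
acc = rng.normal(size=(1000, 3))   # dummy 3-axis accelerometer log
gyro = rng.normal(size=(1000, 3))  # dummy 3-axis gyroscope log
print(feature_vector(acc, gyro, 600).shape)  # -> (15,): 3+3+3+6 features
```

Each such vector would then be mean-variance normalised (using statistics from the training set) before being passed to the OC-SVM.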

3.2.5 Collection of Simulated Crash Data

In order to obtain data for evaluation purposes, a number of abnormal movement patterns were realised while collecting sensor data with a smartphone. The initial experiment consisted of placing the smartphone in a rucksack and subsequently subjecting it to impacts, falls and rolling, separated by short periods of rest or carrying the rucksack around. A video was recorded during testing, and the different events were later identified and marked in the sensor data by comparing the timestamps in the data with the elapsed times pertaining to each event in the video.

A second experiment was also realised, in which an operator performed a number of abnormal movements with two smartphones placed on their person, one in a trouser pocket and one in a chest pocket, both simultaneously collecting sensor data. The goal of this experiment was to collect a longer timeseries of approximately naturalistic data, i.e. having a real person perform movements similar to accidents rather than subjecting a rucksack to extreme movements, in order to evaluate the performance of the IDA. The simulated events were as follows:

1. Tipping to the side and being caught underneath the vehicle.

2. Being thrown over the handle.

3. Rolling off the vehicle to the side.

4. Rolling off the vehicle in a forward direction.

5. Unintentionally performing a wheelie and falling backwards.

The different motions, apart from the backwards fall, were performed using a chair placed on a lawn; see Figure 10. The acts of falling off towards the sides, as well as being thrown forward, were simulated by an operator throwing their body in either direction from a sitting position in the chair, landing flat on the grass and lying still for a short period of time. Tipping and rolling were simulated by tipping or rolling off of the chair, and by rolling a few times on the grass. The backwards fall was simulated by sitting on a stool placed in front of a mattress. An operator sat on the stool with their back towards the mattress, slightly elevated above it, and subsequently let themselves fall backwards onto the mattress. These experiments were also filmed to allow for later event identification and marking in the data. Unfortunately, the data collected by the smartphone in the trouser pocket were deemed poor, due to long sequences in which no sensor values were updated, probably because a higher-priority application was running simultaneously.

Figure 10: Simulated accidents, how they were performed and which type of accident they simulate. Subfigures: (a) tipping to the side and being caught underneath the vehicle; (b) being thrown over the handlebar; (c) rolling off the vehicle to the side; (d) rolling off the vehicle in a forward direction; (e) unintentionally performing a wheelie and falling backwards.

3.2.6 Collection of Potentially Problematic Data

An experiment was conducted with the goal of evaluating the possibility of avoiding false alarms during common actions which may seem abnormal to the IDA (i.e. not normal driving). Such actions include mounting and dismounting the ATV, as well as interacting with the smartphone running the IDA. As some form of smartphone interaction is by definition captured in the beginning and end sections of all data, no additional smartphone interaction motion data was collected; however, data was collected for the act of mounting and dismounting several times from different directions. As an ATV was not available, a stone wall of similar height and width was used in order to obtain approximately naturalistic data. The motions were performed with two smartphones simultaneously collecting sensor data, one in a trouser pocket and the other in a chest pocket. The experiment was filmed in order to later identify and mark each event in the data. As with the simulated crash data, the trouser pocket data were deemed poor, for the same reason.

3.2.7 Summarisation of Data

In Table 3 a summarisation of the data used in this project can be seen. All data presented in the table has been subject to vetting, i.e. segments of deficient data have been removed.

Table 3: Summarisation of good quality data used in this project.

Type of data          | Quantity                  | Usage                    | Comment
Normal driving        | ∼21 hours                 | Train the OC-SVM         | Collected by several volunteers.
Simulated accidents   | 20 accidents              | Evaluate IDA performance | 20 accidents occurring back-to-back.
Problematic movements | 4 mountings & 4 dismounts | Evaluate IDA performance | Simulated using a stone wall.
Walking               | ∼80 minutes               | Evaluate IDA performance | Originally collected for a separate algorithm to remove accidentally logged data.

3.3 Notification of Emergency Services in Case of Accident

In Sweden, a project started in 2006 lets people with limited hearing and/or speaking abilities register their mobile phone with the national emergency service provider, SOS Alarm [39], which lets individuals and emergency operators communicate with each other via text message. This service is not available to everybody, since the project is still at an early stage, and communicating emergencies via text messages has turned out to take longer than a regular phone call. For this project, the optimal solution would of course be to send a text message directly to SOS Alarm with the last known location and other vital information. However, the infrastructure used to receive text messages is dedicated to those with disabilities, and a smartphone application that can automatically send text messages could potentially overload the system.

In order to actually communicate the accident information, the driver must choose an ICE (In Case of Emergency) contact before the application can be used. This ICE will receive a message (note: not necessarily a text message) with the appropriate information, together with the task of contacting emergency services and/or finding the driver.

3.4 Smartphone Application Development: General Idea of Program Operations

To make it easier to grasp how the application would work, a flow chart is presented in Figure 11. Some steps are not included and others require further explanation.

This list explains the functionality of the different boxes in Figure 11; the bold words correspond to the labels of one or more boxes in the figure.

Application launched The user launches the application, but the IDA isn't launched.

Wait for IDA to be started (by user) It should be possible to start the application without running the IDA, if the user wants to change some settings, for example. If the IDA is stopped for some reason (turned off by the user) but the application is still running, this becomes the "active" state.

Is the screen off? (left column) An assumption was made that the driver isn't using the smartphone actively while driving, so if some interaction is detected (here in the form of a lit screen) the IDA shouldn't be running.

Wait for d1 sec To reach this box, the user has recently interacted with the smartphone but has now stopped; in order to avoid generating a false alarm from the driver mounting the ATV, a delay of d1 seconds was added.

Is SVM triggered? The first step of the IDA is an OC-SVM which, since it is trained with high sensitivity as a priority, results in some false positive predictions (preferred over false negatives, which would mean missing an incident). If the OC-SVM isn't triggered, an interaction check is made.

Start timer1 As a positive prediction is made, a timer is started to limit how long it may take to confirm the prediction.

Is ACC triggered? If the ACC¹ are not fulfilled in less than x1 seconds, the IDA starts checking the OC-SVM predictions again.

¹ The ACC will not be described in this report.

Start timer2 After a positive confirmation of the prediction, a second timer is started, because the potential alarm might still be rejected.

RC triggered? Even if the prediction is confirmed, there still remain scenarios where an alarm isn't suitable. The RC must be triggered in less than x2 seconds for the alarm to be cancelled.

Wait for d2 sec In order to give the IDA time to collect a sufficient amount of data for the next step, a delay is implemented.

Walking detected? Since alarms shouldn't occur if the driver dismounts the ATV and starts walking, a continuous check is made for walking, and the IDA is paused until the driver stops walking.

Is the screen off? (right column) This check is an extra rejection criterion: if all other predictions indicate an accident, a last check is made to see whether the driver is interacting with the phone before continuing.

Notify user, Start timer3 In order to give the driver a chance to cancel a false alarm, the phone starts notifying the driver that it is about to send an alarm. A timer is also started to limit the notification period.

Any user input? If the driver interacts with the application (note: the application, not the smartphone), the notification is stopped. Depending on the input, two different actions are executed.

Ask for feedback In case of a false alarm, the driver is asked to give feedback on the situation in order to further improve the IDA. A possible automatic improvement could be to alter thresholds depending on this feedback.

Send alarm This box is reached if there is no interaction from the driver, or if the button is manually pressed by the driver. A message is transmitted to an ICE containing vital information (such as the last known location, a time stamp, and some suggested actions).

Post alarm mode After an alarm has been sent, further actions can benefit both the driver and emergency personnel. One possibility is to send messages at fixed intervals to get the attention of the ICE (if the first message failed to do this); also, GPS positions may vary, and a more accurate position might have been acquired. Another possible action is to make the phone emit sound to draw the attention of passersby to the site (or, if the phone was dropped by the driver, to indicate where it is).
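The core chain of checks and timers above can be sketched as a simple decision function. This is illustrative only: the predicates are injected as callables, the timeout names mirror x1, x2 and timer3 from the list, the d1/d2 delays and walking check are omitted, and the busy-wait loops stand in for real callbacks and timers.

```python
import time

# Sketch of the alarm decision chain from Figure 11. Predicate callables
# stand in for real sensor checks; timeout values are placeholders.

def decide_alarm(svm_triggered, acc_triggered, rc_triggered,
                 screen_on, user_cancelled,
                 x1=5.0, x2=5.0, notify_window=30.0, now=time.monotonic):
    if not svm_triggered():                 # Is SVM triggered?
        return "keep_monitoring"
    t1 = now()                              # Start timer1
    while now() - t1 < x1:                  # Is ACC triggered within x1 s?
        if acc_triggered():
            break
    else:
        return "keep_monitoring"            # ACC never confirmed
    t2 = now()                              # Start timer2
    while now() - t2 < x2:                  # RC triggered within x2 s?
        if rc_triggered():
            return "alarm_rejected"         # e.g. walking detected
    if screen_on():                         # last interaction check
        return "alarm_rejected"
    t3 = now()                              # Notify user, start timer3
    while now() - t3 < notify_window:
        if user_cancelled():
            return "ask_for_feedback"
    return "send_alarm"                     # no cancellation: alarm goes out
```

A real implementation would of course run event-driven rather than polling, but the ordering of the checks is the point of the sketch.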

Figure 11: Flow chart of general smartphone application.

3.5 User Smartphone Interaction and Undesired Triggering of Alarms

By choosing smartphones for motion data collection (rather than a separate unit), a problem arises from the fact that a smartphone is a multipurpose device. Regular usage of a smartphone, such as texting, answering phone calls and playing games, creates sensor motion patterns that are closely related to incidents and crashes (for instance, large accelerations over short periods of time).

To counter this implementation issue, a test program (for Android) was created to determine which sensors and parameters could be used, and how. A screen shot of this test program is seen in Figure 12; two sensors (both located on the front of the smartphone) and two parameters set by the OS are sampled and their values presented:

• Lum corresponds to the light sensor on the phone and is presented in lux.

• Prox represents the proximity sensor, which the phone mainly uses to determine where the phone is during a telephone conversation (close vs. not close to the head), and is presented in cm.

• Locked is the first parameter; in Android it is possible to extract whether the phone is locked or not (even if no password is needed to access the smartphone, a "lock screen" exists).

• Screen is the second parameter and shows whether the screen is lit or not.

Figure 12: Test program for different sensors used to determine user action.

Enclosed in red rectangles are six different scenarios, as follows:

1. The phone is unlocked and rests in an operator's hand, exposed to normal office light, with a lit screen and nothing close to the proximity sensor.

2. From the previous state, the phone is raised up to the head, to mimic the placement during a phone call.

3. In the same position, the screen is manually turned off. A specific setting of this smartphone is that it isn't locked immediately after the screen is turned off.

4. The phone is placed in a trouser pocket.

5. Still in the pocket, the screen is turned on but remains locked.

6. The smartphone is brought out of the pocket, with the screen lit while still locked.

All these different scenarios create unique sets of sensor and parameter combinations. Other possibilities include, e.g., the same scenarios but in a dark room. However, for a first version of a commercial application only a few things are necessary: to be able to identify when the smartphone is placed in a pocket, and when the user interacts with the phone (which is simplistically represented by a lit and unlocked screen). So, with the four sensors and parameters listed, enough information is extractable to handle user smartphone interaction.
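The pocket/interaction distinction drawn above could be expressed with a few rules over the four signals. A minimal sketch, assuming invented lux and cm thresholds and rule ordering; the thesis does not specify these values.

```python
# Sketch: infer user state from the four signals shown in Figure 12
# (Lum, Prox, Locked, Screen). Thresholds and rules are illustrative.

def classify_state(lum_lux, prox_cm, locked, screen_lit):
    """Return 'interacting', 'in_pocket' or 'idle' for one sensor snapshot."""
    if screen_lit and not locked:
        return "interacting"      # lit and unlocked screen: user is active
    if lum_lux < 1.0 and prox_cm < 2.0:
        return "in_pocket"        # dark and something close: likely a pocket
    return "idle"
```

While the state is "interacting", the IDA would be paused, matching the screen checks in the flow chart of Figure 11.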

3.6 Smartphone Application Development: Apple iOS

In order to evaluate the feasibility of an SVM-based IDA running in real time on an Apple iPhone, a simple application was designed to incorporate some of the functionality which would likely be necessary or valuable in the final app.

3.6.1 Sensor, Activity Detection and Calculation Performance Testing Application

A simple iOS application was built to sample sensor data at a rate of 100 Hz and display sensor values in real time, as well as the results of the built-in activity recognition. The application displayed the current, max and min values of each axis of the accelerometer and gyroscope, respectively, as well as of the absolute values calculated at each sampling instance for the accelerometer and gyroscope axes. A set of switch icons in the application showed which of the following activity flags were set: Walking, Running, Cycling, Automotive, Stationary and Unknown. The built-in level of confidence for the current activity state was also displayed in the application, next to the set of switches. A screen shot taken with the device, an iPhone 6, lying still on a desk is displayed in Figure 13a.

In order to assess the performance of the built-in activity recognition, an operator brought the device onto a tram, travelled for two stops, and then got off. This was documented using screen shots, the first of which, seen in Figure 13b, depicts the activity states while the operator was on the moving tram. The activity recognition estimated, with a high level of confidence, that the operator was in a vehicle. In the subsequent case, seen in Figure 13c, the tram was stationary at a tram stop. The activity recognition correctly estimated that the operator had not moved off the tram and was thus inside a stationary vehicle; however, the confidence level was lower. Seconds after the tram started moving again, the screen shot in Figure 13d was captured, which shows clearly that the activity recognition correctly determined that the vehicle was moving again. Finally, when the screen shot in Figure 13e was taken, the operator had just gotten off the tram and taken a few steps towards the booth at the tram stop, leading the activity recognition to estimate that the operator was walking, albeit with initially low confidence.

Initial testing showed that an iPhone 6 had no problem running the application and updating the sensor values in real time; in fact, historical activity recognition results are available regardless of whether the application is running, as the device already collects and saves historical data by default [40]. To account for the possibility that frequency analysis would improve the performance of the IDA, a modified version of the application ran an FFT in real time at every sampling instance, normalised the results, and subsequently located and printed the value of the highest peak. Running the application with the device connected to the Xcode programming environment showed that battery consumption was classified as low, which suggests that sensor monitoring and frequency analysis will not seriously impact battery life.
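The per-window frequency analysis described above can be illustrated in a few lines. This is a naive DFT sketch rather than the app's FFT; the window length and the 100 Hz rate match the sampling used in the application, but everything else is illustrative.

```python
import cmath

FS = 100   # sampling rate [Hz], as used by the test application
N = 100    # assumed analysis window length (1 s)

def dominant_frequency(window):
    """Return (freq_hz, normalised_magnitude) of the strongest non-DC bin."""
    n = len(window)
    mags = []
    for k in range(1, n // 2):          # skip DC, keep positive frequencies
        x_k = sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        mags.append(abs(x_k))
    peak = max(mags)
    k_peak = mags.index(peak) + 1
    return k_peak * FS / n, peak / (n / 2)   # pure sine normalises to 1.0
```

An on-device implementation would use a proper FFT (O(n log n)) rather than this O(n²) loop, which is part of why the observed battery impact matters.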

Figure 13: Testing of the built-in activity recognition of an Apple iPhone 6, using a test application developed in this thesis. Subfigures: (a) phone lying still on a desk; (b) riding the tram; (c) tram stationary at a tram stop; (d) tram moving again after having stopped; (e) just getting off the tram.

3.6.2 Automatic ICE Notification in Apple iOS

In iOS it is not possible to automatically send either a text message or an e-mail. It is possible to display a message composition view with populated recipient, subject and message body fields, but actually sending the message requires user interaction [41]. However, the CFNetwork library supports both open and authenticated communication over HTTP and FTP; thus, an external server can be used to handle the sending of text messages and e-mails [42].
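On iOS, the alarm would therefore be posted to a relay server rather than sent as an SMS directly. A minimal sketch of such a payload follows; the field names, message wording and relay behaviour are assumptions for illustration, not part of the thesis implementation.

```python
import json

# Sketch: build the JSON body a hypothetical relay server could forward
# to the ICE contact as a text message or e-mail. All field names invented.

def build_alarm_payload(ice_phone, lat, lon, timestamp, battery_pct):
    return json.dumps({
        "recipient": ice_phone,          # the chosen ICE contact
        "last_known_location": {"lat": lat, "lon": lon},
        "timestamp": timestamp,          # e.g. an ISO 8601 string
        "battery_percent": battery_pct,
        "message": "Possible ATV accident detected. Please try to reach "
                   "the driver and contact emergency services if needed.",
    })

# The app would POST this over HTTPS (e.g. via CFNetwork on iOS);
# the server then relays it as an SMS or e-mail.
```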

3.7 Smartphone Application Development: Google's Android

Before the IDA could be implemented, some test applications were programmed. Some served to try out different functionalities that would most likely be incorporated in the final application; others were used to evaluate existing functionality and whether it could be used instead of creating similar methods, thus saving time.

3.7.1 Choosing of API-level

Which API-level the application should be developed for is the first decision when programming for Android (see Section 2.9.1). Regularly (about once a month), Google collects information on devices visiting the Google Play Store and presents various statistics, one of which is the distribution of API-levels [43]. At the time of writing, an API-level of 15 covers 97.3 % of all Android devices that visit the Google Play Store, and it was therefore chosen as this project's API-level. For perspective, the API-level was raised to 15 on December 16, 2011, as a second update to the Ice Cream Sandwich release [44]. On October 5, 2015, the API-level was updated to 23 [45] with the Marshmallow release, which is the highest level so far. The API-level might be lowered in order to support even more devices, although this is only possible if functionality new to API-level 15 isn't used.

3.7.2 Built-in Activity Classification Algorithms in Android

Activity classification in Android is dynamically updated. This means that a programmer can't obtain activity classification information at any given time. Instead, a request has to be sent, and a result is returned once the activity classification determines that it has useful/trustworthy information. A simple application was made which displayed the current classifications as on/off switches next to their respective confidence levels. A timer also displayed how long it had taken to reach the current classification and how much time had passed since it was updated (see Figure 14 for a screen shot of the application). In experiments conducted using this built-in classification, requests were sent asking for results as soon as possible, and results were obtained every 2 to 40 seconds. If the activity exercised by the operator was constant and continuous, updates came every 3 seconds on average; however, if the activity changed (for example, if the operator stopped walking and just stood still), a new update could take up to 40 seconds to arrive, often with low confidence in the new activity (barely over 50 %).

The series of screen shots in Figure 14 demonstrates how a bus stopping at a station is classified. Before the screen shot presented in Figure 14a, "In vehicle" was (correctly) classified with a confidence of 98 %, and this was rather quickly updated to the result presented in Figure 14a when the bus stopped at a bus stop. This result is still correct, but fails to identify that the bus has stopped. In Figure 14b a more correct assessment of the situation is presented, but with low confidence, meaning that without knowledge of prior results it is not that trustworthy. Figure 14c presents a more trustworthy and correct assessment; however, a long time (27 seconds) was needed to reach a confidence barely over 50 %. After a long update (such as the one presented in Figure 14c), a more confident result is quickly provided (if the new activity continues), as seen in Figure 14d, meaning the classification algorithm seems to have trouble dealing with quick changes. When the bus starts to move again, a new update, Figure 14e, is acquired which has failed to catch the change of activity (or rather, failed to identify the new activity, since "Still" has a low confidence, which is correct). After another rather long time (20 seconds), a correct result is presented, as seen in Figure 14f.

Figure 14: Testing of the built-in activity recognition of an Android smartphone, using a test application developed in this thesis. Subfigures: (a) bus stopped at station; (b) first update after stop; (c) second update after stop; (d) third update after stop; (e) bus starts to move again; (f) first update after moving again.

3.7.3 Automatic ICE Notification in Android

To contact an ICE with relevant information about a possible accident, some form of communication is necessary. Rural areas, such as forests, sometimes lack coverage for cellular services, so a minimally demanding transfer protocol is desired. Of the text-based communications available to Android programmers, regular text messages require the least from the network. To use the text message functionality, permission from the user is needed [46], which is acquired before the application is installed. It is therefore possible to compose a message containing information such as the last known location, a time stamp, the battery level, etc., and send it to the previously chosen ICE contact without any user interaction. Before the application can be used, an ICE must be chosen, and the application shouldn't be functional if no ICE exists.
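Composing such a message is straightforward; the sketch below shows one possible layout. The wording and field order are invented for illustration, and actual sending would go through Android's SMS API, which requires the user-granted permission mentioned above.

```python
# Sketch: compose the automatic SMS text sent to the ICE contact on
# Android. The message format is an assumption, not the thesis's.

def compose_ice_sms(lat, lon, timestamp, battery_pct):
    return ("AUTOMATIC ALARM: possible ATV accident.\n"
            f"Last known location: {lat:.5f}, {lon:.5f}\n"
            f"Time: {timestamp}\n"
            f"Phone battery: {battery_pct} %\n"
            "Please try to reach the driver; contact emergency "
            "services if there is no response.")
```

Keeping the message within a single SMS is desirable, since it is sent over networks with minimal coverage.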

4 Results

Three different algorithms make up the IDA: the OC-SVM, the ACC and the RC. The RC is, however, dependent on methods running in real time on the smartphone, and can therefore not be evaluated on previously collected data. Here, the OC-SVM and the ACC are therefore evaluated on different datasets and their results presented. Worth mentioning is that all user smartphone interaction should be handled by sensors not logged during data collection, meaning that these samples will not be handled by either the OC-SVM or the ACC and will therefore occur as false alarms in all datasets.

Figure 15: Flow chart of the IDA.

4.1 The Incident Detection Algorithm

Tests with the OC-SVM proved it to be insufficient as an IDA on its own; its input is described in Section 3.2.4. However, no accidents were missed (which is very desirable), so if the extra predictions, i.e. the false positives, could be cancelled based on additional sensor information, an accurate IDA would be achieved. Thus, following a positive prediction from the OC-SVM, all Accident Confirmation Criteria (ACC) must be fulfilled within a limited time; see Figure 15 for an overview of how the IDA works. This proved very successful: during the test cases, all false positives were cancelled. However, two scenarios still caused problems. The first was the short periods of time when the user interacted with their phone, and the second was the act of mounting/dismounting an ATV. These motion patterns have the potential to both trigger the OC-SVM and fulfill all
ACC. Both mounting and initial smartphone interaction could be handled with a delayed start of the IDA, based on the assumption that the user starts the application while mounted on, or shortly before mounting, an ATV. In order to handle dismounting, the IDA checks for Rejection Criteria (RC). The RC are based on the built-in activity classification systems in Android and iOS. If a normal activity, such as walking, is identified with a high level of confidence, this is a good indicator that an accident hasn't occurred. If none of the RC are fulfilled within a specific timeframe, it is likely that an accident has occurred; however, as mentioned, abnormal data can be generated when the user interacts with the smartphone. Therefore, one last check is made to rule out this false alarm possibility, by checking whether the system detects user interaction, such as the screen being lit and the proximity sensor detecting that the smartphone is not near anything (such as the inside of a pocket). If user interaction is not detected, the anomaly is classified as an accident; see Figure 15.
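The RC check described above can be reduced to a small predicate over the activity classification output. A sketch under stated assumptions: the activity set, the confidence threshold and the return convention are invented, not taken from the thesis.

```python
# Sketch: decide whether a built-in activity classification result
# fulfils a Rejection Criterion. Activity names and threshold invented.

NORMAL_ACTIVITIES = {"walking", "running", "cycling"}
MIN_CONFIDENCE = 0.75   # assumed; results barely over 50 % are weak

def rc_fulfilled(activity, confidence):
    """True if the result is a high-confidence normal activity,
    indicating that an accident has likely not occurred."""
    return activity in NORMAL_ACTIVITIES and confidence >= MIN_CONFIDENCE
```

The confidence gate matters because, as seen in Section 3.7.2, the built-in classifiers often report new activities with low confidence at first.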

4.2 IDA Performance During Walking

When the driver dismounts an ATV, the application should be manually stopped by the driver. However, it seems likely that it will sometimes be left on, on purpose or by accident, and in that case no alarm should be sent. If the application constantly generates false alarms due to walking, the driver will most likely stop using the application completely. So, in order to release a viable application, walking shouldn't generate false alarms.

Figure 16 shows how the OC-SVM handles walking data. Both at the beginning and at the end there are concentrations of alarms, as well as some around sample 0.5 · 10⁴ and one almost at 2 · 10⁴. Since no manipulation has been done to the raw file, the concentrations at the beginning and end consist of user smartphone interaction, i.e. when the user starts and stops the logging, respectively. The other occurrences are false alarms, and they follow the two highest spikes of the absolute acceleration, with the exception of those during user smartphone interaction.

For the same dataset the ACC was evaluated, and the result can be seen in Figure 17. Only during user smartphone interaction is anything classified as an accident, meaning that the false positives generated by the OC-SVM are cancelled, as desired.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the OC-SVM marked.]

Figure 16: Samples labeled as accidents by OC-SVM during a walk; false alarms are raised at samples 0.5 · 10⁴ and 2 · 10⁴.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the ACC marked.]

Figure 17: Samples labeled as accidents by ACC during a walk; no false alarms are confirmed.

4.3 IDA Performance During Mounting/Dismounting an
ATV

Included in the dataset are several mountings and dismounts (see Table 3), all logged by a smartphone placed in a breast pocket. Several false positives are raised by the OC-SVM, as can be seen in Figure 18. However, most of these should be handled by the methods discussed in Section 3.4: for mounting, a delayed start of the IDA, and for dismounting, the RC.

With a temporally altered ACC to accommodate the dataset, while still being representative, false alarms are raised, as seen in Figure 19, which further motivates the need for the RC.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the OC-SVM marked.]

Figure 18: Samples labeled as accidents by OC-SVM during mounting/dismounting an ATV; several false alarms are raised in this dataset.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the ACC marked.]

Figure 19: Samples labeled as accidents by ACC during mounting/dismounting an ATV; from sample 1400, false alarms are raised for a period of ∼241 samples.

4.4 IDA Performance During Wheelie Accidents

A wheelie accident happens when the ATV lifts its front wheels off the ground and continues this motion until the ATV falls over backwards, having rotated around the rear wheel axis. In Figure 20, six wheelie accidents are logged, namely the peaks located at or around samples 0.5, 0.8, 1.05, 1.34, 1.55 and 1.8 · 10⁴. All of these are identified by the OC-SVM, and some false alarms are raised by the operator standing up, as can be seen just before sample 0.9 · 10⁴. These false alarms are not considered in the performance evaluation of the IDA, since they don't reflect the procedure of the IDA, which can be seen in Figure 15. The two concentrations at the beginning and end are user smartphone interactions.

All accidents correctly labeled by the OC-SVM are confirmed by the ACC (see Figure 21), while no false alarms are raised.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the OC-SVM marked.]

Figure 20: Samples labeled as accidents by OC-SVM during wheelie simulations; accidents occur around samples 0.5, 0.8, 1.05, 1.34, 1.55 and 1.8 · 10⁴.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the ACC marked.]

Figure 21: Samples labeled as accidents by ACC during wheelie simulations; accidents occur around samples 0.5, 0.8, 1.05, 1.34, 1.55 and 1.8 · 10⁴.

4.5 IDA Performance During Sudden Stops and Roll Over
Accidents

The IDA was also evaluated for forward crashes (hitting objects such as trees straight on) with the driver flying over the handlebars, for rolling over to the side of the ATV (for example if one side suddenly falls into a ditch, throwing the driver off), and for rolling over forward, which could happen if the driver turns too sharply and the ATV loses grip. These three types of accidents are present in the dataset shown in Figure 22; further information about the positive predictions is presented in Table 4. There is some user smartphone interaction at the beginning and end of the set as well.

Table 4: Explanations for positive incident predictions in Figure 22.

Generated by       | Location ·10⁴ [-]         | Type
Forward crash      | 0.85, 1.2, 1.5, 1.9, 2.15 | Accident
Roll over: side    | 2.75, 2.9, 3.1, 3.3, 3.45 | Accident
Roll over: forward | 3.94, 4.56, 4.8, 5.17     | Accident
Placing chair      | 0.6                       | False alarm
Sitting down fast  | 1.35, 1.67                | False alarm
Leaning over       | 3.72, 4.44                | False alarm

Compared to the OC-SVM, the ACC performs better: all false alarms presented in Table 4 are cancelled while the accidents are confirmed, as seen in Figure 23 (worth noting is that the locations of the peaks etc. are slightly shifted from the values presented in Table 4 due to the nature of the ACC). Omitting the known problem areas containing user smartphone interaction, there were 42 instances of false alarms over all crash scenarios; however, 13 of these were generated by the operator initiating a fall but stopping suddenly and aborting it. Thus, only 29 false alarms were caused by completely irrelevant events.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the OC-SVM marked.]

Figure 22: Samples labeled as accidents by OC-SVM during sudden stop and roll over simulations.

[Plot: normalised absolute acceleration [m/s²] vs. sample, with incident predictions by the ACC marked.]

Figure 23: Samples labeled as accidents by ACC during sudden stop and roll over simulations.

4.6 IDA Performance During Normal Driving

Although the OC-SVM was trained on only a small fraction of the available data (about 1.5 hours), owing to the randomised observation structure described in Section 3.2.4, the resulting precision on the large test set (the remaining ∼20 hours) was high. For the case including all observations in the dataset, shown in the second row of Table 5, 99.29 % of observations were classified correctly. However, some individual logs produced rather worse results than others; these were therefore examined more closely.

Table 5: Results when letting the OC-SVM classify all observations in the dataset.

Case                  | Number of samples examined  | Number of samples "normal"  | Precision
All observations      | 7807604 (∼21 h 41 min 16 s) | 7751903 (∼21 h 31 min 59 s) | 99.29 %
Barring abnormal logs | 7569697 (∼21 h 1 min 37 s)  | 7523585 (∼20 h 53 min 56 s) | 99.39 %
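As a quick sanity check, the precision figures in Table 5 follow directly from dividing the number of samples classified as normal by the number examined (the helper name is illustrative):

```python
# Verify the precision percentages in Table 5 from the raw sample counts.

def precision_pct(classified_normal, total):
    return 100 * classified_normal / total

all_obs = precision_pct(7_751_903, 7_807_604)   # all observations
barring = precision_pct(7_523_585, 7_569_697)   # barring abnormal logs
```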

4.6.1 Inspecting Datasets with the Highest Levels of False Alarms

The two individual logs producing the most false alarms had specificities (true negative rates) of 95.90 % and 96.71 %, respectively, and turned out to both be from the same driver. Examining plots of their absolute accelerations revealed erratic values and possible lapses in sampling, which proved difficult both for the OC-SVM to classify correctly (see Figure 24) and for the ACC to cancel as false alarms (see Figure 25). Possible causes of these erratic patterns are data corruption, the driver forgetting that the logging application is running and doing some other activity, or driving an ATV with the smartphone in a loose pocket or bag, so that the smartphone moves around more independently than if it were confined in a tighter pocket.

Figure 24: Samples labeled as accidents by the OC-SVM in the worst observed individual log in the original dataset. (Figure: normalised absolute acceleration [m/s²] versus sample index, with incident predictions marked.)

Figure 25: Samples labeled as accidents by ACC in the worst observed individual log in the original dataset. (Figure: normalised absolute acceleration [m/s²] versus sample index, with incident predictions marked.)

Since it is impossible to discern why the two logs included such erratic data, and because they did not make up a significant part of the overall test set, these were omitted to obtain the precision in the last row of Table 5. For comparison, the log with the worst results, barring the two aforementioned, was plotted and examined. The OC-SVM achieved 97.47% precision on this log and produced false alarms in a few sampling instances, as visible in Figure 26; however, as shown in Figure 27, the ACC produced no alarms, so all false alarms produced by the OC-SVM would have been cancelled in this case.

Figure 26: Samples labeled as accidents by the OC-SVM in the worst observed individual log in the modified dataset. (Figure: normalised absolute acceleration [m/s²] versus sample index, with incident predictions marked.)

Figure 27: Samples labeled as accidents by ACC in the worst observed individual log in the modified dataset. (Figure: normalised absolute acceleration [m/s²] versus sample index, with incident predictions marked.)

4.6.2 Estimated F1-Score for the OC-SVM During Normal Driving

Barring the two erratic cases described in Section 4.6.1, and not taking into account post-processing by the ACC, the OC-SVM obtains an estimated F1-score of 99.69%. Specifically, this was calculated as shown in Equation (2), where precision is the fraction of normal driving samples correctly classified (i.e. 99.39%) and recall is the fraction of simulated accidents that were discovered by the OC-SVM (i.e. 100%).

F1 = 2 · (precision · recall) / (precision + recall) = 2 · (0.9939 · 1.00) / (0.9939 + 1.00) ≈ 0.9969        (2)
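Equation (2) can be reproduced in a few lines of Python, using the precision and recall values reported above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as in Equation (2)."""
    return 2 * precision * recall / (precision + recall)

# Values from Section 4.6.2: 99.39 % precision, 100 % recall.
f1 = f1_score(0.9939, 1.00)
print(f"{f1:.4f}")  # 0.9969
```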

4.7 IDA Performance Summarised

All final alarms from the IDA assume that the alarm is not cancelled by the RC after the IDA has classified the event as an incident. The RC depends on real-time built-in methods in the smartphone and cannot be fairly evaluated with collected data; furthermore, alarms related to user-smartphone interaction are not included, as these can be handled using other methods (see Section 3.5).

The number of alarms generated by the OC-SVM and ACC, as well as the calculated number of alarms from the IDA that would have been sent, are presented in Table 6. Alarms sent by the IDA rely on ACC alarms occurring in conjunction with OC-SVM alarms (and no Rejection Criteria confirmations) and, as such, each period of continuous alarms is viewed as one incident case. For the OC-SVM and ACC, the results represent the total number of sampling instances in which each respective classifier indicated that an accident had occurred, barring areas of user-smartphone interaction. As a reference, a one-second period in which all samples are classified as accidents corresponds to approximately 100 samples (due to the sampling frequency).
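Counting each period of continuous alarms as one incident amounts to counting maximal runs of consecutive alarm samples. A minimal sketch (the per-sample flag sequence below is made up):

```python
def count_incidents(alarm_flags):
    """Count maximal runs of consecutive truthy flags as single incidents."""
    incidents = 0
    previous = False
    for flag in alarm_flags:
        if flag and not previous:
            incidents += 1  # a new run of alarm samples starts here
        previous = bool(flag)
    return incidents

# Three separate alarm periods in this made-up per-sample sequence.
flags = [0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]
print(count_incidents(flags))  # 3
```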

Note that the number of final alarms raised by the IDA to the smartphone appli-
cation was not calculated for the worst observed driving data – these numbers are
only included for the interest of the reader.

Table 6: Summarised performance of the OC-SVM, ACC and IDA. The number of final alarms was not calculated for the worst observed driving data nor for areas of user-smartphone interaction, but it is clear that both would be large. Note that these results assume no cancellation from the RC.

Dataset                  OC-SVM alarms  whereof  ACC alarms  whereof  IDA alarms  whereof
                         raised         false    raised      false    raised      false
Walking data             3              3        0           0        0           0
Mounting & dismounting   75             75       141         141      1           1
Crashes                  1515           42       4962        0        14          0
Wheelie accidents        123            3        5341        0        6           0
Worst observed driving   3763           3763     5568        5568     -           -
in original dataset*
Worst observed driving   47             47       0           0        0           0
in modified dataset

5 Discussion

Several aspects of this report need further discussion, which can be found here, along with some topics not mentioned in earlier sections that nevertheless significantly affect this report's subject.

5.1 The Three Classification Methods Used in the IDA

Initially, only an SVM was going to be used to determine whether an incident had occurred; it proved insufficient, however, and additions were made.

First, the ACC was added, which greatly increased the performance of the whole IDA. However, the inner workings of the ACC are not suitable for running continuously, since false alarms would be raised; had they been, the OC-SVM would have been scrapped, since it is outperformed by the ACC. A hybrid was therefore created: the OC-SVM does a first check of the sensor values and, if it finds an anomaly, the ACC is brought in to analyse it further. This combination works well in most cases but proved insufficient in handling mounting and dismounting. No good method of handling those motions has been developed; they are currently handled by the RC: if walking is detected shortly after both the OC-SVM and the ACC have triggered an alarm, the alarm is cancelled. Mounting is currently handled by a simple delayed start of the IDA.
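The hybrid decision logic described above can be summarised in a short sketch; the three boolean inputs are placeholders for the outputs of the real OC-SVM, ACC and RC components:

```python
def ida_decision(svm_flags_anomaly: bool,
                 acc_confirms: bool,
                 rc_cancels: bool) -> bool:
    """Hybrid IDA decision: OC-SVM screens, ACC confirms, RC can veto.

    The inputs stand in for the outputs of the OC-SVM classifier, the
    confirmation criteria (ACC) and the rejection criteria (RC).
    """
    if not svm_flags_anomaly:
        return False       # cheap first check found nothing unusual
    if not acc_confirms:
        return False       # ACC analysed the anomaly and cancelled it
    return not rc_cancels  # RC (e.g. walking detected) vetoes last

print(ida_decision(True, True, False))  # True: alarm is raised
print(ida_decision(True, True, True))   # False: RC cancelled the alarm
```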

5.2 Cellular and Data Coverage in Rural Areas

As anyone who owns a smartphone knows, coverage varies with many factors. In an urban environment, being inside a building lowers reception, and thicker walls decrease coverage even further. In a rural area, the topography of the land plays a significant role in how good the available coverage is. Varying coverage carries a risk: if an accident occurs in a white spot (an area with no coverage), there is no way to communicate either the location or the fact that an accident has occurred.

This problem can be handled by continuous updates to a server, for example an update every five minutes from the smartphone to the server containing the current position and speed. If a certain number of updates are missed, the server can raise an alarm and provide a last known location.
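A server-side sketch of this idea; the five-minute interval comes from the text, while the missed-update threshold is an assumed tuning parameter:

```python
from datetime import datetime, timedelta

UPDATE_INTERVAL = timedelta(minutes=5)  # interval suggested in the text
MAX_MISSED = 3                          # assumed threshold, would need tuning

def should_alarm(last_update: datetime, now: datetime) -> bool:
    """Raise a server-side alarm when too many periodic updates are missed.

    The last received update doubles as the last known location.
    """
    return (now - last_update) > MAX_MISSED * UPDATE_INTERVAL

now = datetime(2016, 6, 1, 12, 0)
print(should_alarm(now - timedelta(minutes=20), now))  # True
print(should_alarm(now - timedelta(minutes=10), now))  # False
```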

5.3 Notification of Emergency Services in Case of Accident via Text Message

It is not uncommon for text messages to be received unnoticed, for example if the user and the smartphone are in separate rooms. People have a tendency to consciously ignore private messages and/or turn off notifications while at work, meaning an accident notification might not be discovered for several hours, losing valuable time for the injured driver. SOS Alarm does have the technology to handle text messages, and does so for a small part of the Swedish population (see Section 3.3), but is restricted from extending the same functionality to more people. It is understandable that a group not capable of using a phone by conventional means (i.e. people with hearing or speech disabilities) has priority over an automatic text message from an unofficial source, but other situations exist where emergency help would preferably be requested via text message. A common example is threatening situations, such as home invasions: being able to silently text emergency services instead of calling them seems a reasonable technological advancement. Decreasing the human contact between subject and operator does have disadvantages: it is harder to gather a correct picture of the situation without hearing how the subject speaks, and prank texts will undoubtedly occur. Emergency applications, such as the one proposed in this report, would also most likely require substantial resources. Even though many arguments can be made for why this functionality must be limited, development should be pursued in order to spread its availability further; one step in this process is eCall, which requires cars to automatically contact emergency services if an accident has occurred [47].

5.4 IDA Performance

Four main areas have been investigated for the IDA, most of which performed at least satisfactorily; further discussion of the results follows here. The hardest part proved not to be correctly classifying normal driving and simulated accidents, but rather making the IDA more consumer-friendly by adapting it to situations likely to occur.

5.4.1 Walking

An IDA that was triggered by walking would cause great annoyance for the user, since many scenarios can be imagined that require the driver to dismount the ATV for a short period of time, where temporarily turning off the IDA should not be necessary. As presented in Section 4.2, thanks to the ACC no alarms are raised, and together with the RC the IDA would be paused. Using just one walk from one person might seem a small set to evaluate on, and it is. This report's focus, however, is not the ability to distinguish walking from driving; the evaluation is only done to show that this is viable for a consumer version of the IDA, where the user most likely does not want to take out his or her smartphone every time they dismount the ATV. This simple evaluation is therefore included because the initial goal was to create such an application, but before an actual release further evaluation must be conducted.

5.4.2 Mounting/Dismounting an ATV

Before a commercialised application could be released, improvements must be made in the handling of mounting and dismounting. Some major simplifications have been made, such as assuming that the user will only turn on the IDA shortly before mounting and driving away on the ATV; if they were instead to turn it on and then walk around the house looking high and low for a helmet, an alarm could be triggered. As for dismounting, if the driver does not walk around for 30 seconds continuously, an Android version would most likely raise an alarm (see Section 3.7.2 for an explanation). The smartphone application proposal presented in Figure 11 should be able to handle the driver dismounting the ATV, walking around for a bit and then mounting the ATV again, provided the RC is triggered (i.e. the alarm aborted); otherwise, if an alarm is raised and confirmed by the ACC and the driver has time to mount the ATV before the RC cancels it, the application would proceed to notify the driver unnecessarily. For a better-performing application, the motion of mounting/dismounting would ideally be identified, so that the driver state could be monitored and the IDA automatically paused and started.

5.4.3 Simulated Accidents

With the combination of the OC-SVM and ACC, all collected accidents were correctly classified and no false alarms were raised. However, it is possible that overfitting occurred, and to evaluate the IDA further, more data must be collected, especially crashes. While each type of accident was simulated several times, the variations were small, mostly due to insufficient protection for the operator and limitations in time and equipment. Because of these limitations, the simulated accidents are deemed to be at the lower end of the severity spectrum, meaning that more realistic accidents would probably be even more distinct and easier to identify. To get more variance in a future data collection, several test persons should be used in crash simulations, since individuals might handle falling off an ATV differently.

5.4.4 Normal Driving

With the OC-SVM used, the majority of all normal driving samples were correctly classified; however, it is possible that the training set failed to capture a certain type of driving style prevalent in the two "worst case" logs. This could be remedied by allowing the training set to take more observations into account or by increasing the generalisation parameters during training as well as during run-time on the user's smartphone.

As an additional step in the evaluation of the IDA, the ACC should be included in the OC-SVM results to obtain information on how many of the false alarms could have been cancelled. For example, in the case with 97.47% precision detailed in Section 4.6.1, all false alarms would have been cancelled by the ACC, giving the IDA an accuracy of 100% in this case. Furthermore, it is impossible to discern whether an alarm would actually have been sent, since the RC are only applicable to run-time conditions and cannot be replicated in Matlab using the dataset available for the project.

5.5 Ethical & Privacy Considerations

Many applications already use motion information from accelerometers, gyroscopes and magnetometers, and the success of activity-tracking applications such as Runkeeper indicates a positive sentiment towards the use of these sensors. However, the use of other sensors and data could be viewed as excessively intrusive and, overall, access to sensors and information not obviously required for the primary use of an application may lead to user concerns about privacy. For example, when the Facebook Messenger application required that extensive permissions be granted before installation, this sparked an irate article in the Huffington Post about the need to put a stop to excessive requirements by application developers [48]. Although there seemed to be reasons behind each permission requirement, these were not clearly explained to users at install time. For the purposes of improving the IDA, it could be valuable to, e.g., use the microphone to listen for the starting, stopping or revving of a combustion engine; however, this could make users hesitant about using the application at all. The collection by the application of sensitive information such as health data could also be problematic, although the more obvious connection between a safety application and possessing information about e.g. blood type and age may assuage user privacy concerns in this matter.

5.6 Self-fulfilling Prophecies & Psychological Considerations

As a consequence of the IDA being trained and tested on a finite number of ATV rides, and although SVMs generalise quite well, a situation may arise during normal driving which, for some reason, is not classified as normal. If the subsequent checks for confirmation and rejection criteria also indicate that there has been an accident, the application will notify the user that an alarm is about to be sent. If the user notices this, he or she may quickly grow anxious that his or her ICE contact will receive frightening news, and it is conceivable that concentration on driving will waver, possibly leading to an accident where none would have occurred.

An entirely different concern is the possibility that having a safety application running will produce a false sense of security or invulnerability in the user and thus promote reckless driving. Although accidents occurring as a result of this may trigger alarms and thus the relatively swift arrival of medical professionals, the fact of the matter is that the accident might never have happened without the application.

6 Conclusion

The preliminary aims were:

1. to create an IDA which can identify accidents and crashes,

2. to optimise the IDA in order to obtain a high F1-score, and

3. to release it as an application through official distribution channels, such as the Google and Apple online application stores.

Regarding the first aim, all parts of the IDA have been constructed and evaluated with good results, but no merged version exists, since merging would only complicate the evaluation of parameters. Regarding the second aim, the F1-score is high, but it is based on simulated crash data.

For the third aim, work has been done to evaluate the possibility of running this type of computation in real time on smartphones, and it has generated only positive results. To actually release an application, many pieces still have to be put together, but all of the pieces are there and have been verified to work on each chosen platform.

Also, no hardware beyond that available in a smartphone has been used for any purpose in the IDA or the supporting structure around it.

Even though the goals chosen at the start of the project have not been fully completed, the major objectives have been. A working IDA exists and functions well; it is fully implementable on both Apple's iOS and Google's Android and would use only onboard sensors and built-in functions to evaluate data.

7 Future Development Suggestions

During the course of the study, several interesting ideas and theories came up. However, not all of them could be realised or even tested. Some of the more interesting ideas are discussed briefly here.

Evaluation of Close Call Situations

A dataset not actively collected and evaluated is close-call situations. One was accidentally logged (when an operator began to throw himself out of a chair but aborted the simulation and remained in the chair), which resulted in 13 false alarm predictions (see Section 4.5). It would be interesting to actively collect data on events that are almost, but not quite, accidents. The best way to do this would of course be to drive a real ATV in situations deemed problematic, such as rough terrain (driving over logs, through cairns, etc.), and also to ask regular drivers about situations they consider hazardous and relatively frequent. If such a set could be collected, a better evaluation of the IDA's performance would be possible.

Implement Concussion Test

In order to handle drivers who think they are fine after an accident (and thus manually cancel the alarm) but actually worsen quickly shortly afterwards, the implementation of a concussion test has been discussed. The general idea is that a short questionnaire appears after a cancelled alarm; see Figure 28. A possible model for the concussion test could be the one used by many athletes [49]; however, this test requires a reference test to be able to indicate whether a concussion might have occurred.

Another possibility is to monitor that the driver actually is conscious for a period after the cancelled alarm, by monitoring, for example, changes in GPS location, the accumulated magnitude of the accelerometer and/or gyroscope, or simply the speed.
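A rough sketch of the GPS-based variant; the 20 m threshold and the coordinates are made up for illustration, and the equirectangular distance approximation used here is only valid over short ranges:

```python
import math

def has_moved(track, min_distance_m=20.0):
    """Check whether the first and last GPS fixes are far enough apart.

    `track` is a list of (latitude, longitude) pairs in degrees; the
    distance uses an equirectangular approximation, fine at short range.
    """
    if len(track) < 2:
        return False
    (lat0, lon0), (lat1, lon1) = track[0], track[-1]
    radius = 6371000.0  # mean Earth radius in metres
    x = math.radians(lon1 - lon0) * math.cos(math.radians((lat0 + lat1) / 2))
    y = math.radians(lat1 - lat0)
    return radius * math.hypot(x, y) >= min_distance_m

# Roughly 22 m northwards: enough to suggest the driver is moving.
print(has_moved([(57.7089, 11.9746), (57.7091, 11.9746)]))  # True
```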

Figure 28: Questionnaire after a canceled alarm.

Implementation of eCall Protocol

A coming development in automatic emergency safety is eCall, a standard for cars to communicate with emergency services if an accident has occurred (such as an airbag deployment) [47]. By April 2018, all new cars sold within the European Union must have this functionality; however, it seems unlikely that any other operator will be allowed to use this infrastructure in the near future, especially smartphone applications. Without some sort of licensing or cooperation, open access to this service for smartphone application developers would most likely result in a high percentage of false alarms. If, however, the service is made available in the future, implementing it would be highly attractive.

Automatic Detection of Mounting/Dismounting in Order to Ensure Running of Algorithm

To handle the state of the driver (i.e. sitting on the ATV or not) optimally, each mounting and dismounting should ideally be detected. If so, further improvements could be made, such as letting the application (but not the IDA) run continuously in the background until a mounting motion is detected and only then starting the IDA. This would also improve the handling of false alarms from dismounted drivers (as discussed in Section 5.4).

Accessing Medical Information Stored on an iOS Device

Using a framework called HealthKit, which is available on Apple iOS devices, it is possible for users to store their health information on the device for use by authorised applications [50]. Emergency health information such as age, blood type, illnesses and medication is viewable from the lock screen of the device, so that a passerby can better aid an unconscious user. It could be useful to include such information in an ICE-contact message, so that the contact may relay accurate and vital information to emergency services. Naturally, this possibility would be governed by the prevalence of users who actually fill out their emergency health information page.

References
[1] Folksam, “Bicycle helmet test 2015,” www.folksam.se, 2015, [Accessed: 19
May, 2016]. [Online]. Available: http:
//www.folksam.se/media/folksam-bicycle-helmet-test-2015 tcm5-24933.pdf

[2] The Specialty Vehicle Institute of America. ”What is an ATV”.


www.svia.org. [Accessed: 26 June, 2016]. [Online]. Available:
http://www.svia.org/#/aboutATV

[3] Trafikanalys, “Road traffic injuries 2015,” www.trafa.se, 2015, [Accessed: 19


May, 2016]. [Online]. Available: http://www.trafa.se/globalassets/statistik/
vagtrafik/vagtrafikskador/vaegtrafikskador 2015.pdf

[4] The Swedish Transport Administration, Better Safety on Quad Bikes. Joint
strategy version 1.0 for the years 2014-2020. The Swedish Transport
Administration, 2013.

[5] Statistiska Centralbyrån / Statistics Sweden, “Privatpersoners användning


av datorer och internet 2015 [use of computers and the internet by private
persons in 2015],” www.scb.se, 2015, (in Swedish). [Accessed: 20 January,
2016]. [Online]. Available: http://www.scb.se/Statistik/ Publikationer/
LE0108 2015A01 BR 00 IT01BR1501.pdf

[6] S. Candefjord, L. Sandsjö, R. Andersson, N. Carlborg, A. Szakal,


J. Westlund, and B. A. Sjöqvist, “Using smartphones to monitor cycling and
automatically detect accidents - towards ecall functionality for cyclists,” In:
Proceedings, International Cycling Safety Conference 2014, 18–19 November
2014, Gothenburg, Sweden; 2014:1-9. [Online]. Available:
http://bada.hb.se/handle/2320/14570

[7] L. Bergbom, C. Engelbrektsson, S. Granberg, and L. Streling,


“Säkerhetsapp för ryttare [Safety app for horse riders],” Bachelor's Thesis,
Chalmers University of Technology, Gothenburg, Sweden, 2015, (in
Swedish). [Online]. Available:
http://publications.lib.chalmers.se/records/fulltext/219261/219261.pdf

[8] S. Garland, “National estimates of victim, driver, and incident


characteristics for atv-related, emergency department-treated injuries in the
united states from january 2010 – august 2010 with an analysis of victim,
driver and incident characteristics for atv-related fatalities from 2005
through 2007,” Directorate for Epidemiology, Division of Hazard Analysis,
U.S Consumer Product Safety Commission, 2014. [Online]. Available:

https://www.cpsc.gov/Global/Research-and-Statistics/Injury-Statistics/
Sports-and-Recreation/ATVs/ATVSpecialStudyReport.pdf
[9] Everyday Mysteries. ”What is a GPS? How does it work?”. www.loc.gov.
[Accessed: 26 June, 2016]. [Online]. Available:
http://www.loc.gov/rr/scitech/mysteries/global.html
[10] Federal Communications Commission. ”Understanding Wireless Telephone
Coverage Areas”. www.fcc.gov. [Accessed: 26 June, 2016]. [Online].
Available: https://www.fcc.gov/consumers/guides/
understanding-wireless-telephone-coverage-areas
[11] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. Reyes-Ortiz, “Human
Activity Recognition on Smartphones Using a Multiclass Hardware-Friendly
Support Vector Machine,” Ambient Assisted Living and Home Care, vol. 4,
2012.
[12] M. J. Abady, L. Luceri, M. Hassan, T. Chou, C, and M. Nicoli, “A
collaborative approach to heading estimation for smartphone-based PDR
indoor localisation,” 2014 International Conference on Indoor Positioning
and Indoor Navigation, 2014.
[13] S. Mazilu, M. Hardegger, Z. Zhu, D. Roggen, G. Tröster, M. Plotnik, and
J. Hausdorff, “Online Detection of Freezing of Gait with Smartphones and
Machine Learning Techniques,” International Conference on Pervasive
Computing Technologies for Healthcare (PervasiveHealth) and Workshops,
vol. 6, 2012.
[14] F. Sposaro and G. Tyson, “iFall: An android application for fall monitoring
and response,” Engineering in Medicine and Biology Society, vol. 6, 2009.
[15] J. Hu, D. Li, Q. Duan, Y. Han, G. Chen, and X. Si, “Fish species
classification by color, texture and multi-class support vector machine using
computer vision,” Computers and Electronics in Agriculture, vol. 88, 2012.
[16] D. M. J. Tax, “One-Class Classification,” ASCI dissertation series, vol. 65,
2001.
[17] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C.
Platt, “Support Vector Method for Novelty Detection,” NIPS, vol. 12, pp.
582–588, 1999.
[18] Azure Machine Learning Team. ”Microsoft Azure Machine Learning:
Algorithm Cheat Sheet”. http://azure.microsoft.com. [Accessed: 20 March,
2016]. [Online]. Available: http://aka.ms/MLCheatSheet

[19] R. Caruana and A. Niculescu-Mizil, “An Empirical Comparison of
Supervised Learning Algorithms,” Cornell University, Ithaca, USA, Tech.
Rep., 2006.

[20] S. Omar, A. Ngadi, and H. H. Jebur, “Machine Learning Techniques for


Anomaly Detection: An Overview,” Universiti Teknologi Malaysia, Kuala
Lumpur, Malaysia, Tech. Rep., 2013.

[21] F. Yuan and R. L. Cheu, “Incident detection using support vector


machines,” Transportation Research Part C: Emerging Technologies, vol. 11,
no. 3–4, pp. 309 – 328, 2003. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0968090X03000202

[22] R. A. Fisher, “The Use of Multiple Measurements in Taxonomic Problems,”


Annals of Eugenics, vol. 7, no. 2, pp. 179–188, 1936.

[23] I. Steinwart and A. Christmann, Support Vector Machines, 1st ed.


Springer, 2008.

[24] D. Tax and R. Duin, “Support Vector Data Description,” Machine Learning,
vol. 54, pp. 45–66, 2004.

[25] Jalp Systems. ”Jalp – Säkerhetslösningar för oskyddade trafikanter [Jalp –


Safety solutions for vulnerable road users]”. www.jalp.se. (in Swedish).
[Accessed: 28 June, 2016]. [Online]. Available: http://www.jalp.se

[26] S. Candefjord, L. Sandsjö, and B. A. Sjöqvist, “Statistikinsamling och


automatiskt olyckslarm för trafik med fyrhjulingar via en
smartmobilplattform [Statistical collection and automatic incident alarm for
traffic by quadwheeler via a smartphone platform],” SAFER, Chalmers
University of Technology and University of Borås, Tech. Rep., February
2016, (in Swedish).

[27] Apple Inc. ”Numbers for Mac”. www.apple.com. [Accessed: 29 June, 2016].
[Online]. Available: http://www.apple.com/mac/numbers/

[28] D. G. Lister, J. Carl, J. H. Morgan, D. A. Denning, M. Valentovic,
B. Trent, and B. L. Beaver, “Pediatric all-terrain vehicle trauma: a 5-year
state-wide experience,” Journal of Pediatric Surgery, vol. 33, no. 7, pp.
1081–1083, 1998.

[29] MedlinePlus. ”Concussion”. U.S. National Library of Medicine.


www.nlm.nih.gov. [Accessed: 8 March, 2016]. [Online]. Available:
https://www.nlm.nih.gov/medlineplus/concussion.html

[30] Thompsons Solicitors. ”Compensation Claims for Fractured Bones / Broken
Bones”. www.thompsons.law.co.uk. [Accessed: 27 June, 2016]. [Online].
Available: http://www.thompsons.law.co.uk/other-accidents/
compensation-claim-fractured-bone-broken-bone.htm

[31] IDC Research Inc. ”Smartphone OS Market Share, 2015 Q2”. www.idc.com.
[Accessed: 23 May, 2016]. [Online]. Available:
http://www.idc.com/prodserv/smartphone-os-market-share.jsp

[32] Apple Inc., The Swift Programming Language, 2nd ed. Apple Inc., 2015.

[33] ——. ”Measure Energy Impact with Xcode”. developer.apple.com. [Accessed:


3 May, 2016]. [Online]. Available:
https://developer.apple.com/library/watchos/documentation/Performance/
Conceptual/EnergyGuide-iOS/MonitorEnergyWithXcode.html#//
apple ref/doc/uid/TP40015243-CH34-SW1

[34] ——. ”vDSP Programming Guide”. developer.apple.com. [Accessed: 25


February, 2016]. [Online]. Available:
https://developer.apple.com/library/prerelease/ios/documentation/
Performance/Conceptual/vDSP Programming Guide/About vDSP/
About vDSP.html#//apple ref/doc/uid/TP40005147-CH2-SW1

[35] ——. ”Accelerate Framework Reference”. developer.apple.com. [Accessed: 26


February, 2016]. [Online]. Available: https://developer.apple.com/library/
prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef

[36] ——. ”CMMotionActivity Class Reference”. developer.apple.com. [Accessed:


25 February, 2016]. [Online]. Available:
https://developer.apple.com/library/ios/documentation/CoreMotion/
Reference/CMMotionActivity class/

[37] Android Developers. ”ActivityRecognitionResult”. developers.google.com.


[Accessed: 27 June, 2016]. [Online]. Available:
https://developers.google.com/android/reference/com/google/android/gms/
location/ActivityRecognitionResult#public-methods

[38] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, “A Practical Guide to Support
Vector Classification,” National Taiwan University, Dept. of Computer
Science, Taipei 106, Taiwan, Tech. Rep., 2010.

[39] SOS Alarm, “SMS 112 in Sweden,” www.sosalarm.se, 2010, [Accessed: 19


May, 2016]. [Online]. Available: https://www.sosalarm.se/PageFiles/1155/
SMS%20112%20Systembeskrivning EN%20 2 .pdf

[40] Apple Inc. ”CMMotionActivityManager Class Reference”.
developer.apple.com. [Accessed: 3 May, 2016]. [Online]. Available:
https://developer.apple.com/library/ios/documentation/CoreMotion/
Reference/CMMotionActivityManager class/index.html#//apple ref/occ/
instm/CMMotionActivityManager/queryActivityStartingFromDate:toDate:
toQueue:withHandler:

[41] ——. ”MFMessageComposeViewController Class Reference”.


developer.apple.com. [Accessed: 27 May, 2016]. [Online]. Available:
https://developer.apple.com/library/ios/documentation/MessageUI/
Reference/MFMessageComposeViewController class/index.html

[42] ——. ”CFNetwork Programming Guide”. developer.apple.com. [Accessed: 30


May, 2016]. [Online]. Available:
https://developer.apple.com/library/mac/documentation/Networking/
Conceptual/CFNetwork/Introduction/Introduction.html#//apple ref/doc/
uid/TP30001132-CH1-DontLinkElementID 30

[43] Google Inc. ”Android usage statistics”. developer.android.com. [Accessed: 24


March, 2016]. [Online]. Available:
http://developer.android.com/about/dashboards/index.html

[44] X. Ducrohet. ”Android 4.0.3 Platform and Updated SDK tools”. 11


December 2011 [Blog entry]. Android Developers Blog. [Accessed: 24 May,
2016]. [Online]. Available: http://android-developers.blogspot.se/2011/12/
android-403-platform-and-updated-sdk.html

[45] B. Rakowski. ”Get ready for the sweet taste of Android 6.0 Marshmallow”.
5 October 2015 [Blog entry]. Android Official Blog. [Accessed: 24 May,
2016]. [Online]. Available: http://officialandroid.blogspot.se/2015/10/
get-ready-for-sweet-taste-of-android-60.html

[46] Android Developers. ”SmsManager in Android documentation”.


developer.android.com. [Accessed: 27 May, 2016]. [Online]. Available: https:
//developer.android.com/reference/android/telephony/SmsManager.html

[47] European Commission. ”eCall: Time saved = lives saved”. European


Commission. www.ec.europa.eu. [Accessed: 16 February, 2016]. [Online].
Available:
https://ec.europa.eu/digital-agenda/en/ecall-time-saved-lives-saved

[48] S. Fiorella. ”The Insidiousness of Facebook Messenger’s Android Mobile


App Permissions (Updated)”. 11 August 2014 [Blog entry]. The Huffington

Post, The Blog. [Accessed: 26 May, 2016]. [Online]. Available: http://www.
huffingtonpost.com/sam-fiorella/the-insidiousness-of-face b 4365645.html

[49] P. McCrory, K. Johnston, W. Meeuwisse, M. Aubry, R. Cantu, J. Dvorak,


T. Graf-Baumann, J. Kelly, M. Lovell, and P. Schamasch, “Summary and
agreement statement of the 2nd International Conference on Concussion in
Sport, Prague 2004,” British Journal of Sports Medicine, vol. 39, pp.
196–204, 2005.

[50] Apple Inc. ”HealthKit Framework Reference”. developer.apple.com.


[Accessed: 1 June, 2016]. [Online]. Available: https://developer.apple.com/
library/ios/documentation/HealthKit/Reference/HealthKit Framework/

78

SeekBar Tutorial With Example In Android Studio


In Android, SeekBar is an extension of ProgressBar that adds a draggable thumb; a user can touch the thumb and drag left or right to set the value of the current progress.

SeekBar is one of the very useful user interface elements in Android that allows the selection of integer values using a natural user interface. Examples of SeekBar are your device's brightness control and volume control.

Important Note: The attributes of a SeekBar are the same as those of a ProgressBar; the only difference is that the user determines the progress by moving a slider (thumb) in a SeekBar. To add a SeekBar to a layout (XML) file, you can use the <SeekBar> element.

SeekBar code in XML:

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content" />
Table Of Contents

1. How To Add Listener To Notify The Changes In SeekBar
2. Attributes of SeekBar In Android
3. SeekBar Example In Android Studio
4. Custom vertical SeekBar Example In Android Studio

How To Add Listener To Notify The Changes In SeekBar:

SeekBar.OnSeekBarChangeListener is a listener used as a callback that notifies the client when the progress level of the SeekBar has changed. This listener can be used both for user-initiated changes as well as for programmatic changes (made in the XML file or Java class).


seekBarInstanceVariable.setOnSeekBarChangeListener(new OnSeekBarChangeListener() {…}); –
This method is used to get notified of the user's changes/actions in the SeekBar.
Methods To Be Implemented:

To get changes in a SeekBar's progress we need to implement three abstract methods. Below is a detailed description of these three methods:

1. public void onProgressChanged(SeekBar seekBar, int progressValue, boolean fromUser) {…} –
This listener method will be invoked if any change is made in the SeekBar.

2. public void onStartTrackingTouch(SeekBar seekBar) {…} –
This listener method will be invoked at the start of the user's touch event. Whenever a user touches the thumb for dragging, this method will automatically be called.

3. public void onStopTrackingTouch(SeekBar seekBar) {…} –
This listener method will be invoked at the end of the user's touch event. Whenever a user stops dragging the thumb, this method will automatically be called.
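The three callbacks above fire in a fixed order during a single drag: onStartTrackingTouch once, then onProgressChanged repeatedly, then onStopTrackingTouch once. As a minimal framework-free sketch of that order (plain Java; the nested Listener interface here is a hypothetical stand-in for SeekBar.OnSeekBarChangeListener, not the Android type):

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerOrderDemo {
    // Hypothetical stand-in for SeekBar.OnSeekBarChangeListener (framework-free).
    interface Listener {
        void onStartTrackingTouch();
        void onProgressChanged(int progress, boolean fromUser);
        void onStopTrackingTouch();
    }

    // Simulate one drag of the thumb from 'from' to 'to' and record the callback order.
    static List<String> simulateDrag(int from, int to) {
        final List<String> events = new ArrayList<>();
        Listener l = new Listener() {
            public void onStartTrackingTouch() { events.add("start"); }
            public void onProgressChanged(int p, boolean fromUser) { events.add("progress=" + p); }
            public void onStopTrackingTouch() { events.add("stop"); }
        };
        l.onStartTrackingTouch();             // finger down on the thumb
        for (int p = from; p <= to; p++) {
            l.onProgressChanged(p, true);     // fired for every intermediate value
        }
        l.onStopTrackingTouch();              // finger lifted
        return events;
    }

    public static void main(String[] args) {
        System.out.println(simulateDrag(10, 12));
        // prints [start, progress=10, progress=11, progress=12, stop]
    }
}
```

In the real widget, the same anonymous-class pattern is used; only the receiver (the SeekBar) and the touch events come from the framework.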
getMax():

We can get the maximum value of a SeekBar programmatically, i.e. in the Java class. This method returns an integer value. The code below gets the maximum value from a SeekBar.

SeekBar simpleSeekBar = (SeekBar) findViewById(R.id.simpleSeekBar); // initiate the Seek bar
int maxValue = simpleSeekBar.getMax(); // get maximum value of the Seek bar

getProgress():

We can get the current progress value from a SeekBar in the Java class using the getProgress() method. This method returns an integer value. The code below is used to get the current progress value from a SeekBar.

SeekBar simpleSeekBar = (SeekBar) findViewById(R.id.simpleSeekBar); // initiate the Seek bar
int seekBarValue = simpleSeekBar.getProgress(); // get progress value from the Seek bar

Attributes of SeekBar In Android:

Now let's discuss the important attributes that help us configure a SeekBar in the XML file (layout).

1. id: The id attribute uniquely identifies a SeekBar.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content" /> <!-- id of a Seek bar used to uniquely identify it -->

2. max: The max attribute defines the maximum value a SeekBar can take. It must be an integer value like 10, 20, 100, 200 etc. We can set the max value in the XML file as well as in the Java class. By default, a SeekBar takes a maximum value of 100.

Below we set a maximum value of 150 for a SeekBar.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="150"/><!-- set 150 maximum value for the progress -->


Setting max value of SeekBar In Java Class:

/* Add in onCreate() function after setContentView() */
SeekBar simpleSeekBar = (SeekBar) findViewById(R.id.simpleSeekBar); // initiate the Seek bar
simpleSeekBar.setMax(150); // 150 maximum value for the Seek bar

3. progress: progress is an attribute of SeekBar used to define the default progress value, between 0 and max. It must be an integer value.

Below we set a max value of 200 and then a default progress value of 50.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="200"
    android:progress="50"/><!-- set 50 default progress value -->

Setting progress In Java Class:

SeekBar simpleSeekBar = (SeekBar) findViewById(R.id.simpleSeekBar); // initiate the Seek bar

simpleSeekBar.setMax(200); // 200 maximum value for the Seek bar

simpleSeekBar.setProgress(50); // 50 default progress value
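Since progress always ranges from 0 to max, mapping it to a percentage (as a brightness or volume control would) is a one-line computation. A minimal framework-free sketch (plain Java; the helper name toPercent is our own illustration, not an Android API):

```java
public class SeekBarMath {
    // Convert a SeekBar progress value in the range 0..max to a percentage.
    static int toPercent(int progress, int max) {
        if (max <= 0) return 0;                 // guard against a zero or invalid max
        return Math.round(100f * progress / max);
    }

    public static void main(String[] args) {
        // With max 200 and a default progress of 50, the thumb sits at 25%.
        System.out.println(toPercent(50, 200)); // prints 25
    }
}
```

The same arithmetic, run inside onProgressChanged, is how a progress value is typically turned into a user-facing label.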

4. progressDrawable: The progressDrawable attribute is used in Android to set a custom drawable XML for the progress mode of a SeekBar. A progress drawable is used when we need a custom progress look for a SeekBar.

Below we set a custom gradient drawable for the progress mode of a SeekBar.

Step 1: Add this code in activity_main.xml or main.xml

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="200"
    android:progress="50"
    android:progressDrawable="@drawable/custom_progress"/><!-- set custom progress drawable for the progress mode -->

Step 2: Create a new drawable resource XML in the drawable folder and name it custom_progress. Here add the below code, which creates a gradient effect in the SeekBar.

<?xml version="1.0" encoding="utf-8"?>


<layer-list xmlns:android="http://schemas.android.com/apk/res/android" >
<item>
<shape>
<gradient
android:endColor="#fff"
android:startColor="#f00"
android:useLevel="true" />
</shape>
</item>

</layer-list>

5. indeterminate: The indeterminate attribute is used in Android to enable the indeterminate mode of a SeekBar. In this mode a SeekBar shows a cyclic animation without an indication of progress. This mode is used in an application when we don't know how much of the work has been done. Here the actual progress is hidden from the user.

Below we set the indeterminate to true.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="200"
    android:indeterminate="true"/><!-- enable indeterminate mode of the seek bar -->

6. background: The background attribute of a SeekBar is used to set its background. We can set a color or a drawable as the background of a SeekBar. We can also set the background color in the Java class.

Below we set a green color for the background of a SeekBar.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="200"
    android:indeterminate="true"
    android:background="#0F0"/><!-- set green color in the background of seek bar-->

Setting Background of SeekBar In Java class:

SeekBar simpleSeekBar = (SeekBar) findViewById(R.id.simpleSeekBar); // initiate the Seek bar

simpleSeekBar.setBackgroundColor(Color.GREEN); // green background color for the Seek bar

7. padding: The padding attribute of a SeekBar is used to set the padding from the left, right, top or bottom:
paddingRight: sets the padding from the right side of the SeekBar.
paddingLeft: sets the padding from the left side of the SeekBar.
paddingTop: sets the padding from the top side of the SeekBar.
paddingBottom: sets the padding from the bottom side of the SeekBar.
padding: sets the padding from all sides of the SeekBar.
Below we set a 20dp padding from the top of the SeekBar.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="200"
    android:progress="150"
    android:background="#34A853"
    android:paddingTop="20dp"/><!-- set 20dp padding from the top of the seek bar -->

8. thumb: The thumb attribute is used in a SeekBar to draw the thumb. We can use an image or a drawable for the thumb.

Below is an example code in which we set a drawable icon for the thumb of the SeekBar.

First download the thumb icon and save it in the drawable folder of your project.

<SeekBar
    android:id="@+id/simpleSeekBar"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:max="200"
    android:progress="100"
    android:thumb="@drawable/thumb"/><!-- set a thumb drawable icon for the seek bar -->
SeekBar Example In Android Studio:

Example 1: In the below example of a SeekBar in Android we display a simple SeekBar using the different attributes discussed earlier in this post. We also register a SeekBar changed listener, which is used to get the changes in the progress of the SeekBar. After getting a change, the changed progress value is displayed using a Toast. Below is the final output and the step by step tutorial:

Important Note: Don't miss the 2nd example of a custom SeekBar in Android, which is discussed right after this example.

Step 1: Create a new project and name it SeekBarExample

Step 2: Open res -> layout -> activity_main.xml (or) main.xml and add the following code:

In this step we open the XML file and add the code for displaying a SeekBar using its different attributes like max, default progress and a few more.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context=".MainActivity">

<SeekBar
android:id="@+id/simpleSeekBar"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_marginTop="20dp"
android:max="200"
android:progress="60"
/>

</RelativeLayout>

Step 3: Open src -> package -> MainActivity.java

In this step we open MainActivity and add the code to initiate the SeekBar, and then register a SeekBar changed listener for getting the changes in the progress of the SeekBar. Using this event listener we get the current value of the SeekBar, and when the user stops the tracking touch, the progress value is displayed using a Toast.

package example.gb.seekbarexample;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.support.v7.widget.ButtonBarLayout;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.SeekBar;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends AppCompatActivity {

Button submitButton;
SeekBar simpleSeekBar;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// initiate views
simpleSeekBar=(SeekBar)findViewById(R.id.simpleSeekBar);
        // perform seek bar change listener event used for getting the progress value
        simpleSeekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
            int progressChangedValue = 0;

            public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
                progressChangedValue = progress;
            }

            public void onStartTrackingTouch(SeekBar seekBar) {
                // TODO Auto-generated method stub
            }

            public void onStopTrackingTouch(SeekBar seekBar) {
                Toast.makeText(MainActivity.this, "Seek bar progress is :" + progressChangedValue,
                        Toast.LENGTH_SHORT).show();
            }
        });
    }
}

Custom vertical SeekBar Example In Android Studio:

In the 2nd example of the SeekBar we display a custom vertical SeekBar using its different attributes. Similar to the first example, we register a SeekBar changed listener, which is used to get the changes made to the progress, and the changed progress value is then displayed using a Toast.

Step 1: Create a new project and name it SeekBarExample

Step 2: Open res -> layout -> activity_main.xml (or) main.xml and add the following code:

In this step we open the XML file and add the code for displaying a vertical SeekBar using its different attributes like max, default progress etc. In this XML, to display a vertical SeekBar we use the rotation attribute and set its value to 270.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context=".MainActivity">

<SeekBar
android:id="@+id/customSeekBar"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:max="200"
android:progress="40"
android:thumb="@drawable/thumb_icon"
android:rotation="270"
android:progressDrawable="@drawable/custom_progress"/>

</RelativeLayout>

Step 3: Create an xml file in drawable -> custom_progress.xml

In this step we create a custom drawable XML for the SeekBar. In this XML we create a layer list, in which we create an item and then set the gradient colors for our custom SeekBar.

<?xml version="1.0" encoding="UTF-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item>

<shape>
<gradient android:useLevel="true"
android:startColor="#560"
android:endColor="#454455"/>
</shape>
</item>
</layer-list>

Step 4: Open src -> package -> MainActivity.java

In this step we open MainActivity and add the code to initiate the vertical SeekBar, and then register a SeekBar changed listener for getting the changes in the progress of the SeekBar. Using this event listener we get the current progress value of the SeekBar, and when the user stops the tracking touch, that progress value is displayed using a Toast.

package example.gb.seekbarexample;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.SeekBar;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends AppCompatActivity {

Button submitButton;
SeekBar customSeekBar;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// initiate views
customSeekBar =(SeekBar)findViewById(R.id.customSeekBar);
        // perform seek bar change listener event used for getting the progress value
        customSeekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
            int progressChangedValue = 0;

            public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
                progressChangedValue = progress;
            }

            public void onStartTrackingTouch(SeekBar seekBar) {
                // TODO Auto-generated method stub
            }

            public void onStopTrackingTouch(SeekBar seekBar) {
                Toast.makeText(MainActivity.this, "Seek bar progress is :" + progressChangedValue,
                        Toast.LENGTH_SHORT).show();
            }
        });
    }
}

© Abhi Android | Terms | Privacy Policy



SELF-DEFENSE, WHEN IT IS APPLIED? With special reference to Nepalese Law

DECEMBER 9, 2016 / RAVI143SHANKAR

ESSAY PAPER ON SELF-DEFENSE, WHEN IT IS APPLIED

Abstract.

Self-defense is the use of necessary force to prevent oneself from any harm or danger. Self-defense has various principles, like balance in the use of force, threat of force, third party intervention, the police's use of force in the name of self-defense, and defense against a legally immune person. There are various conditions in which the theory of self-defense is applied. Nepalese law separately mentions the various conditions of self-defense. The precedents of the Supreme Court on self-defense are studied.

The essayist will clarify the basic concept of self-defense. Various ancient theories of self-defense will be highlighted in brief. The paper will deal with the various conditions in which self-defense is applied. Various principles related with self-defense are discussed in this essay. Various laws in Nepal related with self-defense will be explained. The paper will not go beyond Nepalese law and the Nepalese context. The objectives of this essay are to find out the various conditions in which self-defense is applied, to know the various laws related with self-defense, and to find out the various precedents made by the Supreme Court while dealing with cases of self-defense. Due to time limitations, no law other than Nepalese law is mentioned in the paper. To support the argument the essayist has brought a number of cases from the Nepal Kanoon Patrika related to self-defense. The literature review includes books, the Nepal Kanoon Patrika, the Muluki Ain, the constitution, articles and newspapers.

Introduction

Self-defense is the use of necessary force to prevent oneself from any harm or danger. Black's Law Dictionary defines self-defense as the use of force to protect oneself, one's own family or one's property from a real or threatened attack. Generally, a person is justified in using a reasonable amount of force in self-defense if he or she believes that the danger of bodily harm is imminent and that force is necessary to avoid this danger.[1] Self-defense is also termed defense of self. Self-defense provides the right to take life based on the necessity, real or reasonably apparent, of killing an unlawful aggressor to save oneself from imminent death or great bodily harm.[2]

Concepts of self-defense

Various concepts of self-defense existed in various eras. In Anglo-American law, any killing, even in self-defense, was blamable. Once the accused was found liable, regardless of blameworthiness, the remedy was monetary compensation to, or personal retribution wrought by, the victim's family.

In Roman law, under the principle of dominium, an attack on a member of the family or on the property it owned was a personal attack on the pater familias.[3]

Manusmriti holds the concept that every state has a duty to protect its citizens' lives from any harm or danger. But no nation, however well developed, can at all times put police behind every citizen. So individuals are naturally permitted to protect their lives.[4]

Thomas Hobbes in his Leviathan (1651) argued that although some men may be stronger or more intelligent than others in their natural state, none are so strong as to be beyond a fear of violent death, which therefore justifies self-defense as man's highest necessity.[5] In 1769, Blackstone in his Social Theory of Self-defense explained that justifiable homicide could only be killings required by law that promoted the social good. Personal killings in self-defense could only be excused because they could not be absolutely free from guilt.[6]

The principles of self-defense

Private defense is entertainable only if it is within the limit prescribed by law. There can be no self-defense if one goes beyond the principles set by criminal law. There are five different principles of self-defense.

Balance in the use of force

One cannot use force out of proportion to what self-defense requires. The degree of force reflects the intention and motive of the accused[7]. The law limits the force to be used by the defender against the victim. For example, one should not react with a gun against a person who is about to attack with a stick. If there is imbalance in the use of force against the victim, the perpetrator may be held guilty of a provocation-based incident and there can be no self-defense.

Threat of force

One can use necessary force against another in the name of self-defense only so long as that person poses a threat to one's life or property. For example, if B poses a threat to A's property, A can use force to protect his/her property from B, and if B is killed in the course of protecting the property, A gets immunity from criminal liability. A can use force only while B threatens A's property; if B poses no threat to A's property, A cannot use any kind of force against B.

Third party intervention

If A is being badly beaten by B and in the meantime C comes and prevents B from beating A, and B is killed by C in the course, then C is not held criminally liable. This is because the law imposes a legal duty upon the surrounding persons to help the victim (section 19 in the homicide chapter and section 16 in the theft chapter). Parents can come to protect their children from danger in the name of self-defense. In the case of rape, any person can help the one who is being raped, and if the rapist dies while that person is helping, the helper will be as immune as the victim.

Police exercising self-defense

No person has the right to attack a police officer in the name of self-defense without giving him/her the opportunity to show the authority for the interference against the person or property. Upon such an attack the police can counter-attack in the name of self-defense. For example, suppose A is a police officer who comes to arrest D. D cannot use any kind of force before A shows the authority to arrest D. If D attacks the police officer, the police can use force in the name of self-defense.

Private-defense against a legally immune person

The law recognizes the insane, drunken persons and children as immune from criminal liability. But if such an immune person proves to be a threat to any person or property, the victim can use necessary force against him/her. For example, suppose a 10-year-old boy holds a loaded gun in a public park and is ready to shoot. In this case one can use necessary force to prevent the boy from shooting, and if the boy gets hurt or dies in the course of being stopped, the one who tried to stop him will be legally immune from criminal liability.[8]

Conditions for the applicability of self-defense

In accordance with the principles of criminal justice, Nepalese law has categorized the situations in which self-defense is applied. In other words, there are various conditions approved by Nepalese law in which one is legally immune from criminal liability in the name of self-defense, even if a homicide has occurred. Homicide is strictly prohibited by law, but in the case of self-defense the homicide is not regarded as liable for punishment. These include private defense, defense of chastity, defense in reference to kidnapping, hostage taking and hurt, and defense of private as well as public property.[9]

Private-defense

Number 7 of the chapter on Homicide and number 4 of the chapter on Hurt/Battery of the Muluki Ain have guaranteed the immunity of people from criminal liability in the name of self-defense.[10] The nature of being itself prompts a person to counter-attack against another person who tries to hurt him. For example, suppose Z is sitting in his room and X comes in. X tries to hurt Z, and to save himself from X's hurt Z defends against X. If X dies in the meantime, then Z will be immune from criminal liability. This is so because Z has not done any illegal act; it is X who initiated the conflict. So Z is criminally immune.

In the case of Dhanabahadur Rai v HMG, NKP 2072, the court defined that 'to get self-defense the defender must not be the initiator of the action'. The fact of the case is that on the date 2067/8/21 ManBahadur was returning home after harvesting crops from the field. On the way he meets DhanaBahadur, who makes ManBahadur ready to play wrestling. While both of them were wrestling, ManBahadur tries to grasp the khukuri, but DhanaBahadur manages to grab the khukuri faster than ManBahadur and hits ManBahadur nine times.

DhanaBahadur claims self-defense in the court, but the court examines the very facts of the case and holds DhanaBahadur liable for homicide on the ground that DhanaBahadur himself initiated the scene. It was not ManBahadur who created the situation to wrestle. The court didn't find any proof matching the provision of the Muluki Ain chapter on Homicide, 13(1).[11]

Defense of property

Self-defense can be applied in the name of defense of property. The Muluki Ain allows a person to use necessary force if his/her property is in any danger from another person. If the person who is interfering dies while the owner is defending, the owner of the property won't bear criminal liability.

For example, if M has a fruit farm and N comes to cut the trees of M's farm, and M kills N while trying to stop N, then M will get defense of property. If N does not pose any threat to M's farm, then M cannot do any harm to N.

Defense of public property

Nobody can use force against any public officers who are there for the security of public property. And if anybody attacks such public officers, the public officers have the right to use necessary force to prevent the destruction of public property. The defense of public property applies in the case of private security guards too. The case of HMG v Jogendra Bahadur Karki relates to the defense of public property; the Supreme Court made its decision on the ground of the defense of public property. As per the facts of the case, some 30-40 men come and try to threaten the police and snatch a police rifle. During the course of snatching the rifle, Dhanabahadur hits a person named Prembahadur. Prembahadur is taken to the custody room and in the morning he dies. The Supreme Court gives Dhanbahadur self-defense in the name of public property. The main ground for the decision is that Dhanabahadur's action was rational because an attempt was made to loot the rifle from his hand.

Defense of chastity and right to retaliation

The law has made provision for the use of necessary force by the person whose chastity is in danger against the chastity destroyer. Number 8 of the chapter on Rape of the Muluki Ain provided the right to defend one's own chastity as well as the right to retaliate within one hour of rape, but in 2072 a new law came into existence which amended the provision of the Muluki Ain's number 8 of the chapter on Rape. This new law reduces the time allowed for the killing from one hour to immediately after the incident. In the case of HMG v Dhanamaya Chhetrini a mother, Dhanamaya, kills her own son because the son raped her.[12] The court gives Dhanamaya defense of chastity and right to retaliation as mentioned in the Muluki Ain chapter on Rape, number 8.[13]

Private defense in reference to cow killing

Article 1, sub-article (9) of the Constitution of Nepal declared the cow the national animal. Cows and oxen are connected with Hindu culture and religion, and the Muluki Ain prohibits the killing of cows. Cow killing is punishable by up to 12 years of imprisonment. If any person with a weapon tries to kill a cow, another person should warn that person not to kill the cow, and if the person with the weapon attacks during the course, the one who is warning can use necessary force against that person. For example, if A tries to kill a cow, X warns A not to kill the cow, and X kills A while trying to stop A from killing the cow, then X will get private defense in reference to cow killing.

Self-defense in reference to assault and battery

The Muluki Ain 2020 chapter on Hurt/Battery defines assault thus: "If a person causes bloodshed (Ragatpachhe), wound, injury, grievous hurt (Angabhanga) or causes any pain or harm to the body of another person, the person shall be deemed to have committed the offence of hurt/battery".[14] Number 4 of the Muluki Ain chapter on Assault and Battery has made the provision that one shall be free from criminal liability in the name of self-defense. An example will make it clearer: person A is innocent, and person B comes and harms A. A warns B not to continue harming A, but B goes on. In the meantime, in order to protect him/herself from B's hurt, A counter-attacks B, and due to severe injury B dies. In this case A will be immune from criminal liability because A did not have the intention to kill B; it was B who tried to harm A in the beginning.

Analysis/conclusion

Self-defense is applied in situations such as private defense, defense of chastity, defense
of private as well as public property, defense against battery/assault, and defense in the
context of cow killing. Outside these situations the theory of self-defense is not
accepted. Self-defense may be claimed on behalf of oneself as well as of another person.[15] The
court has to observe minutely the overall principles of self-defense while dealing with a case
of self-defense. Self-defense is the actus reus of necessity. Our country has enacted
various laws related to self-defense. The laws of self-defense do not sit under a single
topic of self-defense but under the Muluki Ain's chapters on Homicide, Theft, Quadruped,
Battery/Assault and Rape; they exist in scattered form.

The Supreme Court is the organ of the state whose verdict, once given, cannot be
repealed, annulled or changed, so the court should be very cautious when deciding. There are
various precedents in which the Supreme Court has made sound decisions, as in the cases of
DhanaBahadur Rai v HMG[16], HMG v Dhanamaya Chhetrini[17], HMG v JogendraBahadur Karki[18],
and others. But there are also cases in which the Supreme Court has failed to do justice with
respect to self-defense, for example Bal Manjari v HMG[19], Hanif Ansari v HMG[20],
Shiva Mahato v HMG[21] and others. There is a very important maxim in the criminal
justice system: let a thousand criminals go free rather than punish a single innocent person.

The theory of self-defense is not applied when it goes beyond the principles of self-
defense.

Bibliography

Aparadh Samhita Tathaa Faujdari Karyabidhi Samhita (Parimarjan Tatha Samsodhit Masouda),
Foujdari Kanoon Parimarjan Tathaa Sudhar Karyadal 2065. New Era Offset Press, Page 6

Bryan A, Black’s Law Dictionary, 8th edition, page 1390.

Definition, Theory, http://www.newworldencyclopedia.org/entry/Self-defense (Accessed
on April 17th 2016)

Dhanabahadur Rai V HMG NKP 2072, 9298

George E.Dix and M.Michael Sharlot, Criminal Law Cases and Materials, 4th edition, West
publishing co. ST.Paul, MINN, 1966, page 763, para-3.

GyaIndra Bahadur Shrestha and keshari Raj Pandit, An Introduction to Criminal Law, 2nd
edition, Nepal Law Books Company pvt limited, page 215

Justification: Self-Defense – History, http://law.jrank.org/pages/1466/Justification-Self-Defense-History.html
(Accessed on 18th July, 2016).

Muluki Ain 2020 (1963)

Prof. Madhav Pd. Acharya and Ganesh Bhattarai, Criminal Jurisprudence, Kathmandu,
2065

[1] Bryan A, Black’s Law Dictionary, 8th edition, page 1390.

[2] George E.Dix and M.Michael Sharlot, Criminal Law Cases and Materials, 4th edition,
West publishing co. ST.Paul, MINN, 1966, page 763, para-3.

[3] Definition, Theory, http://www.newworldencyclopedia.org/entry/Self-defense (Accessed on April 17th 2016)

[4] GyaIndra Bahadur Shrestha and keshari Raj Pandit, An Introduction to Criminal Law,
2nd edition, Nepal Law Books Company private limited, page 215

[5] Ibid

[6] Justification: Self-Defense – History, http://law.jrank.org/pages/1466/Justification-Self-Defense-History.html (Accessed on 18th July, 2016)

[7] Dhanabahadur Rai V HMG NKP 2072, 9298

[8] Prof. Madhav Pd. Acharya and Ganesh Bhattarai, Criminal Jurisprudence, Kathmandu,
2065

[9] Aparadh Samhita Tathaa Faujdari Karyabidhi Samhita (Parimarjan Tatha Samsodhit
Masouda), Foujdari Kanoon Parimarjan Tathaa Sudhar Karyadal 2065. New Era Offset Press,
Page 6

[10] Muluki Ain 2020(1963)

[11] Ibid (n7)

[12] HMG V Dhanamaya Chhetrini NKP 2031, 123

[13] Ibid (n8)

[14] Ibid (n8), p. 356

[15] Ibid (n9)

[16] Ibid (n7)

[17] HMG V Dhanamaya Chhetrini NKP 2031, 123

[18] HMG V JogendraBahadur Karki NKP 2033, 24

[19] HMG V Bal Manjari and others NKP 2040, 297

[20] HMG V Hanif Ansari NKP2044, 3290

[21] HMG V Shiva Mahato NKP 2046, 1039



Android Developers  Docs  Guides

Motion sensors
The Android platform provides several sensors that let you monitor the motion of a device.
The sensors' possible architectures vary by sensor type:

The gravity, linear acceleration, rotation vector, significant motion, step counter, and step detector sensors are
either hardware-based or software-based.

The accelerometer and gyroscope sensors are always hardware-based.

Most Android-powered devices have an accelerometer, and many now include a gyroscope. The availability of the
software-based sensors is more variable because they often rely on one or more hardware sensors to derive their data.
Depending on the device, these software-based sensors can derive their data either from the accelerometer and
magnetometer or from the gyroscope.
Motion sensors are useful for monitoring device movement, such as tilt, shake, rotation, or swing. The movement is
usually a reflection of direct user input (for example, a user steering a car in a game or a user controlling a ball in a
game), but it can also be a reflection of the physical environment in which the device is sitting (for example, moving
with you while you drive your car). In the first case, you are monitoring motion relative to the device's frame of reference
or your application's frame of reference; in the second case you are monitoring motion relative to the world's frame of
reference. Motion sensors by themselves are not typically used to monitor device position, but they can be used with
other sensors, such as the geomagnetic field sensor, to determine a device's position relative to the world's frame of
reference (see Position Sensors for more information).

All of the motion sensors return multi-dimensional arrays of sensor values for each SensorEvent . For example, during a
single sensor event the accelerometer returns acceleration force data for the three coordinate axes, and the gyroscope
returns rate of rotation data for the three coordinate axes. These data values are returned in a float array ( values )
along with other SensorEvent parameters. Table 1 summarizes the motion sensors that are available on the Android
platform.

Table 1. Motion sensors that are supported on the Android platform.

Sensor | Sensor event data | Description | Units of measure

TYPE_ACCELEROMETER
  SensorEvent.values[0]   Acceleration force along the x axis (including gravity).   m/s2
  SensorEvent.values[1]   Acceleration force along the y axis (including gravity).
  SensorEvent.values[2]   Acceleration force along the z axis (including gravity).

TYPE_ACCELEROMETER_UNCALIBRATED
  SensorEvent.values[0]   Measured acceleration along the X axis without any bias compensation.   m/s2
  SensorEvent.values[1]   Measured acceleration along the Y axis without any bias compensation.
  SensorEvent.values[2]   Measured acceleration along the Z axis without any bias compensation.
  SensorEvent.values[3]   Measured acceleration along the X axis with estimated bias compensation.
  SensorEvent.values[4]   Measured acceleration along the Y axis with estimated bias compensation.
  SensorEvent.values[5]   Measured acceleration along the Z axis with estimated bias compensation.

TYPE_GRAVITY
  SensorEvent.values[0]   Force of gravity along the x axis.   m/s2
  SensorEvent.values[1]   Force of gravity along the y axis.
  SensorEvent.values[2]   Force of gravity along the z axis.

TYPE_GYROSCOPE
  SensorEvent.values[0]   Rate of rotation around the x axis.   rad/s
  SensorEvent.values[1]   Rate of rotation around the y axis.
  SensorEvent.values[2]   Rate of rotation around the z axis.

TYPE_GYROSCOPE_UNCALIBRATED
  SensorEvent.values[0]   Rate of rotation (without drift compensation) around the x axis.   rad/s
  SensorEvent.values[1]   Rate of rotation (without drift compensation) around the y axis.
  SensorEvent.values[2]   Rate of rotation (without drift compensation) around the z axis.
  SensorEvent.values[3]   Estimated drift around the x axis.
  SensorEvent.values[4]   Estimated drift around the y axis.
  SensorEvent.values[5]   Estimated drift around the z axis.

TYPE_LINEAR_ACCELERATION
  SensorEvent.values[0]   Acceleration force along the x axis (excluding gravity).   m/s2
  SensorEvent.values[1]   Acceleration force along the y axis (excluding gravity).
  SensorEvent.values[2]   Acceleration force along the z axis (excluding gravity).

TYPE_ROTATION_VECTOR
  SensorEvent.values[0]   Rotation vector component along the x axis (x * sin(θ/2)).   Unitless
  SensorEvent.values[1]   Rotation vector component along the y axis (y * sin(θ/2)).
  SensorEvent.values[2]   Rotation vector component along the z axis (z * sin(θ/2)).
  SensorEvent.values[3]   Scalar component of the rotation vector (cos(θ/2)).1

TYPE_SIGNIFICANT_MOTION
  N/A   N/A   N/A

TYPE_STEP_COUNTER
  SensorEvent.values[0]   Number of steps taken by the user since the last reboot while the sensor was activated.   Steps

TYPE_STEP_DETECTOR
  N/A   N/A   N/A

1 The scalar component is an optional value.

The rotation vector sensor and the gravity sensor are the most frequently used sensors for motion detection and
monitoring. The rotational vector sensor is particularly versatile and can be used for a wide range of motion-related
tasks, such as detecting gestures, monitoring angular change, and monitoring relative orientation changes. For
example, the rotational vector sensor is ideal if you are developing a game, an augmented reality application, a
2-dimensional or 3-dimensional compass, or a camera stabilization app. In most cases, using these sensors is a better
choice than using the accelerometer and geomagnetic field sensor or the orientation sensor.

Android Open Source Project sensors

The Android Open Source Project (AOSP) provides three software-based motion sensors: a gravity sensor, a linear
acceleration sensor, and a rotation vector sensor. These sensors were updated in Android 4.0 and now use a device's
gyroscope (in addition to other sensors) to improve stability and performance. If you want to try these sensors, you can
identify them by using the getVendor() method and the getVersion() method (the vendor is Google LLC; the version
number is 3). Identifying these sensors by vendor and version number is necessary because the Android system
considers these three sensors to be secondary sensors. For example, if a device manufacturer provides their own gravity
sensor, then the AOSP gravity sensor shows up as a secondary gravity sensor. All three of these sensors rely on a
gyroscope: if a device does not have a gyroscope, these sensors do not show up and are not available for use.

Use the gravity sensor

The gravity sensor provides a three-dimensional vector indicating the direction and magnitude of gravity. Typically, this
sensor is used to determine the device's relative orientation in space. The following code shows you how to get an
instance of the default gravity sensor:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY)

The units are the same as those used by the acceleration sensor (m/s2), and the coordinate system is the same as the
one used by the acceleration sensor.

Note: When a device is at rest, the output of the gravity sensor should be identical to that of the accelerometer.
Use the linear accelerometer
The linear acceleration sensor provides you with a three-dimensional vector representing acceleration along each device
axis, excluding gravity. You can use this value to perform gesture detection. The value can also serve as input to an
inertial navigation system, which uses dead reckoning. The following code shows you how to get an instance of the
default linear acceleration sensor:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION)

Conceptually, this sensor provides you with acceleration data according to the following relationship:

linear acceleration = acceleration - acceleration due to gravity

You typically use this sensor when you want to obtain acceleration data without the influence of gravity. For example,
you could use this sensor to see how fast your car is going. The linear acceleration sensor always has an offset, which
you need to remove. The simplest way to do this is to build a calibration step into your application. During calibration
you can ask the user to set the device on a table, and then read the offsets for all three axes. You can then subtract that
offset from the acceleration sensor's direct readings to get the actual linear acceleration.

The sensor coordinate system is the same as the one used by the acceleration sensor, as are the units of measure
(m/s2).
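The calibration step described above reduces to simple arithmetic: average a window of readings captured while the device is at rest, then subtract that per-axis offset from later readings. The following is a standalone sketch in plain Java (no Android APIs; the class name, helper names, and sample values are all hypothetical illustrations, not part of the platform):

```java
// Hypothetical standalone sketch of the calibration step described above:
// average readings captured while the device rests on a table, then
// subtract that per-axis offset from subsequent readings.
public class LinearAccelCalibration {
    // Average each axis over a batch of resting samples to estimate the offset.
    static float[] averageOffset(float[][] restSamples) {
        float[] offset = new float[3];
        for (float[] s : restSamples)
            for (int i = 0; i < 3; i++) offset[i] += s[i];
        for (int i = 0; i < 3; i++) offset[i] /= restSamples.length;
        return offset;
    }

    // Subtract the stored offset from a live reading.
    static float[] applyOffset(float[] reading, float[] offset) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) out[i] = reading[i] - offset[i];
        return out;
    }

    public static void main(String[] args) {
        // Simulated resting readings with a constant sensor offset of
        // (0.12, -0.05, 0.30) m/s^2.
        float[][] rest = {
            {0.12f, -0.05f, 0.30f},
            {0.12f, -0.05f, 0.30f},
            {0.12f, -0.05f, 0.30f},
        };
        float[] offset = averageOffset(rest);
        // A later reading while the device accelerates at 1.0 m/s^2 on x.
        float[] corrected = applyOffset(new float[] {1.12f, -0.05f, 0.30f}, offset);
        System.out.println(Math.abs(corrected[0] - 1.0f) < 1e-6);
    }
}
```

In a real app the resting samples would come from the sensor callback while the user holds the device still, as the text suggests.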

Use the rotation vector sensor

The rotation vector represents the orientation of the device as a combination of an angle and an axis, in which the
device has rotated through an angle θ around an axis (x, y, or z). The following code shows you how to get an instance
of the default rotation vector sensor:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)

The three elements of the rotation vector are expressed as follows:

x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)

Where the magnitude of the rotation vector is equal to sin(θ/2), and the direction of the rotation vector is equal to the
direction of the axis of rotation.
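Numerically, this relationship can be checked outside Android. The following is a plain-Java sketch (no Android APIs; the class name and the example axis/angle are ours): the rotation vector's components, together with the scalar cos(θ/2), should form a unit quaternion, and the magnitude of the vector part should equal sin(θ/2).

```java
// Hypothetical standalone sketch: build the rotation vector for a rotation
// of theta around a unit axis and confirm that, together with the scalar
// cos(theta/2), it forms a unit quaternion.
public class RotationVectorMath {
    // Returns {x*sin(t/2), y*sin(t/2), z*sin(t/2)} for unit axis (x, y, z).
    static double[] rotationVector(double x, double y, double z, double theta) {
        double s = Math.sin(theta / 2);
        return new double[] { x * s, y * s, z * s };
    }

    public static void main(String[] args) {
        // Example: a 90-degree rotation around the z axis.
        double theta = Math.PI / 2;
        double[] v = rotationVector(0, 0, 1, theta);
        double scalar = Math.cos(theta / 2); // the optional values[3]
        double norm = scalar * scalar + v[0] * v[0] + v[1] * v[1] + v[2] * v[2];
        System.out.println(Math.abs(norm - 1.0) < 1e-12);   // unit quaternion
        // Magnitude of the vector part equals sin(theta/2).
        double mag = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        System.out.println(Math.abs(mag - Math.sin(theta / 2)) < 1e-12);
    }
}
```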
The three elements of the rotation vector are equal to the last three components of a unit quaternion (cos(θ/2),
x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)). Elements of the rotation vector are unitless. The x, y, and z axes are defined in the
same way as the acceleration sensor. The reference coordinate system is defined as a direct orthonormal basis (see
figure 1). This coordinate system has the following characteristics:
X is defined as the vector product Y x Z. It is tangential to the ground at the device's current location and points
approximately East.

Y is tangential to the ground at the device's current location and points toward the geomagnetic North Pole.

Z points toward the sky and is perpendicular to the ground plane.

Figure 1. Coordinate system used by the rotation vector sensor.

For a sample application that shows how to use the rotation vector sensor, see RotationVectorDemo.java.

Use the significant motion sensor

The significant motion sensor triggers an event each time significant motion is detected and then it disables itself. A
significant motion is a motion that might lead to a change in the user's location; for example walking, biking, or sitting
in a moving car. The following code shows you how to get an instance of the default significant motion sensor and how
to register an event listener:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val mSensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_SIGNIFICANT_MOTION)
val triggerEventListener = object : TriggerEventListener() {
    override fun onTrigger(event: TriggerEvent?) {
        // Do work
    }
}

mSensor?.also { sensor ->
    sensorManager.requestTriggerSensor(triggerEventListener, sensor)
}

For more information, see TriggerEventListener.

Use the step counter sensor

The step counter sensor provides the number of steps taken by the user since the last reboot while the sensor was
activated. The step counter has more latency (up to 10 seconds) but more accuracy than the step detector sensor.

 Note: You must declare the ACTIVITY_RECOGNITION permission in order for your app to use this sensor on devices
running Android 10 (API level 29) or higher.

The following code shows you how to get an instance of the default step counter sensor:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_COUNTER)

To preserve the battery on devices running your app, you should use the JobScheduler class to retrieve the current
value from the step counter sensor at a specific interval. Although different types of apps require different
sensor-reading intervals, you should make this interval as long as possible unless your app requires real-time data
from the sensor.

Use the step detector sensor

The step detector sensor triggers an event each time the user takes a step. The latency is expected to be below 2
seconds.

 Note: You must declare the ACTIVITY_RECOGNITION permission in order for your app to use this sensor on devices
running Android 10 (API level 29) or higher.
The following code shows you how to get an instance of the default step detector sensor:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR)

Work with raw data

The following sensors provide your app with raw data about the linear and rotational forces being applied to the device.
In order to use the values from these sensors effectively, you need to filter out factors from the environment, such as
gravity. You might also need to apply a smoothing algorithm to the trend of values to reduce noise.

Use the accelerometer

An acceleration sensor measures the acceleration applied to the device, including the force of gravity. The following
code shows you how to get an instance of the default acceleration sensor:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

Conceptually, an acceleration sensor determines the acceleration that is applied to a device (Ad) by measuring the
forces that are applied to the sensor itself (Fs) using the following relationship:

Ad = -(∑Fs / mass)

However, the force of gravity is always influencing the measured acceleration according to the following relationship:

Ad = -g - (∑Fs / mass)

For this reason, when the device is sitting on a table (and not accelerating), the accelerometer reads a magnitude of g =
9.81 m/s2. Similarly, when the device is in free fall and therefore rapidly accelerating toward the ground at 9.81 m/s2, its
accelerometer reads a magnitude of g = 0 m/s2. Therefore, to measure the real acceleration of the device, the
contribution of the force of gravity must be removed from the accelerometer data. This can be achieved by applying a
high-pass filter. Conversely, a low-pass filter can be used to isolate the force of gravity. The following example shows
how you can do this:

KOTLIN JAVA

// gravity and linear_acceleration are member arrays that the callback updates:
private val gravity = FloatArray(3)
private val linear_acceleration = FloatArray(3)

override fun onSensorChanged(event: SensorEvent) {
    // In this example, alpha is calculated as t / (t + dT),
    // where t is the low-pass filter's time-constant and
    // dT is the event delivery rate.
    val alpha: Float = 0.8f

    // Isolate the force of gravity with the low-pass filter.
    gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0]
    gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1]
    gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2]

    // Remove the gravity contribution with the high-pass filter.
    linear_acceleration[0] = event.values[0] - gravity[0]
    linear_acceleration[1] = event.values[1] - gravity[1]
    linear_acceleration[2] = event.values[2] - gravity[2]
}

Note: You can use many different techniques to filter sensor data. The code sample above uses a simple filter constant
(alpha) to create a low-pass filter. This filter constant is derived from a time constant (t), which is a rough representation of
the latency that the filter adds to the sensor events, and the sensor's event delivery rate (dt). The code sample uses an
alpha value of 0.8 for demonstration purposes. If you use this filtering method you may need to choose a different alpha
value.
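The behavior of this filter pair can be reproduced outside Android. In the plain-Java sketch below (no Android APIs; the class name is ours), a device resting flat reports a constant 9.81 m/s2 on the z axis; after enough samples the low-pass estimate converges to gravity and the high-pass output approaches zero:

```java
// Hypothetical standalone sketch of the low-pass/high-pass split above,
// run outside Android so the arithmetic can be checked directly.
public class GravityFilter {
    static final float ALPHA = 0.8f; // same demo constant as above

    float[] gravity = new float[3];

    // Processes one "sensor event" and returns the gravity-free acceleration.
    float[] step(float[] values) {
        float[] linear = new float[3];
        for (int i = 0; i < 3; i++) {
            // Low-pass: track the slowly varying gravity component.
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * values[i];
            // High-pass: what remains is the quickly varying motion.
            linear[i] = values[i] - gravity[i];
        }
        return linear;
    }

    public static void main(String[] args) {
        GravityFilter f = new GravityFilter();
        float[] linear = new float[3];
        // Device lying flat: only gravity on z, repeated for 200 samples.
        for (int i = 0; i < 200; i++) {
            linear = f.step(new float[] {0f, 0f, 9.81f});
        }
        System.out.println(Math.abs(f.gravity[2] - 9.81f) < 1e-3); // converged
        System.out.println(Math.abs(linear[2]) < 1e-3);            // ~zero
    }
}
```

With alpha = 0.8 the gravity estimate after n identical samples is 9.81 * (1 - 0.8^n), so convergence is geometric, which is why a larger alpha (longer time constant) reacts more slowly but smooths more.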

Accelerometers use the standard sensor coordinate system. In practice, this means that the following conditions apply
when a device is lying flat on a table in its natural orientation:

If you push the device on the left side (so it moves to the right), the x acceleration value is positive.

If you push the device on the bottom (so it moves away from you), the y acceleration value is positive.

If you push the device toward the sky with an acceleration of A m/s2, the z acceleration value is equal to A + 9.81,
which corresponds to the acceleration of the device (+A m/s2) minus the force of gravity (-9.81 m/s2).

The stationary device will have an acceleration value of +9.81, which corresponds to the acceleration of the device
(0 m/s2 minus the force of gravity, which is -9.81 m/s2).

In general, the accelerometer is a good sensor to use if you are monitoring device motion. Almost every
Android-powered handset and tablet has an accelerometer, and it uses about 10 times less power than the other motion
sensors. One drawback is that you might have to implement low-pass and high-pass filters to eliminate gravitational
forces and reduce noise.
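The z-axis condition above is plain arithmetic and can be checked directly; a minimal standalone Java sketch (the class and helper names are ours, not part of the Android API):

```java
// Standalone check of the z-axis relationship above:
// reading = device acceleration minus gravity (-9.81), i.e. A + 9.81.
public class AccelZAxis {
    static final double G = 9.81;

    static double zReading(double deviceAccelUp) {
        return deviceAccelUp + G;
    }

    public static void main(String[] args) {
        System.out.println(zReading(0.0) == 9.81);                 // stationary device
        System.out.println(Math.abs(zReading(2.0) - 11.81) < 1e-9); // pushed skyward at 2 m/s^2
    }
}
```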
Use the gyroscope
The gyroscope measures the rate of rotation in rad/s around a device's x, y, and z axis. The following code shows you
how to get an instance of the default gyroscope:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)

The sensor's coordinate system is the same as the one used for the acceleration sensor. Rotation is positive in the
counter-clockwise direction; that is, an observer looking from some positive location on the x, y or z axis at a device
positioned on the origin would report positive rotation if the device appeared to be rotating counter clockwise. This is
the standard mathematical definition of positive rotation and is not the same as the definition for roll that is used by the
orientation sensor.

Usually, the output of the gyroscope is integrated over time to calculate a rotation describing the change of angles over
the timestep. For example:

KOTLIN JAVA

// Create a constant to convert nanoseconds to seconds.
private val NS2S = 1.0f / 1000000000.0f
private val deltaRotationVector = FloatArray(4) { 0f }
private var timestamp: Float = 0f

override fun onSensorChanged(event: SensorEvent?) {
    // This timestep's delta rotation to be multiplied by the current rotation
    // after computing it from the gyro sample data.
    if (timestamp != 0f && event != null) {
        val dT = (event.timestamp - timestamp) * NS2S
        // Axis of the rotation sample, not normalized yet.
        var axisX: Float = event.values[0]
        var axisY: Float = event.values[1]
        var axisZ: Float = event.values[2]

        // Calculate the angular speed of the sample
        val omegaMagnitude: Float = sqrt(axisX * axisX + axisY * axisY + axisZ * axisZ)

        // Normalize the rotation vector if it's big enough to get the axis
        // (that is, EPSILON should represent your maximum allowable margin of error)
        if (omegaMagnitude > EPSILON) {
            axisX /= omegaMagnitude
            axisY /= omegaMagnitude
            axisZ /= omegaMagnitude
        }

        // Integrate around this axis with the angular speed by the timestep
        // in order to get a delta rotation from this sample over the timestep
        // We will convert this axis-angle representation of the delta rotation
        // into a quaternion before turning it into the rotation matrix.
        val thetaOverTwo: Float = omegaMagnitude * dT / 2.0f
        val sinThetaOverTwo: Float = sin(thetaOverTwo)
        val cosThetaOverTwo: Float = cos(thetaOverTwo)
        deltaRotationVector[0] = sinThetaOverTwo * axisX
        deltaRotationVector[1] = sinThetaOverTwo * axisY
        deltaRotationVector[2] = sinThetaOverTwo * axisZ
        deltaRotationVector[3] = cosThetaOverTwo
    }
    timestamp = event?.timestamp?.toFloat() ?: 0f
    val deltaRotationMatrix = FloatArray(9) { 0f }
    SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector)
    // User code should concatenate the delta rotation we computed with the current rotation
    // in order to get the updated rotation.
    // rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
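The axis-angle-to-quaternion step used above can be sanity-checked outside Android. The following plain-Java sketch (no Android APIs; the class name, rates, and timestep are hypothetical) integrates a constant angular rate about the z axis and confirms that composing the per-timestep delta quaternions recovers the expected total angle:

```java
// Hypothetical standalone sketch of the integration step above: for a
// constant angular rate omega (rad/s) about one axis, each timestep's delta
// quaternion has angle omega * dT, so composing N steps gives omega * N * dT.
public class GyroIntegration {
    // Quaternions stored as {x, y, z, w}; Hamilton product a * b.
    static double[] multiply(double[] a, double[] b) {
        return new double[] {
            a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
            a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
            a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
            a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]
        };
    }

    // Integrate a constant rate omega (rad/s) about the z axis for `steps`
    // samples of length dT seconds, returning the total rotation angle.
    static double integrateZ(double omega, double dT, int steps) {
        double[] rotation = {0.0, 0.0, 0.0, 1.0}; // identity quaternion
        for (int i = 0; i < steps; i++) {
            double thetaOverTwo = omega * dT / 2.0;
            // Per-sample delta rotation, mirroring the Kotlin sample above.
            double[] delta = {0.0, 0.0, Math.sin(thetaOverTwo), Math.cos(thetaOverTwo)};
            rotation = multiply(rotation, delta);
        }
        // Recover the angle from the quaternion: 2 * atan2(|vector|, scalar).
        return 2.0 * Math.atan2(rotation[2], rotation[3]);
    }

    public static void main(String[] args) {
        // 1 rad/s sampled every 10 ms for 100 samples -> 1 rad in total.
        System.out.println(Math.abs(integrateZ(1.0, 0.01, 100) - 1.0) < 1e-9);
    }
}
```

On a device, `dT` varies per event and the axis changes sample to sample, which is exactly why the Kotlin code recomputes the axis and theta for every callback.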
Standard gyroscopes provide raw rotational data without any filtering or correction for noise and drift (bias). In practice,
gyroscope noise and drift will introduce errors that need to be compensated for. You usually determine the drift (bias)
and noise by monitoring other sensors, such as the gravity sensor or accelerometer.

Use the uncalibrated gyroscope

The uncalibrated gyroscope is similar to the gyroscope, except that no gyro-drift compensation is applied to the rate of
rotation. Factory calibration and temperature compensation are still applied to the rate of rotation. The uncalibrated
gyroscope is useful for post-processing and melding orientation data. In general, gyroscope_event.values[0] will be
close to uncalibrated_gyroscope_event.values[0] - uncalibrated_gyroscope_event.values[3]. That is,

calibrated_x ~= uncalibrated_x - bias_estimate_x

Note: Uncalibrated sensors provide more raw results and may include some bias, but their measurements contain fewer
jumps from corrections applied through calibration. Some applications may prefer these uncalibrated results as smoother
and more reliable. For instance, if an application is attempting to conduct its own sensor fusion, introducing calibrations
can actually distort results.

In addition to the rates of rotation, the uncalibrated gyroscope also provides the estimated drift around each axis. The
following code shows you how to get an instance of the default uncalibrated gyroscope:

KOTLIN JAVA

val sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
val sensor: Sensor? = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE_UNCALIBRATED)
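The calibrated_x ~= uncalibrated_x - bias_estimate_x relationship can be sketched numerically. The values below are hypothetical, and the sketch is plain Java with no Android APIs; it only illustrates the index layout (rates in values[0..2], drift estimates in values[3..5]):

```java
// Hypothetical standalone sketch: recover approximately calibrated rates
// from an uncalibrated gyroscope event whose values array holds rates in
// [0..2] and estimated drifts in [3..5].
public class GyroBias {
    static float[] applyBias(float[] uncalibrated) {
        float[] calibrated = new float[3];
        for (int i = 0; i < 3; i++)
            calibrated[i] = uncalibrated[i] - uncalibrated[i + 3];
        return calibrated;
    }

    public static void main(String[] args) {
        // Example event: rates (0.50, -0.20, 0.10) rad/s with estimated
        // drifts (0.02, -0.01, 0.00) rad/s.
        float[] values = {0.50f, -0.20f, 0.10f, 0.02f, -0.01f, 0.00f};
        float[] cal = applyBias(values);
        System.out.println(Math.abs(cal[0] - 0.48f) < 1e-6);
        System.out.println(Math.abs(cal[1] + 0.19f) < 1e-6);
    }
}
```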

Additional code samples

The BatchStepSensor  sample further demonstrates the use of the APIs covered on this page.

You should also read

Sensors

Sensors Overview

Position Sensors

Environment Sensors


Content and code samples on this page are subject to the licenses described in the Content License. Java is a registered trademark of Oracle and/or its a liates.

Last updated 2021-03-13 UTC.

Android Developers  Docs  Reference




Added in API level 1

MediaRecorder

public class MediaRecorder
extends Object implements AudioRouting, AudioRecordingMonitor, MicrophoneDirection

java.lang.Object
   ↳ android.media.MediaRecorder

Used to record audio and video. The recording control is based on a simple state machine (see below).

A common case of using MediaRecorder to record audio works as follows:


MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile(PATH_NAME);
recorder.prepare();
recorder.start();   // Recording is now started
...
recorder.stop();
recorder.reset();   // You can reuse the object by going back to setAudioSource() step
recorder.release(); // Now the object cannot be reused

Applications may want to register for informational and error events in order to be informed of some internal update and possible runtime errors during recording. Registration for such events is done by setting the appropriate listeners (via calls to setOnInfoListener(android.media.MediaRecorder.OnInfoListener) and/or setOnErrorListener(android.media.MediaRecorder.OnErrorListener)). In order to receive the respective callback associated with these listeners, applications are required to create MediaRecorder objects on threads with a Looper running (the main UI thread by default already has a Looper running).

Note: Currently, MediaRecorder does not work on the emulator.
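As a sketch of the listener registration described above (the info and error constants used here are documented later on this page):

```java
// Sketch: registering info and error listeners.
// Must run on a thread with a Looper (the main thread qualifies).
MediaRecorder recorder = new MediaRecorder();
recorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // The configured duration limit was hit; recording has stopped.
        }
    }
});
recorder.setOnErrorListener(new MediaRecorder.OnErrorListener() {
    @Override
    public void onError(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_ERROR_SERVER_DIED) {
            mr.release(); // per the docs: release and create a new instance
        }
    }
});
```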

 Developer Guides

For more information about how to use MediaRecorder for recording video, read the Camera developer guide. For more
information about how to use MediaRecorder for recording sound, read the Audio Capture developer guide.
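By analogy with the audio example above, a minimal video-recording sequence might look like the following sketch (the resolution, frame rate, and H.264/MPEG-4 choices are illustrative, not prescribed by this page):

```java
// Sketch: basic video recording with a persistent input surface.
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setVideoSize(1280, 720);   // illustrative resolution
recorder.setVideoFrameRate(30);     // illustrative frame rate
recorder.setOutputFile(PATH_NAME);  // as in the audio example
recorder.prepare();
recorder.start();
// ... render frames to recorder.getSurface() ...
recorder.stop();
recorder.release();
```

Note the ordering constraints documented below: the source must be set before the output format, and the encoder after the format but before prepare().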

Summary

Nested classes

class MediaRecorder.AudioEncoder

Defines the audio encoding.

class MediaRecorder.AudioSource

Defines the audio source.

class MediaRecorder.MetricsConstants

interface MediaRecorder.OnErrorListener

Interface definition for a callback to be invoked when an error occurs while recording.

interface MediaRecorder.OnInfoListener

Interface definition of a callback to be invoked to communicate some info and/or warning about the recording.

class MediaRecorder.OutputFormat

Defines the output format.

class MediaRecorder.VideoEncoder

Defines the video encoding.

class MediaRecorder.VideoSource

Defines the video source.

Constants

int MEDIA_ERROR_SERVER_DIED

Media server died.

int MEDIA_RECORDER_ERROR_UNKNOWN

Unspecified media recorder error.

int MEDIA_RECORDER_INFO_MAX_DURATION_REACHED

A maximum duration had been setup and has now been reached.

int MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING

A maximum filesize had been setup and current recorded file size has reached 90% of the limit.

int MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED

A maximum filesize had been setup and has now been reached.

int MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED

A maximum filesize had been reached and MediaRecorder has switched output to a new file set by application setNextOutputFile(File).

int MEDIA_RECORDER_INFO_UNKNOWN

Unspecified media recorder info.

Inherited constants

From interface android.media.MicrophoneDirection

Public constructors

MediaRecorder()

This constructor is deprecated. Use MediaRecorder(android.content.Context) instead.

MediaRecorder(Context context)

Default constructor.

Public methods

void addOnRoutingChangedListener(AudioRouting.OnRoutingChangedListener
listener, Handler handler)

Adds an AudioRouting.OnRoutingChangedListener to receive notifications of routing changes on this MediaRecorder.

List<MicrophoneInfo> getActiveMicrophones()

Returns a list of MicrophoneInfo representing the active microphones.

AudioRecordingConfiguration getActiveRecordingConfiguration()

Returns the current active audio recording for this audio recorder.

static final int getAudioSourceMax()

Gets the maximum value for audio sources.

LogSessionId getLogSessionId()

Returns the LogSessionId for MediaRecorder.

int getMaxAmplitude()

Returns the maximum absolute amplitude that was sampled since the last call to this
method.

PersistableBundle getMetrics()

Returns metrics data about the current MediaRecorder instance.

AudioDeviceInfo getPreferredDevice()

Returns the selected input device specified by setPreferredDevice(AudioDeviceInfo).

AudioDeviceInfo getRoutedDevice()

Returns an AudioDeviceInfo identifying the current routing of this MediaRecorder. Note: the query is only valid if the MediaRecorder is currently recording.

Surface getSurface()
Gets the surface to record from when using SURFACE video source.

boolean isPrivacySensitive()

Returns whether this MediaRecorder is marked as privacy sensitive or not with regard to
audio capture.

void pause()

Pauses recording.

void prepare()

Prepares the recorder to begin capturing and encoding data.

void registerAudioRecordingCallback(Executor executor,


AudioManager.AudioRecordingCallback cb)

Register a callback to be notified of audio capture changes via an AudioManager.AudioRecordingCallback.

void release()

Releases resources associated with this MediaRecorder object.

void removeOnRoutingChangedListener(AudioRouting.OnRoutingChangedListener
listener)

Removes an AudioRouting.OnRoutingChangedListener which has been previously added to receive rerouting notifications.

void reset()

Restarts the MediaRecorder to its idle state.

void resume()

Resumes recording.

void setAudioChannels(int numChannels)

Sets the number of audio channels for recording.

void setAudioEncoder(int audio_encoder)

Sets the audio encoder to be used for recording.

void setAudioEncodingBitRate(int bitRate)

Sets the audio encoding bit rate for recording.

void setAudioSamplingRate(int samplingRate)

Sets the audio sampling rate for recording.

void setAudioSource(int audioSource)

Sets the audio source to be used for recording.

void setCamera(Camera c)

This method was deprecated in API level 21. Use getSurface()and the
android.hardware.camera2API instead.

void setCaptureRate(double fps)

Set video frame capture rate.

void setInputSurface(Surface surface)

Configures the recorder to use a persistent surface when using SURFACE video source.

void setLocation(float latitude, float longitude)

Set and store the geodata (latitude and longitude) in the output file.

void setLogSessionId(LogSessionId id)

Sets the LogSessionId for MediaRecorder.

void setMaxDuration(int max_duration_ms)

Sets the maximum duration (in ms) of the recording session.

void setMaxFileSize(long max_filesize_bytes)

Sets the maximum filesize (in bytes) of the recording session.

void setNextOutputFile(File file)

Sets the next output file to be used when the maximum filesize is reached on the prior output (setOutputFile(File) or setNextOutputFile(File)).

void setNextOutputFile(FileDescriptor fd)

Sets the next output file descriptor to be used when the maximum filesize is reached on the prior output (setOutputFile(File) or setNextOutputFile(File)).

void setOnErrorListener(MediaRecorder.OnErrorListener l)

Register a callback to be invoked when an error occurs while recording.

void setOnInfoListener(MediaRecorder.OnInfoListener listener)

Register a callback to be invoked when an informational event occurs while recording.

void setOrientationHint(int degrees)

Sets the orientation hint for output video playback.

void setOutputFile(FileDescriptor fd)

Pass in the file descriptor of the file to be written.

void setOutputFile(String path)

Sets the path of the output file to be produced.

void setOutputFile(File file)

Pass in the file object to be written.

void setOutputFormat(int output_format)

Sets the format of the output file produced during recording.

boolean setPreferredDevice(AudioDeviceInfo deviceInfo)

Specifies an audio device (via an AudioDeviceInfo object) to route the input from this MediaRecorder.

boolean setPreferredMicrophoneDirection(int direction)

Specifies the logical microphone (for processing).

boolean setPreferredMicrophoneFieldDimension(float zoom)

Specifies the zoom factor (i.e. the field dimension) for the selected microphone.

void setPreviewDisplay(Surface sv)

Sets a Surface to show a preview of recorded media (video).

void setPrivacySensitive(boolean privacySensitive)

Indicates that this capture request is privacy sensitive and that any concurrent capture is not
permitted.

void setProfile(CamcorderProfile profile)

Uses the settings from a CamcorderProfile object for recording.

void setVideoEncoder(int video_encoder)

Sets the video encoder to be used for recording.

void setVideoEncodingBitRate(int bitRate)

Sets the video encoding bit rate for recording.

void setVideoEncodingProfileLevel(int profile, int level)

Sets the desired video encoding profile and level for recording.

void setVideoFrameRate(int rate)

Sets the frame rate of the video to be captured.

void setVideoSize(int width, int height)

Sets the width and height of the video to be captured.

void setVideoSource(int video_source)


Sets the video source to be used for recording.

void start()

Begins capturing and encoding data to the file specified with setOutputFile().

void stop()

Stops recording.

void unregisterAudioRecordingCallback(AudioManager.AudioRecordingCallback
cb)

Unregister an audio recording callback previously registered with


registerAudioRecordingCallback(java.util.concurrent.Executor,
android.media.AudioManager.AudioRecordingCallback).

Protected methods

void finalize()

Called by the garbage collector on an object when garbage collection determines that there are no more references to the
object.

Inherited methods

From class java.lang.Object

From interface android.media.AudioRouting

From interface android.media.AudioRecordingMonitor

From interface android.media.MicrophoneDirection

Constants

MEDIA_ERROR_SERVER_DIED

Added in API level 17

public static final int MEDIA_ERROR_SERVER_DIED

Media server died. In this case, the application must release the MediaRecorder object and instantiate a new one.

See also:

MediaRecorder.OnErrorListener

Constant Value: 100 (0x00000064)

MEDIA_RECORDER_ERROR_UNKNOWN

Added in API level 3

public static final int MEDIA_RECORDER_ERROR_UNKNOWN

Unspecified media recorder error.

See also:

MediaRecorder.OnErrorListener

Constant Value: 1 (0x00000001)


MEDIA_RECORDER_INFO_MAX_DURATION_REACHED

Added in API level 3

public static final int MEDIA_RECORDER_INFO_MAX_DURATION_REACHED

A maximum duration had been setup and has now been reached.

See also:

MediaRecorder.OnInfoListener

Constant Value: 800 (0x00000320)

MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING

Added in API level 26

public static final int MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING

A maximum filesize had been setup and current recorded file size has reached 90% of the limit. This is sent once per file upon reaching/passing the 90% limit. To continue the recording, application should use setNextOutputFile(File) to set the next output file. Otherwise, recording will stop when reaching maximum file size.

See also:

MediaRecorder.OnInfoListener

Constant Value: 802 (0x00000322)

MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED

Added in API level 3

public static final int MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED

A maximum filesize had been setup and has now been reached. Note: This event will not be sent if application already set next output file through setNextOutputFile(File).

See also:

MediaRecorder.OnInfoListener

Constant Value: 801 (0x00000321)

MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED

Added in API level 26

public static final int MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED

A maximum filesize had been reached and MediaRecorder has switched output to a new file set by application setNextOutputFile(File). For best practice, application should use this event to keep track of whether the file previously set has been used or not.

See also:

MediaRecorder.OnInfoListener

Constant Value: 803 (0x00000323)


MEDIA_RECORDER_INFO_UNKNOWN

Added in API level 3

public static final int MEDIA_RECORDER_INFO_UNKNOWN

Unspecified media recorder info.

See also:

MediaRecorder.OnInfoListener

Constant Value: 1 (0x00000001)

Public constructors

MediaRecorder

Added in API level 1

public MediaRecorder ()

This constructor is deprecated. Use MediaRecorder(android.content.Context) instead.

Default constructor.

MediaRecorder

Added in Android S

public MediaRecorder (Context context)

Default constructor.

Parameters

context Context: the Context the recorder belongs to. This value cannot be null.

Public methods

addOnRoutingChangedListener

Added in API level 28

public void addOnRoutingChangedListener (AudioRouting.OnRoutingChangedListener listener,


Handler handler)

Adds an AudioRouting.OnRoutingChangedListener to receive notifications of routing changes on this MediaRecorder.

Parameters

listener AudioRouting.OnRoutingChangedListener: The AudioRouting.OnRoutingChangedListener interface to receive notifications of rerouting events.

handler Handler: Specifies the Handler object for the thread on which to execute the callback. If null, the handler on the main looper will be used.

getActiveMicrophones

Added in API level 28

public List<MicrophoneInfo> getActiveMicrophones ()

Returns a list of MicrophoneInfo representing the active microphones. By querying the channel mapping for each active microphone, a developer can know how the microphone is used by each channel or a capture stream.

Returns

List<MicrophoneInfo> a list of MicrophoneInfo representing the active microphones

Throws

IOException if an error occurs
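A sketch of enumerating the active microphones on a recorder that is currently recording:

```java
// Sketch: inspect the active microphones of a recording MediaRecorder.
try {
    for (MicrophoneInfo mic : recorder.getActiveMicrophones()) {
        String description = mic.getDescription(); // device description string
        // mic.getChannelMapping() shows how each channel uses this microphone
    }
} catch (IOException e) {
    // getActiveMicrophones() throws IOException if an error occurs
}
```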

getActiveRecordingConfiguration

Added in API level 29

public AudioRecordingConfiguration getActiveRecordingConfiguration ()

Returns the current active audio recording for this audio recorder.

Returns

AudioRecordingConfiguration a valid AudioRecordingConfiguration if this recorder is active, or null otherwise.

See also:

AudioRecordingConfiguration

getAudioSourceMax

Added in API level 4

public static final int getAudioSourceMax ()

Gets the maximum value for audio sources.

Returns

int

See also:

MediaRecorder.AudioSource

getLogSessionId
Added in Android S

public LogSessionId getLogSessionId ()

Returns the LogSessionId for MediaRecorder.

Returns

LogSessionId the global ID for monitoring the MediaRecorder performance. This value cannot be null.

getMaxAmplitude

Added in API level 1

public int getMaxAmplitude ()

Returns the maximum absolute amplitude that was sampled since the last call to this method. Call this only after setAudioSource().

Returns

int the maximum absolute amplitude measured since the last call, or 0 when called for the first time

Throws

IllegalStateException if it is called before the audio source has been set.
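Because each call returns the peak since the previous call, a simple input-level meter can poll this method periodically after recording has started. A sketch (the one-second interval and Handler-based polling are illustrative choices):

```java
// Sketch: poll the peak amplitude once per second on the main thread.
final Handler handler = new Handler(Looper.getMainLooper());
handler.post(new Runnable() {
    @Override
    public void run() {
        int peak = recorder.getMaxAmplitude(); // peak since the previous call
        // update a level meter with `peak` here
        handler.postDelayed(this, 1000);
    }
});
```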

getMetrics

Added in API level 26

public PersistableBundle getMetrics ()

Returns metrics data about the current MediaRecorder instance.

Returns

PersistableBundle a PersistableBundle containing the set of attributes and values available for the media being generated by this instance of MediaRecorder. The attributes are described in MetricsConstants. Additional vendor-specific fields may also be present in the return value.

getPreferredDevice

Added in API level 28

public AudioDeviceInfo getPreferredDevice ()

Returns the selected input device specified by setPreferredDevice(AudioDeviceInfo). Note that this is not guaranteed to correspond to the actual device being used for recording.

Returns

AudioDeviceInfo

getRoutedDevice
Added in API level 28

public AudioDeviceInfo getRoutedDevice ()

Returns an AudioDeviceInfo identifying the current routing of this MediaRecorder. Note: the query is only valid if the MediaRecorder is currently recording. If the recorder is not recording, the returned device can be null or correspond to a previously selected device when the recorder was last active.

Returns

AudioDeviceInfo

getSurface

Added in API level 21

public Surface getSurface ()

Gets the surface to record from when using SURFACE video source.

May only be called after prepare(). Frames rendered to the Surface before start() will be discarded.

Returns

Surface

Throws

IllegalStateException if it is called before prepare(), after stop(), or is called when VideoSource is not set to
SURFACE.

See also:

MediaRecorder.VideoSource

isPrivacySensitive

Added in API level 30

public boolean isPrivacySensitive ()

Returns whether this MediaRecorder is marked as privacy sensitive or not with regard to audio capture.

See setPrivacySensitive(boolean)

Returns

boolean true if privacy sensitive, false otherwise

pause

Added in API level 24

public void pause ()

Pauses recording. Call this after start(). You may resume recording with resume() without reconfiguration, as opposed to stop(). It does nothing if the recording is already paused. When the recording is paused and resumed, the resulting output would be as if nothing happened during the paused period, immediately switching to the resumed scene.

Throws

IllegalStateException if it is called before start() or after stop()

prepare

Added in API level 1

public void prepare ()

Prepares the recorder to begin capturing and encoding data. This method must be called after setting up the desired audio and video sources, encoders, file format, etc., but before start().

Throws

IllegalStateException if it is called after start() or before setOutputFormat().

IOException if prepare fails otherwise.

registerAudioRecordingCallback

Added in API level 29

public void registerAudioRecordingCallback (Executor executor,


AudioManager.AudioRecordingCallback cb)

Register a callback to be notified of audio capture changes via an AudioManager.AudioRecordingCallback. A callback is received when the capture path configuration changes (pre-processing, format, sampling rate...) or capture is silenced/unsilenced by the system.

Parameters

executor Executor: Executor to handle the callbacks. This value cannot be null. Callback and listener events are dispatched through this Executor, providing an easy way to control which thread is used. To dispatch events through the main thread of your application, you can use Context.getMainExecutor(). To dispatch events through a shared thread pool, you can use AsyncTask#THREAD_POOL_EXECUTOR.

cb AudioManager.AudioRecordingCallback: non-null callback to register. This value cannot be null.

release

Added in API level 1

public void release ()

Releases resources associated with this MediaRecorder object. It is good practice to call this method when you're done using the MediaRecorder. In particular, whenever an Activity of an application is paused (its onPause() method is called), or stopped (its onStop() method is called), this method should be invoked to release the MediaRecorder object, unless the application has a special need to keep the object around. In addition to unnecessary resources (such as memory and instances of codecs) being held, failure to call this method immediately if a MediaRecorder object is no longer needed may also lead to continuous battery consumption for mobile devices, and recording failure for other applications if multiple instances of the same codec are not supported on a device. Even if multiple instances of the same codec are supported, some performance degradation may be expected when unnecessary multiple instances are used at the same time.
removeOnRoutingChangedListener

Added in API level 28

public void removeOnRoutingChangedListener (AudioRouting.OnRoutingChangedListener listener)

Removes an AudioRouting.OnRoutingChangedListener which has been previously added to receive rerouting notifications.

Parameters

listener AudioRouting.OnRoutingChangedListener: The previously added AudioRouting.OnRoutingChangedListener interface to remove.

reset

Added in API level 1

public void reset ()

Restarts the MediaRecorder to its idle state. After calling this method, you will have to configure it again as if it had just been constructed.

resume

Added in API level 24

public void resume ()

Resumes recording. Call this after start(). It does nothing if the recording is not paused.

Throws

IllegalStateException if it is called before start() or after stop()

See also:

pause()
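Taken together, pause() and resume() allow a simple record/pause toggle without tearing down the recorder. A sketch (the `paused` flag is app-maintained state, since MediaRecorder does not expose one):

```java
// Sketch: toggling pause/resume on an active recorder (API level 24+).
private boolean paused = false;

void togglePause(MediaRecorder recorder) {
    if (paused) {
        recorder.resume();  // no reconfiguration needed, unlike stop()
    } else {
        recorder.pause();   // output splices directly to the resumed scene
    }
    paused = !paused;
}
```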

setAudioChannels

Added in API level 8

public void setAudioChannels (int numChannels)

Sets the number of audio channels for recording. Call this method before prepare(). Prepare() may perform additional checks on the parameter to make sure whether the specified number of audio channels are applicable.

Parameters

numChannels int: the number of audio channels. Usually it is either 1 (mono) or 2 (stereo).

setAudioEncoder

Added in API level 1


public void setAudioEncoder (int audio_encoder)

Sets the audio encoder to be used for recording. If this method is not called, the output file will not contain an audio track. Call this after setOutputFormat() but before prepare().

Parameters

audio_encoder int: the audio encoder to use.

Throws

IllegalStateException if it is called before setOutputFormat() or after prepare().

See also:

MediaRecorder.AudioEncoder

setAudioEncodingBitRate

Added in API level 8

public void setAudioEncodingBitRate (int bitRate)

Sets the audio encoding bit rate for recording. Call this method before prepare(). Prepare() may perform additional checks on the parameter to make sure whether the specified bit rate is applicable, and sometimes the passed bitRate will be clipped internally to ensure the audio recording can proceed smoothly based on the capabilities of the platform.

Parameters

bitRate int: the audio encoding bit rate in bits per second.

setAudioSamplingRate

Added in API level 8

public void setAudioSamplingRate (int samplingRate)

Sets the audio sampling rate for recording. Call this method before prepare(). Prepare() may perform additional checks on the parameter to make sure whether the specified audio sampling rate is applicable. The sampling rate really depends on the format for the audio recording, as well as the capabilities of the platform. For instance, the sampling rate supported by the AAC audio coding standard ranges from 8 to 96 kHz, the sampling rate supported by AMR-NB is 8 kHz, and the sampling rate supported by AMR-WB is 16 kHz. Please consult the related audio coding standard for the supported audio sampling rate.

Parameters

samplingRate int: the sampling rate for audio in samples per second.

setAudioSource

Added in API level 1

public void setAudioSource (int audioSource)

Sets the audio source to be used for recording. If this method is not called, the output file will not contain an audio track. The source needs to be specified before setting recording-parameters or encoders. Call this only before setOutputFormat().

Parameters

audioSource int: the audio source to use Value is MediaRecorder.AudioSource.DEFAULT,


MediaRecorder.AudioSource.MIC, MediaRecorder.AudioSource.VOICE_UPLINK,
MediaRecorder.AudioSource.VOICE_DOWNLINK,
MediaRecorder.AudioSource.VOICE_CALL, MediaRecorder.AudioSource.CAMCORDER,
MediaRecorder.AudioSource.VOICE_RECOGNITION,
MediaRecorder.AudioSource.VOICE_COMMUNICATION,
MediaRecorder.AudioSource.UNPROCESSED, or
MediaRecorder.AudioSource.VOICE_PERFORMANCE

Throws

IllegalStateException if it is called after setOutputFormat()

See also:

MediaRecorder.AudioSource

setCamera

Added in API level 3


Deprecated in API level 21

public void setCamera (Camera c)

This method was deprecated in API level 21. Use getSurface() and the android.hardware.camera2 API instead.

Sets a Camera to use for recording.

Use this function to switch quickly between preview and capture mode without a teardown of the camera object.
Camera.unlock() should be called before this. Must call before prepare().

Parameters

c Camera: the Camera to use for recording

setCaptureRate

Added in API level 11

public void setCaptureRate (double fps)

Set video frame capture rate. This can be used to set a different video frame capture rate than the recorded video's
playback rate. This method also sets the recording mode to time lapse. In time lapse video recording, only video is
recorded. Audio related parameters are ignored when a time lapse recording session starts, if an application sets
them.

Parameters

fps double: Rate at which frames should be captured in frames per second. The fps can go as low as
desired. However the fastest fps will be limited by the hardware. For resolutions that can be
captured by the video camera, the fastest fps can be computed using
Camera.Parameters.getPreviewFpsRange(int[]). For higher resolutions the fastest fps
may be more restrictive. Note that the recorder cannot guarantee that frames will be captured at
the given rate due to camera/encoder limitations. However it tries to be as close as possible.
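For instance, a time-lapse configuration that captures one frame every ten seconds, played back at 30 fps, might be sketched as follows (the encoder, format, and rates are illustrative):

```java
// Sketch: time-lapse recording. Audio-related settings are ignored
// once setCaptureRate() puts the recorder into time-lapse mode.
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setCaptureRate(0.1);    // capture 1 frame every 10 seconds
recorder.setVideoFrameRate(30);  // played back at 30 fps
recorder.setOutputFile(PATH_NAME);
recorder.prepare();
recorder.start();
```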
setInputSurface

Added in API level 23

public void setInputSurface (Surface surface)

Configures the recorder to use a persistent surface when using SURFACE video source.

May only be called before prepare(). If called, getSurface() should not be used and will throw IllegalStateException. Frames rendered to the Surface before start() will be discarded.

Parameters

surface Surface: a persistent input surface created by MediaCodec#createPersistentInputSurface. This value cannot be null.

Throws

IllegalStateException if it is called after prepare() and before stop().

IllegalArgumentException if the surface was not created by MediaCodec#createPersistentInputSurface.

See also:

MediaCodec.createPersistentInputSurface()

MediaRecorder.VideoSource

setLocation

Added in API level 14

public void setLocation (float latitude,


float longitude)

Set and store the geodata (latitude and longitude) in the output file. This method should be called before prepare(). The geodata is stored in udta box if the output format is OutputFormat.THREE_GPP or OutputFormat.MPEG_4, and is ignored for other output formats. The geodata is stored according to ISO-6709 standard.

Parameters

latitude float: latitude in degrees. Its value must be in the range [-90, 90].

longitude float: longitude in degrees. Its value must be in the range [-180, 180].

Throws

IllegalArgumentException if the given latitude or longitude is out of range.

setLogSessionId

Added in Android S

public void setLogSessionId (LogSessionId id)

Sets the LogSessionId for MediaRecorder.

Parameters

id LogSessionId: the global ID for monitoring the MediaRecorder performance. This value cannot be null.

setMaxDuration

Added in API level 3

public void setMaxDuration (int max_duration_ms)

Sets the maximum duration (in ms) of the recording session. Call this after setOutputFormat() but before prepare(). After recording reaches the specified duration, a notification will be sent to the MediaRecorder.OnInfoListener with a "what" code of MEDIA_RECORDER_INFO_MAX_DURATION_REACHED and recording will be stopped. Stopping happens asynchronously; there is no guarantee that the recorder will have stopped by the time the listener is notified.

When using MPEG-4 container (setOutputFormat(int) with OutputFormat#MPEG_4), it is recommended to set a maximum duration that fits the use case. Setting a larger than required duration may result in a larger than needed output file because of space reserved for the MOOV box expecting large movie data in this recording session. Unused space of the MOOV box is turned into a FREE box in the output file.

Parameters

max_duration_ms int: the maximum duration in ms (if zero or negative, disables the duration limit)

Throws

IllegalArgumentException

setMaxFileSize

Added in API level 3

public void setMaxFileSize (long max_filesize_bytes)

Sets the maximum filesize (in bytes) of the recording session. Call this after setOutputFormat() but before prepare(). After recording reaches the specified filesize, a notification will be sent to the MediaRecorder.OnInfoListener with a "what" code of MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED and recording will be stopped. Stopping happens asynchronously; there is no guarantee that the recorder will have stopped by the time the listener is notified.

When using MPEG-4 container (setOutputFormat(int) with OutputFormat#MPEG_4), it is recommended to set a maximum filesize that fits the use case. Setting a larger than required filesize may result in a larger than needed output file because of space reserved for the MOOV box expecting large movie data in this recording session. Unused space of the MOOV box is turned into a FREE box in the output file.

Parameters

max_filesize_bytes long: the maximum filesize in bytes (if zero or negative, disables the limit)

Throws

IllegalArgumentException

setNextOutputFile

Added in API level 26

public void setNextOutputFile (File file)

Sets the next output file to be used when the maximum filesize is reached on the prior output (setOutputFile(File) or setNextOutputFile(File)). File should be seekable. After setting the next output file, application should not use the file until stop(). Application must call this after receiving on the MediaRecorder.OnInfoListener a "what" code of MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING and before receiving a "what" code of MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED. The file is not used until switching to that output. Application will receive MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED when the next output file is used. Application will not be able to set a new output file if the previous one has not been used. Application is responsible for cleaning up unused files after stop() is called.

Parameters

file File: The file to use.

Throws

IllegalStateException if it is called before prepare().

IOException if setNextOutputFile fails otherwise.

setNextOutputFile

Added in API level 26

public void setNextOutputFile (FileDescriptor fd)

Sets the next output file descriptor to be used when the maximum filesize is reached on the prior output (setOutputFile(File) or setNextOutputFile(File)). File descriptor must be seekable and writable. After setting the next output file, application should not use the file referenced by this file descriptor until stop(). It is the application's responsibility to close the file descriptor. It is safe to do so as soon as this call returns. Application must call this after receiving on the MediaRecorder.OnInfoListener a "what" code of MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING and before receiving a "what" code of MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED. The file is not used until switching to that output. Application will receive MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED when the next output file is used. Application will not be able to set a new output file if the previous one has not been used. Application is responsible for cleaning up unused files after stop() is called.

Parameters

fd FileDescriptor: an open file descriptor to be written into.

Throws

IllegalStateException if it is called before prepare().

IOException if setNextOutputFile fails otherwise.
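The segmented-output protocol described above can be sketched as an OnInfoListener that hands the recorder a new file each time the current one approaches its size limit (the 10 MB limit, `outputDir`, and the segment naming scheme are hypothetical):

```java
// Sketch: rolling output files with setMaxFileSize + setNextOutputFile.
// Call before prepare():
recorder.setMaxFileSize(10 * 1024 * 1024); // 10 MB per segment (illustrative)
recorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    int segment = 1;

    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING) {
            try {
                // Hypothetical naming scheme for the next segment.
                mr.setNextOutputFile(
                        new File(outputDir, "segment-" + (++segment) + ".mp4"));
            } catch (IOException e) {
                // Could not set the next file; recording stops at the limit.
            }
        }
        // MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED signals that the
        // previously set file is now in use and a new one may be set.
    }
});
```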

setOnErrorListener

Added in API level 3

public void setOnErrorListener (MediaRecorder.OnErrorListener l)

Register a callback to be invoked when an error occurs while recording.

Parameters

l MediaRecorder.OnErrorListener: the callback that will be run


setOnInfoListener

Added in API level 3

public void setOnInfoListener (MediaRecorder.OnInfoListener listener)

Register a callback to be invoked when an informational event occurs while recording.

Parameters

listener MediaRecorder.OnInfoListener: the callback that will be run

setOrientationHint

Added in API level 9

public void setOrientationHint (int degrees)

Sets the orientation hint for output video playback. This method should be called before prepare(). This method will
not trigger the source video frame to rotate during video recording, but to add a composition matrix containing the
rotation angle in the output video if the output format is OutputFormat.THREE_GPP or OutputFormat.MPEG_4 so that a
video player can choose the proper orientation for playback. Note that some video players may choose to ignore the
composition matrix in a video during playback.

Parameters

degrees int: the angle to be rotated clockwise in degrees. The supported angles are 0, 90, 180, and 270
degrees.

Throws

IllegalArgumentException if the angle is not supported.

setOutputFile

Added in API level 3

public void setOutputFile (FileDescriptor fd)

Pass in the 몭le descriptor of the 몭le to be written. Call this after setOutputFormat() but before prepare().

Parameters

fd FileDescriptor: an open 몭le descriptor to be written into.

Throws

IllegalStateException if it is called before setOutputFormat() or after prepare()

setOutputFile

Added in API level 1

public void setOutputFile (String path)

Sets the path of the output 몭le to be produced. Call this after setOutputFormat() but before prepare().
Parameters

path String: The pathname to use.

Throws

IllegalStateException if it is called before setOutputFormat() or after prepare()

setOutputFile

Added in API level 26

public void setOutputFile (File file)

Pass in the 몭le object to be written. Call this after setOutputFormat() but before prepare(). File should be seekable.
After setting the next output 몭le, application should not use the 몭le until stop(). Application is responsible for
cleaning up unused 몭les after stop() is called.

Parameters

file File: the 몭le object to be written into.

setOutputFormat

Added in API level 1

public void setOutputFormat (int output_format)

Sets the format of the output 몭le produced during recording. Call this after setAudioSource()/setVideoSource() but
before prepare().

It is recommended to always use 3GP format when using the H.263 video encoder and AMR audio encoder. Using an
MPEG-4 container format may confuse some desktop players.

Parameters

output_format int: the output format to use. The output format needs to be specified before setting
recording-parameters or encoders.

Throws

IllegalStateException if it is called after prepare() or before setAudioSource()/setVideoSource().

See also:

MediaRecorder.OutputFormat
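Taken together with the source, encoder, and file setters documented on this page, the ordering constraints above imply a fixed configuration sequence. A minimal audio-only sketch (the output path is hypothetical, and the Android framework is required to run it):

```java
// Sketch (requires the Android framework): the configuration order implied
// by the reference above - sources, then container format, then encoders,
// then destination, then prepare().
import android.media.MediaRecorder;

import java.io.IOException;

class AudioNoteRecorder {
    static MediaRecorder buildRecorder(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);         // 1. source(s)
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); // 2. container
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);    // 3. encoder(s)
        recorder.setOutputFile(outputPath);                             // 4. destination
        recorder.prepare();                                             // 5. prepare
        return recorder;                                                // caller invokes start()
    }
}
```

The 3GP/AMR pairing follows the recommendation above; swapping in MPEG_4 with AAC would follow the same ordering.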

setPreferredDevice

Added in API level 28

public boolean setPreferredDevice (AudioDeviceInfo deviceInfo)

Speci몭es an audio device (via an AudioDeviceInfo object) to route the input from this MediaRecorder.

Parameters
deviceInfo AudioDeviceInfo: The AudioDeviceInfo specifying the audio source. If deviceInfo is null,
default routing is restored.

Returns

boolean true if successful, false if the specified AudioDeviceInfo is non-null and does not correspond to
a valid audio input device.
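The method above pairs naturally with AudioManager device enumeration. A sketch (requires the Android framework) that prefers a wired headset microphone when one is present:

```java
// Sketch (requires the Android framework): routing MediaRecorder input to
// the first attached wired-headset microphone, if any.
import android.content.Context;
import android.media.AudioDeviceInfo;
import android.media.AudioManager;
import android.media.MediaRecorder;

class InputRouting {
    static boolean preferWiredHeadset(Context context, MediaRecorder recorder) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_INPUTS)) {
            if (device.getType() == AudioDeviceInfo.TYPE_WIRED_HEADSET) {
                // Returns false if the device is not a valid input device.
                return recorder.setPreferredDevice(device);
            }
        }
        return recorder.setPreferredDevice(null); // restore default routing
    }
}
```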

setPreferredMicrophoneDirection

Added in API level 29

public boolean setPreferredMicrophoneDirection (int direction)

Speci몭es the logical microphone (for processing).

Parameters

direction int: Direction constant. Value is MicrophoneDirection.MIC_DIRECTION_UNSPECIFIED,
MicrophoneDirection.MIC_DIRECTION_TOWARDS_USER,
MicrophoneDirection.MIC_DIRECTION_AWAY_FROM_USER, or
MicrophoneDirection.MIC_DIRECTION_EXTERNAL

Returns

boolean true if successful.

setPreferredMicrophoneFieldDimension

Added in API level 29

public boolean setPreferredMicrophoneFieldDimension (float zoom)

Speci몭es the zoom factor (i.e. the 몭eld dimension) for the selected microphone (for processing). The selected
microphone is determined by the use-case for the stream.

Parameters

zoom float: the desired field dimension of microphone capture. Range is from -1 (wide angle), through
0 (no zoom) to 1 (maximum zoom). Value is between -1.0 and 1.0 inclusive

Returns

boolean true if successful.

setPreviewDisplay

Added in API level 1

public void setPreviewDisplay (Surface sv)

Sets a Surface to show a preview of recorded media (video). Call this before prepare() to make sure that the desired
preview display is set. If setCamera(android.hardware.Camera) is used and the surface has already been set to the
camera, the application does not need to call this. If this is called with a non-null surface, the preview surface of the camera
will be replaced by the new surface. If this method is called with a null surface or not called at all, the media recorder will not
change the preview surface of the camera.
Parameters

sv Surface: the Surface to use for the preview

See also:

Camera.setPreviewDisplay(android.view.SurfaceHolder)

setPrivacySensitive

Added in API level 30

public void setPrivacySensitive (boolean privacySensitive)

Indicates that this capture request is privacy sensitive and that any concurrent capture is not permitted.

The default is not privacy sensitive except when the audio source set with setAudioSource(int) is
AudioSource#VOICE_COMMUNICATION or AudioSource#CAMCORDER.

Always takes precedence over default from audio source when set explicitly.

Using this API is only permitted when the audio source is one of:

AudioSource#MIC

AudioSource#CAMCORDER

AudioSource#VOICE_RECOGNITION

AudioSource#VOICE_COMMUNICATION

AudioSource#UNPROCESSED

AudioSource#VOICE_PERFORMANCE

Invoking prepare() will throw an IOException if this condition is not met.

Must be called after setAudioSource(int) and before setOutputFormat(int).

Parameters

privacySensitive boolean: True if capture from this MediaRecorder must be marked as privacy sensitive, false
otherwise.

Throws

IllegalStateException if called before setAudioSource(int)or after setOutputFormat(int)
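The ordering constraint above (after setAudioSource(int), before setOutputFormat(int)) can be sketched directly; this is a minimal sketch requiring the Android framework:

```java
// Sketch (requires the Android framework): marking a microphone capture as
// privacy sensitive, which forbids any concurrent capture of this input.
import android.media.MediaRecorder;

class PrivacySensitiveSetup {
    static void configure(MediaRecorder recorder) {
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setPrivacySensitive(true); // must sit between the two calls around it
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    }
}
```

For the VOICE_COMMUNICATION and CAMCORDER sources this flag already defaults to true, so the explicit call is only needed to override the default.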

setPro몭le

Added in API level 8

public void setProfile (CamcorderProfile profile)

Uses the settings from a CamcorderPro몭le object for recording. This method should be called after the video AND
audio sources are set, and before setOutputFile(). If a time lapse CamcorderPro몭le is used, audio related source or
recording parameters are ignored.

Parameters

profile CamcorderProfile: the CamcorderPro몭le to use

See also:
CamcorderProfile
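A sketch of the ordering the reference above requires (both sources before setProfile(), setProfile() before setOutputFile()); the camera id 0 and output path are hypothetical, and the Android framework is required:

```java
// Sketch (requires the Android framework): applying a device-supported
// quality preset instead of setting format/encoders/sizes individually.
import android.media.CamcorderProfile;
import android.media.MediaRecorder;

class ProfileSetup {
    static void applyHighProfile(MediaRecorder recorder, String outputPath) {
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        // setProfile must come after BOTH sources are set...
        recorder.setProfile(CamcorderProfile.get(0, CamcorderProfile.QUALITY_HIGH));
        // ...and before setOutputFile().
        recorder.setOutputFile(outputPath);
    }
}
```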

setVideoEncoder

Added in API level 3

public void setVideoEncoder (int video_encoder)

Sets the video encoder to be used for recording. If this method is not called, the output file will not contain a video
track. Call this after setOutputFormat() and before prepare().

Parameters

video_encoder int: the video encoder to use.

Throws

IllegalStateException if it is called before setOutputFormat() or after prepare()

See also:

MediaRecorder.VideoEncoder

setVideoEncodingBitRate

Added in API level 8

public void setVideoEncodingBitRate (int bitRate)

Sets the video encoding bit rate for recording. Call this method before prepare(). Prepare() may perform additional
checks on the parameter to make sure whether the specified bit rate is applicable, and sometimes the passed bitRate
will be clipped internally to ensure the video recording can proceed smoothly based on the capabilities of the platform.

Parameters

bitRate int: the video encoding bit rate in bits per second.

setVideoEncodingPro몭leLevel

Added in API level 26

public void setVideoEncodingProfileLevel (int profile, int level)

Sets the desired video encoding pro몭le and level for recording. The pro몭le and level must be valid for the video encoder
set by setVideoEncoder(int). This method can called before or after setVideoEncoder(int) but it must be called
before prepare(). prepare() may perform additional checks on the parameter to make sure that the speci몭ed
pro몭le and level are applicable, and sometimes the passed pro몭le or level will be discarded due to codec capablity or
to ensure the video recording can proceed smoothly based on the capabilities of the platform.
Application can also use the MediaCodecInfo.CodecCapabilities#profileLevels to query applicable
combination of pro몭le and level for the corresponding format. Note that the requested pro몭le/level may not be
supported by the codec that is actually being used by this MediaRecorder instance.

Parameters

profile int: declared in MediaCodecInfo.CodecProfileLevel.

level int: declared in MediaCodecInfo.CodecProfileLevel.


Throws

IllegalArgumentException when an invalid pro몭le or level value is used.

setVideoFrameRate

Added in API level 3

public void setVideoFrameRate (int rate)

Sets the frame rate of the video to be captured. Must be called after setVideoSource(). Call this after
setOutputFormat() but before prepare().

Parameters

rate int: the number of frames per second of video to capture

Throws

IllegalStateException if it is called after prepare() or before setOutputFormat(). NOTE: On some devices that have auto-
frame rate, this sets the maximum frame rate, not a constant frame rate. Actual frame rate will
vary according to lighting conditions.

setVideoSize

Added in API level 3

public void setVideoSize (int width, int height)

Sets the width and height of the video to be captured. Must be called after setVideoSource(). Call this after
setOutputFormat() but before prepare().

Parameters

width int: the width of the video to be captured

height int: the height of the video to be captured

Throws

IllegalStateException if it is called after prepare() or before setOutputFormat()

setVideoSource

Added in API level 3

public void setVideoSource (int video_source)

Sets the video source to be used for recording. If this method is not called, the output file will not contain a video
track. The source needs to be specified before setting recording-parameters or encoders. Call this only before
setOutputFormat().

Parameters
video_source int: the video source to use

Throws

IllegalStateException if it is called after setOutputFormat()

See also:

MediaRecorder.VideoSource

start

Added in API level 1

public void start ()

Begins capturing and encoding data to the file specified with setOutputFile(). Call this after prepare().

Since API level 13, if applications set a camera via setCamera(android.hardware.Camera), the apps can use the
camera after this method call. The apps do not need to lock the camera again. However, if this method fails, the apps
should still lock the camera back. The apps should not start another recording session during recording.

Throws

IllegalStateException if it is called before prepare() or when the camera is already in use by another app.

stop

Added in API level 1

public void stop ()

Stops recording. Call this after start(). Once recording is stopped, you will have to configure it again as if it had just
been constructed. Note that a RuntimeException is intentionally thrown to the application, if no valid audio/video data
has been received when stop() is called. This happens if stop() is called immediately after start(). The failure lets the
application take action accordingly to clean up the output file (delete the output file, for instance), since the output file
is not properly constructed when this happens.

Throws

IllegalStateException if it is called before start()
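The stop() caveat above is usually handled with a try/catch that deletes the malformed file. A minimal sketch (requires the Android framework; the reset-then-release teardown is one common choice, not the only valid one):

```java
// Sketch (requires the Android framework): stopping safely, cleaning up the
// output file if stop() reports that no valid data was captured.
import android.media.MediaRecorder;

import java.io.File;

class StopHandling {
    static void stopSafely(MediaRecorder recorder, File output) {
        try {
            recorder.stop();
        } catch (RuntimeException e) {
            // stop() was called too soon after start(); the file is not a
            // properly constructed container, so discard it.
            output.delete();
        } finally {
            recorder.reset();   // must be reconfigured before any reuse
            recorder.release();
        }
    }
}
```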

unregisterAudioRecordingCallback

Added in API level 29

public void unregisterAudioRecordingCallback (AudioManager.AudioRecordingCallback cb)

Unregister an audio recording callback previously registered with
registerAudioRecordingCallback(java.util.concurrent.Executor,
android.media.AudioManager.AudioRecordingCallback).

Parameters

cb AudioManager.AudioRecordingCallback: non-null callback to unregister. This value cannot be null.
Protected methods

몭nalize

Added in API level 1

protected void finalize ()

Called by the garbage collector on an object when garbage collection determines that there are no more references to
the object. A subclass overrides the finalize method to dispose of system resources or to perform other cleanup.

The general contract of finalize is that it is invoked if and when the Java™ virtual machine has determined that
there is no longer any means by which this object can be accessed by any thread that has not yet died, except as a
result of an action taken by the 몭nalization of some other object or class which is ready to be 몭nalized. The finalize
method may take any action, including making this object available again to other threads; the usual purpose of
finalize, however, is to perform cleanup actions before the object is irrevocably discarded. For example, the 몭nalize
method for an object that represents an input/output connection might perform explicit I/O transactions to break the
connection before the object is permanently discarded.

The finalize method of class Object performs no special action; it simply returns normally. Subclasses of
Object may override this de몭nition.

The Java programming language does not guarantee which thread will invoke the finalize method for any given
object. It is guaranteed, however, that the thread that invokes finalize will not be holding any user-visible
synchronization locks when finalize is invoked. If an uncaught exception is thrown by the finalize method, the
exception is ignored and finalization of that object terminates.

After the finalize method has been invoked for an object, no further action is taken until the Java virtual machine
has again determined that there is no longer any means by which this object can be accessed by any thread that has
not yet died, including possible actions by other objects or classes which are ready to be finalized, at which point the
object may be discarded.

The finalize method is never invoked more than once by a Java virtual machine for any given object.

Any exception thrown by the finalize method causes the finalization of this object to be halted, but is otherwise
ignored.

Content and code samples on this page are subject to the licenses described in the Content License. Java is a registered trademark of Oracle and/or its
a몭liates.

Last updated 2021-04-21 UTC.

Human Tolerance and Crash Survivability

Dennis F. Shanahan, M.D., M.P.H.


Injury Analysis, LLC
2839 Via Conquistador
Carlsbad, CA 92009-3020
USA

ABSTRACT

Aircraft and motor vehicle crashes will continue to occur in spite of all human efforts to prevent them.
However, serious injury and death are not inevitable consequences of these crashes. It has been estimated that
approximately 85 percent of all aircraft crashes are potentially survivable without serious injury for the
occupants of these aircraft. Nevertheless, many deaths and serious injuries occur in crashes that are classified
as “survivable”. This is because the protective systems within the aircraft such as seats, restraint systems, and
cabin strength were inadequate to protect the occupants in a crash that would have otherwise been non-
injurious. In order to maximize survivability in a crash, one must have an understanding of the tolerance of
humans to abrupt acceleration and then design an aircraft that is capable of maintaining its cabin/cockpit
integrity up to the limits of human tolerance. This should be combined with judicious use of energy absorbing
technologies that reduce accelerations experienced by the occupants and by restraint systems that provide
appropriate support and prevent injurious contacts. This paper discusses basic principles of human tolerance
to abrupt acceleration as well as basic concepts of crashworthiness design. Although these concepts are
discussed in the context of helicopter crashes, the same principles apply to other vehicles.

INTRODUCTION
Aircraft and motor vehicle crashes will continue to occur in spite of all human efforts to prevent them.
However, serious injury and death are not inevitable consequences of these crashes. It has been estimated that
approximately 85 percent of all aircraft crashes are potentially survivable without serious injury for the
occupants of these aircraft (1,2,3). This estimate is based upon the determination that 85 percent of all
crashes met two basic criteria. First, the forces involved in the crash were within the limits of human
tolerance without serious injury to abrupt acceleration (1). Second, the structure within the occupant’s
immediate environment remained substantially intact, providing a livable volume throughout the crash
sequence (1). In other words, contrary to popular belief, most aircraft crashes are not “smoking holes”.

Nevertheless, many deaths and serious injuries occur in crashes that were classified as “survivable” by crash
investigators. This is because the protective systems within the aircraft such as cabin strength, seats, and
restraint systems were inadequate to protect the occupants in a crash that would have otherwise been non-
injurious. This is why the definition of survivability of a crash is based solely on aircraft and impact related
factors and not upon the outcome for the occupants of the crashed aircraft. A mismatch between the
survivability of the crash and the outcome for the occupants suggests an inadequacy of protective systems
design or utilization.

Paper presented at the RTO HFM Lecture Series on “Pathological Aspects and Associated
Biodynamics in Aircraft Accident Investigation”, held in Madrid, Spain, 28-29 October 2004;
Königsbrück, Germany, 2-3 November 2004, and published in RTO-EN-HFM-113.

RTO-EN-HFM-113 6-1

It should also be recognized that transmission of forces to the occupants as well as the degree a vehicle
maintains its structural integrity during a crash, the two components of survivability, are determined, in large
part, by the design of the vehicle. The process of establishing the degree to which any particular vehicle will
protect occupants in a crash, or its crashworthiness, involves a series of trade-off decisions during its design
and manufacture. One of the adages of aircraft design, “it is possible to build a brick outhouse, but you can’t
make it fly”, applies to this situation. Increased crashworthiness and advanced crash protection systems
increase both the cost and the weight of the final design and, therefore, potentially decrease profit margins as
well as aircraft performance. The “trade-off” is to provide the right degree of protection for the projected
crash environment without sacrificing too much in terms of cost or performance. Obviously, the bases for
determining the “right” trade-off are frequently the source of considerable debate both during the design phase
and over the lifetime of any vehicle. One recurring error in these trade-off decisions is a lack of
understanding of human tolerance and protection concepts by the decision makers as well as a failure to
adequately determine or estimate the crash environment.

The other factor entering into this process is government design requirements. These requirements are also
the result of considerable compromise made more for political and economic reasons than for their technical
merit. Suffice it to say that Federal design standards should be considered minimal requirements and not
representative of the current state-of-the-art in occupant protection.

To fully understand these issues requires a clear comprehension of the crash environment to which any
particular vehicle is exposed as well as an understanding of human tolerance to acceleration and the basic
principles of occupant crash protection. The purpose of this paper is to introduce the reader to some of the
more basic concepts relating to personal survival in aircraft and other vehicular crashes.

COORDINATE SYSTEMS
1. Injury in a crash is the result of human response to force application to the body. Force and acceleration
are vector quantities comprising both magnitude and direction.
2. For purposes of description, both the aircraft and the seated human are arbitrarily assigned coordinate axes
which are related as follows (Figures 1 and 2):

Aircraft Human
Roll X
Pitch Y
Yaw Z

3. Any applied force or acceleration may be described according to its components directed along each of
the orthogonal axes.
4. Figure 1 is a representation of the aircraft coordinate system commonly used in military and other
government publications and standards. It represents a “left-hand rule” coordinate system (1). It should
be noted that there are other coordinate systems in use, and it is important for the reader to establish which
system is in use for any particular publication or standard.


Figure 1. Aircraft coordinates

5. Figure 2 depicts a commonly used coordinate system applied to the seated human. The reference to
movement of the eyeballs describes the body’s inertial reaction to the applied acceleration, which is
opposite and equal to the applied acceleration (1). It is the body’s inertial response to an acceleration that
results in injury.

Figure 2. Human coordinate system


ACCELERATION
1. Acceleration is defined as the rate of change in velocity of a mass and is frequently stated in units of feet
per second per second, or feet/second² (meters/second²). It is related to force by the familiar equation, F =
ma, where F = force, m = mass, and a = acceleration.
2. Acceleration may be described in units of G which is the ratio of a particular acceleration (a) to the
acceleration of gravity at sea level (g = 32.2 ft/sec² or 9.8 m/sec²) or G = a/g. As a result, crash forces can
be thought of in terms of multiples of the weight of the objects being accelerated.
3. Acceleration values given in various reports generally refer to the acceleration of the vehicle near its
center of mass, unless otherwise specified.
4. Note that a deceleration is simply a negative acceleration.
5. An impact or crash is frequently described in terms of a crash pulse (Figure 3). A crash pulse is a
description of the accelerations occurring in the crash over time, or the acceleration-time history of the
crash. Although the shape of a crash pulse can be highly complex and variable from crash to crash, for
practical purposes, most aircraft and automobile crash pulses may be considered to be generally triangular
in shape. This assumption vastly simplifies calculations related to the crash and provides reasonable
estimates of acceleration exposure for field investigators. Note that in a triangular pulse, the average
acceleration of a pulse is one-half of the peak acceleration.

Figure 3. Triangular Crash Pulse

6. If the velocity of the vehicle at the time of the crash can be estimated and the stopping distance (vehicle
crush plus soil deformation) measured then the acceleration of the vehicle during the crash can be
estimated through a simple formula, assuming a triangular pulse:


a. Peak G = v² / (g × s)

where v = velocity change of the impact,
s = stopping distance, and
g = acceleration of gravity at sea level = 32.2 ft/s² or 9.8 m/s²

b. Average G is equal to one half of the peak G.
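The triangular-pulse estimate can be evaluated directly. The impact numbers below are hypothetical illustration values (a 44 ft/s, i.e. roughly 30 mph, velocity change with 3 ft of combined vehicle crush and soil deformation):

```java
// Worked example of the triangular-pulse estimate above:
// Peak G = v^2 / (g * s); Average G = Peak G / 2.
public class CrashPulse {
    static final double G_FTPS2 = 32.2; // acceleration of gravity, ft/s^2

    static double peakG(double velocityFtPerSec, double stoppingDistanceFt) {
        return (velocityFtPerSec * velocityFtPerSec)
                / (G_FTPS2 * stoppingDistanceFt);
    }

    static double averageG(double velocityFtPerSec, double stoppingDistanceFt) {
        return peakG(velocityFtPerSec, stoppingDistanceFt) / 2.0;
    }

    public static void main(String[] args) {
        double v = 44.0; // hypothetical velocity change, ft/s
        double s = 3.0;  // hypothetical stopping distance, ft
        System.out.printf("Peak G: %.1f%n", peakG(v, s));       // prints "Peak G: 20.0"
        System.out.printf("Average G: %.1f%n", averageG(v, s)); // prints "Average G: 10.0"
    }
}
```

At roughly 20 G peak in the -Gx axis, this hypothetical crash would fall well within the whole-body tolerance limits discussed later in the paper, given adequate restraint.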

TOLERANCE TO ABRUPT ACCELERATION


1. An understanding of human tolerance to abrupt acceleration is essential to developing appropriate
crashworthiness or protective system design standards for any vehicle. If one knows the crash
environment to which a vehicle will be exposed and the limits of human tolerance to acceleration, then
one can rationally develop crashworthiness design requirements to protect occupants in foreseeable
crashes of that vehicle.
2. In general, human tolerance to acceleration is a function of five extrinsic factors (5). These factors are
related to characteristics of the crash pulse and to the design of the seating and restraint systems:
a. Magnitude of the acceleration
Clearly, the higher the acceleration, the more likely it is to cause injury.
b. Direction of the acceleration
The human is better able to withstand accelerations applied along certain axes of the body
(Figures 4 and 6). The direction that is most tolerable is the +Gx or acceleration in the forward
direction (eyeballs in). The least tolerable direction is apparently the Gz or vertical axis (eyeballs
up or down). The lateral axis (Gy) used to be considered the least tolerable, but recent data
derived from crashes of Indianapolis Race Cars indicates that this is probably not the case.
c. Duration of the acceleration
How long one is subjected to an acceleration is one of the determinants of human tolerance. In
general, the shorter the pulse for the same magnitude of acceleration, the more tolerable (Figures
4 and 6). Acceleration tolerance is usually considered to comprise two distinct realms—abrupt
acceleration and sustained acceleration—because of distinctly different human response patterns
to abrupt and sustained accelerations. Most crash impacts have a duration of less than 250
milliseconds or one-quarter of a second, which is considered to be in the realm of abrupt
acceleration. Human tissues and the vascular system respond considerably differently to these
very short duration pulses than they do the more sustained pulses experienced by fighter pilots
and astronauts. Consequently, a 10 G turn or “pull-up” may cause unconsciousness in a pilot and
result in a crash, but a 10 G crash impact may have little effect on the occupant of an automobile
or aircraft.
d. Rate of onset
Rate of onset of acceleration refers to how rapidly the acceleration is applied. It is reflected in the
slope of the curve depicted in figure 3. For a given magnitude and duration of acceleration, the
greater the rate of onset, the less tolerable the acceleration (Figure 5).
e. Position/Restraint/Support
This is one of the most critical factors determining human tolerance to a crash pulse. It refers to
how well the occupant is restrained and supported by his seat and restraint system and the degree
to which the loads experienced in the crash are distributed over his body surface. It is this factor


that is the primary determinant of lack of survival in a survivable crash, if post-crash fire is
excluded.
3. Also of importance in considering human tolerance to abrupt acceleration are various intrinsic factors, or
factors that are directly related to the individual subjected to the impact. These factors are independent of
the extrinsic factors discussed above. They, in large part, explain the observed biological variability of
humans subjected to identical impacts:
a. Age of the subject
Young, healthy adults are best able to withstand impact accelerations. Consequently, a vehicle
designed for military applications may allow more severe accelerations to be experienced by
occupants than a vehicle intended for the general population.
b. Health of subject
Chronic medical conditions such as heart disease and osteoporosis clearly degrade one’s ability
to withstand impact accelerations. History of previous injuries may also adversely affect one’s
tolerance.
c. Sex of subject
There are clearly sex differences in tolerance to acceleration. Women have a different mass
distribution than men as well as differences in muscle mass. This has been of particular concern
for the neck where women have approximately one third less muscle mass than men of
comparable stature.
d. Physical conditioning
Physical conditioning appears to increase one’s tolerance both to abrupt and sustained acceleration,
probably due to increases in muscle mass and strength. Physical conditioning is also considered
to be a factor in recovery from injuries.
e. Other factors
Certainly, there are other intrinsic factors that affect one’s ability to withstand acceleration.
Unfortunately, these various factors will probably remain somewhat nebulous due to the obvious
limitations on performing research in this area.

HUMAN TOLERANCE CURVES (EIBAND CURVES)


1. In 1959, Eiband compiled what was then known about the tolerance of a restrained individual to abrupt
accelerations (1). These data were compiled primarily from the pioneering work of Colonel John Stapp
who performed human tolerance experiments on live volunteers, himself and coworkers, using
acceleration sleds and other acceleration devices. Eiband also included in his summary, human surrogate
experiments that had also been performed. The tolerance curves that Eiband constructed are illustrated
below in Figures 4 and 6.
2. Figure 4 is the Eiband Curve for accelerations in the +Gz axis, analogous to the direction of forces
experienced in an ejection seat or a vertical crash of a helicopter. It is a plot of uniform acceleration of the
vehicle as demonstrated in the lower right-hand corner, versus the duration of the acceleration for pulses
up to approximately 150 milliseconds. As the legend on the graphs notes, these exposures were all
survivable with essentially idealized seat and restraint systems. The graph illustrates that individuals
voluntarily tolerate accelerations up to approximately 18 G without injury, and spinal injury does not
occur below accelerations of approximately 20-25 G.
3. Figure 6 depicts the analogous curve for the –Gx direction, such as would be experienced in a head-on
collision. Note that the tolerance in this axis is over 40 G.
4. Similar curves are available for the other axes. A summary of estimates of human tolerance in all axes is
shown below:


Human Tolerance Limits

Direction of Accelerative Force    Occupant’s Inertial Response    Tolerance Level
Headward (+ Gz)                    Eyeballs Down                   20-25 G
Tailward (- Gz)                    Eyeballs Up                     15 G
Lateral Right (+ Gy)               Eyeballs Left                   20 G
Lateral Left (- Gy)                Eyeballs Right                  20 G
Back to Chest (+ Gx)               Eyeballs Out                    45 G
Chest to Back (- Gx)               Eyeballs In                     45 G
Note: Reference: Crash Survival Design Guide, TR 79-22.
(0.10 Second time duration of crash pulse; full restraint)

Figure 4. Eiband Curve for +Gz


Figure 5. Effect of Rate of Onset

Figure 6. Eiband Curve for -Gx


CLASSIFICATION OF TRAUMATIC INJURY


1. At the risk of oversimplifying the issue, it is useful from a designer’s or investigator’s standpoint to divide
injury suffered in vehicular crashes into mechanical injury and environmental injury. Mechanical
injury is further subdivided into contact injury and acceleration injury (4). Environmental injury refers
to burns, both chemical and thermal, and events such as drowning.
2. In a strict sense both acceleration and contact injuries arise from application of force to the body through
an area of contact with an accelerating surface. In the case of acceleration injury, the application is more
distributed so that the site of force application usually does not receive a significant injury. The site of
injury is distant from the area of application and is due to the body’s inertial response to the acceleration.
An example of acceleration injury is rupture of the aorta in a high sink rate crash. Here the application of
force occurs through the individual’s thighs, buttocks, and back where he is in contact with the seat. The
injury itself is due to shearing forces at the aorta generated from the inertial response of the heart and
aorta to the upward acceleration of the body.
3. A contact injury occurs when a localized portion of the body comes into contact with a surface in such a
manner that injury occurs at the site of the contact (“the secondary collision”). Relative motion between
the body part and the contacting surface is required. An example of this type of injury is a depressed skull
fracture resulting from the head striking a bulkhead.
4. A mixed form of injury may also occur when acceleration generated by a localized contact produces
injury at a site distant from the point of contact as well as at the point of contact. An example of this type
of injury is a contracoup brain injury.
5. Distinction is made between these two basic forms of injury since prevention involves different strategies.
Providing means of absorbing the energy of a crash before it can be transmitted to an occupant prevents
acceleration injury. Structural crush zones, energy absorbing seats, and energy absorbing landing gear all
provide this function.
6. The primary strategy employed to prevent contact injury, on the other hand, is to prevent the contact
between the occupant and a potentially injurious object. This can be accomplished through a variety of
methods including improved occupant restraint or relocation of the potentially injurious object. If contact
cannot be prevented, injury can be mitigated by reducing the consequences of body contact through such
strategies as padding of the object, or making the object frangible so that contact causes the object to yield
before injury occurs.

RESTRAINT ISSUES
1. As discussed above, good restraint is critical to survival in all but the most minor impacts. Restraint
systems serve many important functions including:
a. Preventing ejection of occupants from their seats or the vehicle
b. Preventing the “secondary collision” which refers to body impact with interior structures in the
vehicle such as windshields, controls, and instrument panels due to flailing of the body in response to
accelerations caused by the vehicle collision.
c. Distributing crash loads over a wide portion of the body. This is essential in frontal impacts for
forward facing occupants. Properly designed restraints also ensure these loads are borne by the
portions of the body most able to withstand dynamic forces namely the pelvis, chest, and shoulder
girdle. Restraints that contact the neck or ride up into the abdomen can result in dire consequences for
the occupant in relatively minor impacts.
d. Tightly coupling the body to the vehicle, thus preventing magnification of forces due to the
development of relative velocities between the decelerating vehicle and its occupants (dynamic overshoot).
e. Providing for “ride down” of the crash forces.


2. Prevention of the secondary collision is essential to crash survival since relatively minor crashes can result
in fatal impacts with interior vehicle structures. There are many different types of belt restraint systems
available today, but they mainly involve either pelvic restraint (lap belt) or upper torso restraint (shoulder
belt) or a combination of both (3-point, 4-point, and 5-point systems).
3. Lap belt only configurations (2-point restraints) permit tremendous flail of the upper torso in crashes as
shown in Figure 8. The upper torso flail illustrated in this figure is for a 95th percentile male Army
aviator subjected to a 30 G forward and 30 G lateral impact on an acceleration sled (1). The amount of
excursion depicted is the average of a number of tests. With a head excursion of approximately 40 inches
(102 cm.) in the forward direction, it can be seen why lap belt only restraint will not protect a driver of a
car or pilot of an aircraft from impact with control surfaces or the instrument panel.
4. Figure 9 illustrates how these strike envelopes are significantly reduced for the same impact conditions
when dual harness upper torso restraint and a tie-down strap are added to the system (5-point restraint).
5. An additional advantage offered by upper torso restraint in combination with pelvic restraint is that
multi-belt restraints provide additional distribution of impact loads across the upper torso instead of
focusing the entire load across a 2 to 3 inch strip across the pelvis.
6. Upper torso restraint and tie-down straps also help prevent a situation known as “submarining” from
occurring (Figure 7). This is where the lap belt rides over the pelvic brim and compresses the soft tissues
of the abdomen resulting in serious abdominal and spinal injuries. Submarining occurs due to the pelvis
rotating under the lap belt, usually due to inappropriate location of the lap belt anchors or due to poor
design of the seat bottom or a combination of both. Lap belt only restraints so commonly inflicted serious
injuries on users in automobile crashes that the medical community coined a new term, “the seat belt
syndrome”, to describe the constellation of injuries caused by submarining under the lap belt (6, 9, 10).
7. An exciting new development in helicopter restraint systems is the planned implementation of inflatable
restraint systems in Army helicopters. These systems include air bag systems similar to those used in
automobiles as well as inflatable bags contained in belt restraint systems intended to provide
pretensioning and body support. Such systems are projected to reduce injury in crashes of some
helicopters by as much as 30 percent.

Figure 7. Submarining


Figure 8. Strike Envelope for Lap Belt Restraint


Figure 9. Strike Envelope for 5-Point Restraint.


PERSONAL SURVIVAL IN VEHICULAR CRASHES


1. The above discussion was directed toward human tolerance to impact for relatively ideally restrained
occupants subjected to abrupt accelerations. In practice, occupants are rarely ideally restrained and there
are many other factors besides restraint and acceleration involved in the crash which determine whether a
person is injured or not.
2. In analyzing personal outcome for individuals involved in a particular crash, many investigators use what
is known as the “CREEP Principle”. CREEP is merely an acronym for the five factors considered to
influence personal survival in a crash. Although these factors may not encompass the entire complex set
of factors involved in surviving a crash, they provide an extremely useful framework for conducting a
systematic analysis (5) of personal survival. The five factors are:
a. Container
The potential for survival during a crash is severely compromised if the occupied spaces
collapse or are penetrated by external objects.
b. Restraint
Effective personal restraint is essential for injury prevention in all but the most minor crashes.
Of almost equal importance is restraint of potentially injurious objects within the cabin space
such as cargo and luggage.
c. Environment
This refers to potentially injurious objects located within the strike zone of each occupant.
Ideally, restraint systems should prevent occupant contact with internal structures. If the
strike cannot be prevented in foreseeable crashes, then the object should be relocated, or if
this is not feasible, it should be rendered non-injurious by padding or frangibility.
d. Energy Absorption
In severe crashes, accelerations may exceed human tolerance limits in spite of excellent
restraint and seat systems. Under these circumstances, providing means of managing the
energy of the crash in a controlled manner can greatly increase the survivability envelope.
Automobile designers accomplish this by providing “crush zones” in the front and rear of
automobiles wherein crushing of the vehicle structure absorbs a portion of the energy of the
crash, thus reducing the forces experienced by the occupants. Helicopters tend to crash
mainly in a vertical direction creating very high accelerations in the vertical axis. Rather than
increasing structure in the bottom of helicopters to help absorb energy in these crashes, many
military helicopters are provided with energy absorbing seats. These seats stroke vertically in
a crash, thus absorbing energy and reducing accelerations experienced by the occupants.
Fixed landing gear can also be designed to absorb a considerable portion of the energy in
vertical impacts.
e. Post-Crash Factors
In many crashes, the occupants survive the crash only to succumb to post-crash hazards such
as fire, drowning or natural environmental elements such as heat and cold. These conditions
are frequently aggravated by an inability to egress the crashed aircraft, due to obstructions
within the aircraft, blockage or malfunctioning of emergency exits, or an insufficient number
or size of exits.
3. All of the factors listed above should be considered in the analysis of any vehicular crash. Collectively,
knowledge gained from individual crashes can be used to detect trends and provide information that can
help manufacturers and regulators develop improved means of protecting occupants in a crash.
Unfortunately, such data also documents needless injuries and deaths, which subsequently provide the
“blood priorities” often required before necessary improvements in regulations or design are effected.


4. An excellent example of an effective technology for preventing injury in crashes, one that has been
adopted extremely slowly, is the use of crashworthy or crash resistant fuel systems (CWFS/CRFS) in helicopters.
These are fuel systems that are designed to completely contain fuel in potentially survivable crashes and,
thus, prevent fuel fed post-crash fires. The U.S. Army experience revealed that in Viet Nam era
helicopters approximately 40 percent of fatal injuries in survivable crashes were due to post-crash fires.
This led the Army to develop and install CWFS on most of its helicopters. Since the introduction of
CWFS into Army helicopters, there have only been one or two documented deaths due to thermal injury
in survivable crashes of CWFS equipped helicopters. This was a remarkable achievement, particularly
considering the cost of retrofitting these systems to Army helicopters was relatively low. For example,
the cost of modifying a UH-1H in the mid-1970s was $7,517 with a weight penalty of 160 pounds and a
reduction in fuel capacity of only 11 gallons (7). As effective as these systems are, they have only been
slowly adopted by other military services, and they are rarely installed in civilian helicopters.
5. Other injury prevention technologies developed by the military such as energy absorbing seats and 5-point
restraint systems, though perhaps less effective than CWFS, have also been slow to find their way into
civilian applications. This is due to the reluctance of regulators to mandate their use, the reluctance of
manufacturers to provide them as standard equipment or as an option, and reluctance of consumers to
purchase them when offered as an option.

CONCLUSIONS
The human body is able to withstand remarkable crash forces if provided with appropriate restraint and if
protected from collapsing structure and injurious interior objects. Vehicle designers can extend the envelope
of survivability through intelligent crashworthiness designs that incorporate means of managing the energy of
the crash as well as strengthening the space immediately surrounding occupants. The U.S. Army has proven
that these protective technologies can be economically incorporated into helicopters such as the UH-60 Black
Hawk, and the crash experience of this helicopter and others has proven the efficacy of these
crashworthiness concepts. The same concepts have been very effectively integrated into Indianapolis and
NASCAR racecars with remarkable results. In fact, crash recorders installed in “Indy Cars” indicate that a
properly protected human may be able to withstand accelerations considerably in excess of the 40 G limit
previously determined by Colonel John Stapp and others. Several Indy car drivers have withstood impacts in
excess of 100 G without serious injuries (8).

Some of this technology has been applied to automotive designs and, to a lesser degree, to civilian aircraft.
Nevertheless, vehicles could be made considerably safer and more crashworthy. Unfortunately, progress in
this area will require heightened awareness of both the problems and the possibilities by the general public,
regulators, and legislators. Manufacturers will not be willing to perform the research and development
required to incorporate significantly improved crashworthiness into their vehicles unless consumers make
safety a priority and reveal their willingness to pay for it. Likewise, legislators and regulators will not be
inclined to require significant improvements in crashworthiness or increase research funding in this area
unless the public demands it. The potential for improvements in automobile and aircraft crash safety is
enormous. Hopefully, the impetus for change will occur through education and increased public awareness.

SELECTED REFERENCES
1. Desjardins, S. H., Laananen, D. H., Singley, G. T., III: Aircraft crash survival design guide. Ft. Eustis,
VA, Applied Technology Laboratory, US Army Research and Technology Laboratories (AVRADCOM),
1979; USARTL-TR-79-22A.


2. Haley, J. L., Jr.: Analysis of US Army helicopter accidents to define impact injury problems. In Linear
acceleration of the impact type. Neuilly-sur-Seine, France, AGARD Conference Proceedings No. 88-71,
1971, pp. 9-1 to 9-12.
3. Haley, J. L., Jr., Hicks, J. E.: Crashworthiness versus cost: A study of Army rotary wing aircraft accidents
in period Jan 70 through Dec 71. In Saczalski, K., et al. (eds): Aircraft Crashworthiness. Charlottesville,
University Press of Virginia, 1975.
4. Shanahan, D. F., Shanahan, M. O.: Injury in U.S. Army helicopter crashes October 1979-September 1985.
J Trauma, 29: 415-23, 1989.
5. Shanahan, D. F.: Basic principles of helicopter crashworthiness. Ft. Rucker, AL, U.S. Army
Aeromedical Research Laboratory, 1993; USAARL TR-93-15.
6. Sims, J. K., Ebisu, R. J., Wong, R. K. M., et al.: Automobile accident occupant injuries. J. Coll. Emerg.
Phys., 5: 796-808, 1976.
7. Singley, G. T., III: Army aircraft occupant crash-impact protection. Army R, D & A, 22(4): 10-12, 1981.
8. Society of Automotive Engineers. Indy racecar crash analysis. Automotive Engineering International,
June 1999, 87-90.
9. Traylor, F. A., Morgan, W. W., Jr, Lucero, J. I., et al.: Abdominal trauma from seat belts. Am. Surg. 35:
313-316, 1969.
10. Williams, J. S., Kirkpatrick, J. R.: The nature of seat belt injuries. J. Trauma, 11: 207-218, 1971.

IDEA DUMPSTER
Just a place to dump random thoughts and ideas


FEBRUARY 2, 2017 BY IRHSPUR

Adding Nepali Localization to Android apps


We all know that we can change the language for Android apps through the Settings menu on
our Android devices. When we do so, the language of the various strings used in our app also
changes accordingly. We can add this feature to our app using the Localization feature that
Android provides. This method involves creating a separate res/values folder for each language
that you want to support. These can be generated automatically by using the Translations Editor
in Android Studio, by right-clicking the strings.xml file.
There are a number of tutorials for doing this. But all of these tutorials are based upon
languages that use English characters (a-z, A-Z). For example, in Spanish “Thank You” translates to
“Gracias”, which contains only English characters. However, these tutorials don’t show how to do
this for languages that have their own character sets, like Nepali. After a number of hits and
trials and some tinkering around in Android Studio, I finally figured out the way.
We can accomplish showing Nepali characters in our app by using Nepali Unicode. These Unicode
characters can be obtained from any translation platform, like Google Translate. We simply need to
translate the text, copy the translated text, and paste it into the Translations Editor interface
in Android Studio.

In this interface we need to add the Nepali language from the Add Locale button (globe button) on
the top left corner. This will display all the values from the res/values/strings.xml file. On the
right side we get a column for each language added in the above step. Then all we need to
do is get the translated Unicode text and paste it in the corresponding language column next to
the corresponding Default Value.
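To make this concrete, here is a minimal sketch of what the generated res/values-ne/strings.xml could look like. The string names and Nepali values below are illustrative examples, not taken from the app above:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/values-ne/strings.xml : Nepali (ne) locale; names and values are illustrative -->
<resources>
    <string name="order">अर्डर गर्नुहोस्</string>
    <string name="thank_you">धन्यवाद</string>
    <string name="quantity">परिमाण</string>
</resources>
```

When the device language is set to Nepali, Android automatically resolves these values instead of the defaults in res/values/strings.xml.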
The above screenshot shows the string values change to Nepali when the language of the device
is changed to Nepali. This was the coffee ordering app from the Udacity Android tutorial “User
Input”, which is a really good tutorial for Android.

Hope this was helpful !!!


ONE THOUGHT ON “ADDING NEPALI LOCALIZATION TO ANDROID APPS”

pramesh
FEBRUARY 2, 2017 AT 2:42 PM

Grt tutorial….thnx for doing this


Get Current Location in Android


Last modified on October 24th, 2014 by Joe.

This Android tutorial is to help learn location based services on the Android
platform. Knowing the current location of an Android mobile paves the way for
developing many innovative Android apps that solve people's daily problems.
Developing a location-aware application in Android requires location providers.
There are two types of location providers:

1. GPS Location Provider


2. Network Location Provider

Either one of the above providers is enough to get the current location of the
user's device. However, it is recommended to use both, as each has different
advantages: the GPS provider takes time to get a location indoors, and the
Network Location Provider will not get a location when network connectivity is poor.

Network Location Provider vs GPS Location Provider

Network Location Provider is comparatively faster than the GPS provider in
providing the location co-ordinates.
GPS provider may be very slow indoors and will drain the mobile battery.
Network Location Provider depends on the cell tower and will return the
nearest tower location.
GPS provider will give our location accurately.

Steps to get location in Android


1. Provide permissions in manifest file for receiving location update
2. Create LocationManager instance as reference to the location
service
3. Request location from LocationManager
4. Receive location update from LocationListener on change of
location

Provide permissions for receiving location update


To access current location information through location providers, we
need to set permissions with android manifest file.
<manifest ... >
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
</manifest>

ACCESS_COARSE_LOCATION is used when we use the network location provider
for our Android app, while ACCESS_FINE_LOCATION grants permission for both
providers. The INTERNET permission is a must for use of the network provider.

Create LocationManager instance as reference to the location service

For any background Android service, we need to get a reference in order to
use it. Here, the location service reference is obtained using the
getSystemService() method and stored in a LocationManager instance as follows.

locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);

Request current location from LocationManager


After creating the location service reference, location updates are
requested using the requestLocationUpdates() method of LocationManager.
To this method we pass the type of location provider, the minimum time
between updates (in milliseconds), the minimum distance (in meters), and
the LocationListener object on which the location updates are delivered.

locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 60000, 10, locationListener); // 60000 ms, 10 m (illustrative values)

Receive location update from LocationListener on change of location

LocationListener will be notified based on the minimum distance interval
or the minimum time specified.

Sample Android App: Current Location Finder


This example provides current location update using GPS provider.
Entire Android app code is as follows,

package com.javapapers.android.geolocationfinder;

import android.os.Bundle;
import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.util.Log;
import android.widget.TextView;

public class MainActivity extends Activity implements LocationListener {

    protected LocationManager locationManager;
    TextView txtLat;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        txtLat = (TextView) findViewById(R.id.textview1);

        locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        // provider, minimum time (ms) and minimum distance (m) between updates;
        // the interval values here are illustrative
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 60000, 10, this);
    }

    @Override
    public void onLocationChanged(Location location) {
        txtLat.setText("Latitude:" + location.getLatitude() + ", Longitude:" + location.getLongitude());
    }

    @Override
    public void onProviderDisabled(String provider) {
        Log.d("Latitude", "disable");
    }

    @Override
    public void onProviderEnabled(String provider) {
        Log.d("Latitude", "enable");
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) {
        Log.d("Latitude", "status");
    }
}
The XML files for the layout and the Android manifest are shown below.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity" >

<TextView
android:id="@+id/textview1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:text="@string/hello_world" />

</RelativeLayout>

<?xml version="1.0" encoding="utf-8"?>


<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.javapapers.android.geolocationfinder"
android:versionCode="1"
android:versionName="1.0" >

<uses-sdk
android:minSdkVersion="8"
android:targetSdkVersion="17" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppBaseTheme" >
<activity
android:name="com.javapapers.android.geolocationfinder.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />

<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>

Android Output

Note: If you are running this Android app on an emulator, you need to
send the latitude and longitude to the emulator explicitly.

How to send latitude and longitude to the Android emulator
Open DDMS perspective in Eclipse (Window -> Open
Perspective)
Select your emulator device
Select the tab named emulator control
In ‘Location Controls’ panel, ‘Manual’ tab, give the Longitude and
Latitude as input and ‘Send’.
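As an alternative to DDMS, the coordinates can also be sent through the emulator console. A sketch, assuming the default console port 5554 and illustrative Kathmandu coordinates; note that geo fix takes longitude first, then latitude:

```
telnet localhost 5554
geo fix 85.3240 27.7172
```

Newer SDK tools accept the equivalent shorthand adb emu geo fix 85.3240 27.7172 without opening a telnet session.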


Comments on "Get Current Location in Android"

ANUJ says: 06/05/2013 at 12:09 pm


nice tutorial sir………..

s.selvakumar says: 06/05/2013 at 4:50 pm

Hi Sir,

I am a java developer but not a Android developer. I would like to develop one simple
contact save application in android. Could you please publish step by step development
procedure for contact book application with sqlite db interaction.

Thanks,
s.selvakumar

Naiyer azam says: 06/05/2013 at 9:10 pm

What are all requirement to run anroid applicatio

Ravi says: 10/05/2013 at 1:03 pm

Very nice article. Steps are nicely explained. Thank you Sir

Durai says: 23/05/2013 at 4:51 pm

Hi,
How can i scan barcodes in android application!

prasannakumar says: 03/06/2013 at 12:53 pm

yes u can use barcode scan u will find source code for barcode scan check it in github
repository once i used zxing(a barcode scanner Engine)

Show Map in Android says: 13/06/2013 at 11:38 pm

[…] we studied about how to get current geographic location using an android
application. That Android app will return latitude and longitude pair to represent current
location. Instead of […]

Sachchit says: 04/07/2013 at 12:42 am

Does this app need any type of internet ???

Anonymous says: 16/07/2013 at 12:20 pm

I run the above code.but only hello world is displayed.no location data.Please specify
the code for activity main.xml to display the location

Celso says: 17/07/2013 at 6:21 am

Simple and Efficient..

indrajeet kumar says: 19/07/2013 at 12:55 pm

nice tutorial….

Anonymous says: 21/07/2013 at 5:34 pm

i am new to android ,i tried the same but when i run it shows me “unfortunately app has
stopped working” please help me fix this.

Archie Jain says: 25/07/2013 at 2:21 pm

Very helpful!
parag says: 29/07/2013 at 3:17 pm

Very nice tutorial.Please tell me how can we use these coordinates to locate this
position in the map .thank you

Joe says: 29/07/2013 at 9:28 pm

First you need to display the map fragment (https://javapapers.com/android/show-


map-in-android/), then you need to tile these coordinates on top of it by using
location service / activity.

I will post a tutorial for this exact topic very soon.

Anonymous says: 31/07/2013 at 2:10 am

hi..!!!
its working fine when im using emulator,but it is showing nothing when im uing it in my
phone..!!
just hello world is outputed on the screen..!!

Arefin says: 02/08/2013 at 3:08 am

Hi,
How can i get output in phone instead of emulator because it works in emulator.

jitendra singh yadav says: 03/08/2013 at 11:19 am

i am new to android ,i tried the same but when i run it shows me “unfortunately app has
stopped working” please help me fix this

Tanuja says: 07/08/2013 at 11:23 am

thanks alot for helping by this tutorial..


can u tell me hopw to find the direction and KMS using android google maps

Saket says: 11/08/2013 at 9:08 pm

I’m new to android as well and was getting the same error. What solved it was that I
tried it on device and not the emulator. The emulator kept giving me error even after
sending values through DDMS.

second mistake i made was i changed the name of the package and forgot to change
the names in manifest. Just make sure you are not doing the same.

Roopesh says: 13/08/2013 at 9:13 pm

Worked instantly.. Thanks for the tutorial..

Narendra says: 14/08/2013 at 5:47 pm

how to get date and time from internet in android

Yajneshwar Mandal says: 18/08/2013 at 8:30 am

Very good tutorial …

kumar says: 21/08/2013 at 1:21 pm

Hi,
The latitude and longitude is shown, can you add the direction to the latitude and
longitude like

Latitude 37.42 North


Longitude 122.56 East
Because i needed this for astrology app

Arindam says: 23/08/2013 at 2:56 pm

Thanks Joe,for the post and for GPS Location Provider is works perfectly.

shubham says: 25/08/2013 at 3:15 pm

how to develop a music player stand alone application in java……?

Anonymous says: 10/09/2013 at 12:31 pm

very good tutorial…thanks sir

divya says: 12/09/2013 at 4:28 pm

its working fine in emulator kindly intimate how to display in tab

Harshal says: 13/09/2013 at 12:00 pm

Have you posted any tutorial for network based on network location service yet?

Please give URL,

Thank You

pankaj says: 13/09/2013 at 8:58 pm

very simple dear


1-install eclipse.
2-After debug the program u will the .apk file
inside Bin folder.
3-Copy the .apk file and put inside ur mobile and run it…

Anonymous says: 16/09/2013 at 11:02 am

dear Joe when i install apk file in Phone only


hello world message is display not lat n long find

i an new in android apps please help and suggest how to call .NET web service in this
with post and get method

Ramasamy says: 25/09/2013 at 11:49 am

I think you are testing code by emulator. If yes, you need to set lattitude and longtitude
manually. Its also said by author at the end of this tutorial. Please look at that and then
try. Its running good for me.

Indra says: 25/09/2013 at 2:07 pm

great job amazing work simple easy yet effective

karthik says: 01/10/2013 at 7:20 pm

Nice one.
But This one is not working inside the room.we need the display the current location
using network provider.

leo says: 01/10/2013 at 11:33 pm

Thanks, great tutorial, best i found and works like a charm.


Can you publish one explaning GoogleMaps ?

Regards

L
Prasoon says: 08/10/2013 at 10:13 pm

hi , i manually added langitute and logitute .but nothing showing on my emulator..


it says only
latitute=Location not available
longitude=Location not available

Dilip says: 09/10/2013 at 1:11 pm

This code is not working inside the room.we need the display the current location using
network provider.

Bryn says: 09/10/2013 at 4:58 pm

Great tutorial. I keep having the same problem with certain apps I write – in that they
work on the emulator, but give an error on my cell phone.

App starts up then suddenly gives the message “Unfortunately, GPSapp has stopped”

Any ideas would be greatly appreciated

Galaxy S4 cell, 4.2.2

Bryn says: 09/10/2013 at 5:01 pm

Apologies – I see this question has been raised a few times before. I have tried the
suggestions above (eg make sure package name is same in program and manifest etc)
but still no joy. Thanks Bryn

Joe says: 09/10/2013 at 6:47 pm

Bryn,

There can be numerous reasons to a crash. Best way to find the reason is to get
the LogCat logs when it crashes. Plug the phone in USB and set it as target device.
Then launch the app in phone and you can get the logs on this error.

grimes says: 13/10/2013 at 4:12 pm

all are incredible !

Jan Zitniak says: 13/10/2013 at 8:52 pm

For anyone who shows “Hello World” on the mobile phone instead current location
change MainActivity.java to following:

package com.javapapers.android.geolocationfinder;

import android.os.Bundle;
import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.widget.TextView;
import android.util.Log;

public class MainActivity extends Activity implements LocationListener {

    // The minimum distance to change updates, in meters
    private static final long MIN_DISTANCE_CHANGE_FOR_UPDATES = 10; // 10 meters
    // The minimum time between updates, in milliseconds
    private static final long MIN_TIME_BW_UPDATES = 1000 * 60 * 1; // 1 minute

    protected LocationManager locationManager;
    protected Context context;
    protected boolean gps_enabled, network_enabled;
    TextView txtLat;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        txtLat = (TextView) findViewById(R.id.textview1);

        locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        // getting GPS status
        gps_enabled = locationManager.isProviderEnabled(LocationManager.GPS_PROVIDER);
        // getting network status
        network_enabled = locationManager.isProviderEnabled(LocationManager.NETWORK_PROVIDER);

        if (gps_enabled) {
            locationManager.requestLocationUpdates(
                    LocationManager.GPS_PROVIDER, MIN_TIME_BW_UPDATES,
                    MIN_DISTANCE_CHANGE_FOR_UPDATES, this);
        } else if (network_enabled) {
            locationManager.requestLocationUpdates(
                    LocationManager.NETWORK_PROVIDER, MIN_TIME_BW_UPDATES,
                    MIN_DISTANCE_CHANGE_FOR_UPDATES, this);
        }
    }

    @Override
    public void onLocationChanged(Location location) {
        txtLat = (TextView) findViewById(R.id.textview1);
        txtLat.setText("Latitude:" + location.getLatitude()
                + ", Longitude:" + location.getLongitude());
    }

    @Override
    public void onProviderDisabled(String provider) {
        Log.d("Latitude", "disable");
    }

    @Override
    public void onProviderEnabled(String provider) {
        Log.d("Latitude", "enable");
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) {
        Log.d("Latitude", "status");
    }
}

Shashank Dixit says: 19/11/2013 at 1:45 pm

In my code, onLocationChanged() method is not getting called while I am running my


app on mobile. What may be the reason behind it?

Kannan says: 02/12/2013 at 1:39 pm

How to send latitude and longitude to the Android emulator:

1. Open the DDMS perspective in Eclipse (Window -> Open Perspective).
2. Select your emulator device.
3. Select the tab named Emulator Control.
4. In the 'Location Controls' panel, 'Manual' tab, enter the Longitude and Latitude as input and press 'Send'.

Alternatively, you can load a .kml file into the emulator and the Location Controls will take the coordinates from it automatically.

Edith says: 16/12/2013 at 4:16 pm

Thanks for a nice tutorial. Kindly help me with how to save the obtained coordinates (latitude & longitude) to a MySQL database.

Isuru Chathuranga says: 17/12/2013 at 1:52 pm

Thank you sir…

This is very important to me. Please carry on with your tutorials.

Khalid Sweeseh says: 20/12/2013 at 11:42 am


I have seen some applications that are able to retrieve the VLR Global Title, something like the node number you are latched onto; for example, in Jordan it will be something like 96279123456. But from the API I am not able to get to that level. Any idea how that could have been done?

Edith says: 20/12/2013 at 11:53 am

Can anyone please help me with automatic storage of the obtained coordinates (latitude & longitude from this tutorial) in a MySQL database? This should be done without requiring the phone user to press a send button.
May God bless you for your kind assistance.
Happy coding!!
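One common route for the question above is to POST each fix to a server-side script that inserts it into MySQL. Below is a minimal plain-Java sketch of building the form-encoded request body only; the field names (lat, lng) and any endpoint are hypothetical, not from the tutorial:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: form-encode a latitude/longitude pair for an HTTP
// POST to a server-side script that writes to MySQL. Field names are
// made up for illustration.
public class CoordinatePost {
    public static String formBody(double lat, double lng) {
        return "lat=" + URLEncoder.encode(Double.toString(lat), StandardCharsets.UTF_8)
             + "&lng=" + URLEncoder.encode(Double.toString(lng), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(formBody(27.7172, 85.3240)); // lat=27.7172&lng=85.324
    }
}
```

For the "without pressing a send button" part, this body would typically be sent from inside onLocationChanged, so every delivered fix is uploaded automatically.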

Edith says: 28/12/2013 at 2:40 pm

Friends, is there no one who can give a hint on this??? Please, your assistance or some links/books to refer to would help me a lot.

sachin thawari says: 31/12/2013 at 1:28 pm

Hi Sir,

I want to find my friend's location through Google Maps.

So what kind of permission do I have to use in my manifest file? Is this possible?

I am working on a friend-finder/locator app.

Ashik says: 09/01/2014 at 2:50 pm

Sir, this code runs well in the emulator but not on a phone device; by default it shows "hello world". How can I fix it? Many people are getting the same problem, please give us a proper solution.

Deepa says: 16/01/2014 at 3:50 pm

Its is really awesome tutorial Thank You :-)

Vellaiyappan says: 25/01/2014 at 10:54 am

Sir,
I am using GPS location finding in my application. How do I store the result in an SQLite database? I am confused, please help me, sir.

I need the source code.

Android Location using GPS, Network Provider says: 30/01/2014 at 12:00 pm

[…] said all the above, I just noticed that I have written an Android GPS tutorial already.
Though I feel like a buffoon, somehow I have to manage now. Its okay, it will do no harm
[…]

Tom says: 06/02/2014 at 3:01 am

Thank you

Sreejith says: 07/02/2014 at 6:16 pm

Thank you sir..


it working well..

pratahm says: 11/02/2014 at 12:54 pm

gives me error saying “textview1 cannot be resolved or is not a field”.


thanks.
soheil says: 12/02/2014 at 5:28 pm

hello
nice tutorial
You put all your code here, but why did you not post the Android project itself? thanks a lot

robin says: 13/02/2014 at 5:48 pm

Hi,
I want to put a marker at the current location and keep updating the marker to the new position. I am using

marker1 = mGoogleMap.addMarker(new MarkerOptions()
        .position(latLng)
        .title("San Francisco")
        .snippet("Population: 776733"));

Using this, every time a new marker is added rather than the same marker being updated to the new location. Please help me achieve this.

juliya says: 21/02/2014 at 5:30 pm

hi, I tried your code but I was confused by the output.

The output shows nothing (blank screen). What to do?
Please help me.

Markers–Google Maps Android API v2 says: 25/02/2014 at 6:07 pm

[…] add a marker to the current location, we need to know the LatLng. Refer the get
current location in Android tutorial to find the current LatLng and using that we can add
the marker easily as shown […]

Manoj says: 27/02/2014 at 6:36 pm

Nice Tutorial , it really helps me …thanks.

ashish says: 04/03/2014 at 11:37 pm

thanks Sir,

Noha says: 08/03/2014 at 3:28 am

Can I get ready-made code from you?

Or can I contact you to help me discover the error in my code?

namrata says: 10/03/2014 at 3:43 pm

hello sir,
I am getting a NullPointerException error on the location service, please can you help me.

Anand Kumar says: 11/03/2014 at 1:06 pm

Superb Tutorial Sir. Thanks a lot.

Prakash says: 17/03/2014 at 12:05 pm

Do we need a physical Android mobile to read the position, or is an AVD enough?

sandeep singh says: 02/04/2014 at 5:15 pm

I am new to Android. I tried the same, but when I run it, it shows me only a plain map. I also want to show the marker coordinates.

Stephen Garside says: 04/04/2014 at 11:33 am


Hi, I am new to Android development and was looking for a simple way to get the device location. The official Google example is complete overkill, and your example is perfect: nice and easy to follow and understand without loads of code bloat! Thanks for sharing.

Anonymous says: 08/04/2014 at 4:42 am

WOW!!! It really works! Thanks man!

VEry good job!

sandeep says: 23/04/2014 at 7:37 pm

I need to develop an Android app where the user activates GPS, we get the user's address, and we have a store list to search for the location nearest the user and show the result. Can someone tell me what steps I have to take for this app?

fadha says: 16/05/2014 at 6:54 pm

hi sir, nice tutorial, it's working, but sir I want to get the current location name as well. How could that be possible?

fadha says: 16/05/2014 at 6:55 pm

hi sir, I need the current location name to be shown once it gets the current longitude and latitude. How is that possible???

vshan says: 18/05/2014 at 1:50 am

hi, I need to implement OnClickListener along with LocationListener. I am a newbie and I am getting the following error when I try to run:

Error:(23, 8) java: com.example.sid.MainActivity is not abstract and does not override abstract method onProviderDisabled(java.lang.String) in android.location.LocationListener

my syntax: public class MainActivity extends Activity implements OnClickListener, LocationListener

can anyone please help?

thanks in advance.

kadev says: 27/05/2014 at 2:56 pm

thank you sir it made my day

Dhruvang Joshi says: 05/06/2014 at 1:09 pm

It's working fine, but we need only the list of cities within 100 km of the current location. How do we get that? Can you help me?

sun says: 01/07/2014 at 2:45 pm

the application is stopped its not even opening

Anonymous says: 04/07/2014 at 3:24 pm

nice tutorial sir


it will help me lot

Angad says: 28/07/2014 at 2:55 pm

I could not understand the 2 parameters (time and distance) used in the requestLocationUpdates method. What I understood is that after x time or x meters the user location is requested, but why both? And if we give both parameters, which one is used (the time interval or the distance)?

Also, will the onLocationChanged method be called only when we request a location, or will it be triggered every time the user changes location without requestLocationUpdates even being called again?

Please clarify my doubts.
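Roughly, updates arrive no more often than minTime and only after the device has moved at least minDistance, so the two parameters act as gates rather than alternatives. A toy 1-D model of that gating (a hypothetical illustration of the documented behaviour, not the platform's actual implementation):

```java
// Simplified illustration of how minTime and minDistance can gate updates.
// This is NOT the Android platform's implementation; the class is a
// hypothetical 1-D model just to show why both parameters matter.
public class UpdateFilter {
    private final long minTimeMs;      // cf. MIN_TIME_BW_UPDATES
    private final float minDistanceM;  // cf. MIN_DISTANCE_CHANGE_FOR_UPDATES
    private long lastTimeMs;
    private float lastPosM;
    private boolean first = true;

    public UpdateFilter(long minTimeMs, float minDistanceM) {
        this.minTimeMs = minTimeMs;
        this.minDistanceM = minDistanceM;
    }

    // An update is delivered only when BOTH thresholds are satisfied:
    // enough time has elapsed AND the position moved far enough.
    public boolean shouldDeliver(long timeMs, float posM) {
        boolean timeOk = timeMs - lastTimeMs >= minTimeMs;
        boolean distOk = Math.abs(posM - lastPosM) >= minDistanceM;
        if (first || (timeOk && distOk)) {
            first = false;
            lastTimeMs = timeMs;
            lastPosM = posM;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        UpdateFilter f = new UpdateFilter(1000, 10f);
        System.out.println(f.shouldDeliver(0, 0f));     // true: first fix
        System.out.println(f.shouldDeliver(500, 50f));  // false: too soon
        System.out.println(f.shouldDeliver(2000, 52f)); // true: both thresholds met
    }
}
```

As for the second doubt: onLocationChanged is invoked by the platform for every qualifying update while the listener remains registered; you do not re-call requestLocationUpdates for each fix.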

Anonymous says: 13/08/2014 at 12:23 pm

on launching application helloworld only displaying

Anonymous says: 13/08/2014 at 12:27 pm

The onLocationChanged method is not being called when I run this application. I added debug statements to confirm that. Please give a solution.

Heir says: 22/08/2014 at 5:16 pm

Helpful man and thx

Elvis says: 23/08/2014 at 4:57 pm

I installed the app on my phone, but it is only displaying "hello world", no latitude, no longitude.

Shanmuga Sundaram says: 23/08/2014 at 7:53 pm

Get Current Location in Android is not working…

Please help me, sir!

The above code which you uploaded at the beginning is not working nowadays; please re-upload new code for getting the current latitude & longitude.

Shanmuga Sundaram says: 24/08/2014 at 7:53 am

Sir, it works…

While sitting in my room it doesn't work. Just move around after installing the app and it works; it also takes some time to connect to the GPS.

Thank you once again…

Jitu Varghese says: 01/09/2014 at 7:26 pm

How can we detect a friend's (another user's) location from our app?

Any suggestions…

Jitendra kushvaha says: 19/09/2014 at 10:38 am

Thanks you sir

Navin says: 30/09/2014 at 8:05 pm

I am not able to use this app on my phone. When I install and open it, it shows an error message like "unexpected error occurred". What should I do about it? Help me with this.

PARTH PATEL says: 13/10/2014 at 6:14 am

Hi,

This tutorial is nice. App is working well in my mobile. But when I send coordinates from
DDMS app is not reacting.

-Parth
kavinraj says: 15/10/2014 at 10:07 am

It's working fine in the emulator but not on my phone; it only displays hello world. How do I solve this problem?

Android Get Address with Street Name, City for Location with Geocoding says:

16/10/2014 at 4:47 pm

[…] mobile location. We have got GPS or network provider in the Android device and we
can use that to get the current location in terms of latitude and longitude. Using the
latitude and longitude we can get the address by […]

Comments are closed for "Get Current Location in Android".


© 2008 - 2019 Javapapers


Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics.
Calculate speed from accelerometer Ask Question

Asked 6 years, 4 months ago Active 4 years, 10 months ago Viewed 44k times

I am trying to use the accelerometer on my mobile device (a smart watch, to be specific) to calculate a person's arm swing speed.

The data returned from the accelerometer is in m/s².

Since the acceleration of a person's arm is not constant, I cannot use the equation v = v₀ + at to calculate the velocity.

I have never been good at physics, so how do I calculate speed with varying acceleration?

acceleration velocity speed

Create PDF in your applications with the Pdfcrowd HTML to PDF API PDFCROWD
Share Cite Improve this question Follow, asked Dec 14 '14 at 1:20 (tyczj), edited Dec 18 '14 at 14:45 (DarioP)

You have to calculate a numerical integral. In its simplest form that's just a sum of the acceleration samples multiplied by the time step. Having said that, your accelerometer measures acceleration along three axes, which are rotating relative to an actual physical reference frame. In addition, the accelerometer cannot distinguish between gravity (−g in the vertical direction) and an acceleration of 1 g towards the floor. Compensating for these things and for the error components of the accelerometer is a hard computational problem.
– CuriousOne Dec 14 '14 at 1:29

@CuriousOne I realize there are going to be margins of error, so I am not looking for anything extremely accurate; I am going to let the user know that it is an estimated speed – tyczj Dec 14 '14 at 1:34

The problem is not one of small errors; it's one of a ten-times-larger signal, dependent on the orientation of your device, being overlaid on what you are trying to measure. Unless you already have code that corrects for that, the problem is quite hard. –
CuriousOne Dec 14 '14 at 1:44

@CuriousOne - is g a problem? In a lift, when I go down, my stomach feels the downward acceleration even when it is less than g. I guess the accelerometer in the phone should be able to feel that as well. What do you think? – tom Dec 14 '14 at 3:25

@tom: The problem is not that the accelerometer cannot measure gravitational acceleration; the problem is that it can't tell the difference between gravitational acceleration and an actual acceleration in the inertial system that you want the measurement to be done in. The deeper reason for that is that the surface of the earth is NOT an inertial system; we just like to pretend that it is.

And if you look at the error propagation of a one-g constant acceleration overlaid vectorially on a small (0.01–0.1 g) acceleration that we typically are interested in, the resulting errors are huge. – CuriousOne Dec 14 '14 at 3:29

Show 4 more comments

2 Answers Active Oldest Votes

You need to integrate acceleration to get the velocity.

v(t) = ∫₀ᵗ a dt

There are a number of ways of doing this numerically.

I assume that you get these readings regularly with a spacing of δt , for example δt = 100ms or something like that.

About the simplest way to do it is

v(t) = v(0) + ∑ a × δt

where v(t) is the velocity at time t. But there are more sophisticated ways of doing it; I will not repeat them here, but
you might want to look at using Simpson's rule, which is described here.

The problem is complicated by velocity being three-dimensional, so you need to integrate in each of the three
dimensions x, y and z separately.

It depends how the phone gives you the information about the acceleration, but if you get ax , ay and az at regular
intervals then you can do the following...

vx += ax * dt;
vy += ay * dt;
vz += az * dt;

If you get acceleration as a raw number and an angle, then you will have to convert, I guess from polar coordinates, to x, y, z
components to be able to add them up.
Total speed, |v|, is of course given by |v| = √(vx² + vy² + vz²)

I would, of course, try to start at v = 0
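The summation described in this answer can be sketched in plain Java; the class, method names and sample data below are illustrative, not from the answer:

```java
// Plain-Java sketch of the rectangle-rule integration described above.
// The class/method names and the sample data are illustrative only.
public class VelocityIntegrator {

    // samples[i] = {ax, ay, az} in m/s^2, taken at a fixed spacing of dt seconds
    public static double[] integrate(double[][] samples, double dt) {
        double vx = 0, vy = 0, vz = 0; // start at rest, v = 0
        for (double[] a : samples) {
            vx += a[0] * dt;
            vy += a[1] * dt;
            vz += a[2] * dt;
        }
        return new double[] { vx, vy, vz };
    }

    // |v| = sqrt(vx^2 + vy^2 + vz^2)
    public static double speed(double[] v) {
        return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    }

    public static void main(String[] args) {
        // Constant 2 m/s^2 along x for 1 s (20 samples, 50 ms apart)
        double[][] samples = new double[20][3];
        for (double[] s : samples) s[0] = 2.0;
        System.out.println(speed(integrate(samples, 0.05))); // ~2.0 m/s
    }
}
```

Note that integrated sensor noise accumulates in v with every step, which is why the drift mentioned in the comments matters so much.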

CuriousOne raises a really interesting point about g. The best way to test this is to code it and try it: shake the
phone and see if the velocity returns to zero when it is at rest after shaking or moving it…

... can you post your results if you do this and try it out?

Another issue is twisting the phone and twisting the accelerometer - this would require you to think about angular
acceleration etc., but the basic principles outlined here would be the same if you needed to think about angles.

Share Cite Improve this answer Follow edited Jun 15 '16 at 21:25 answered Dec 14 '14 at 3:55
tom
6,707 1 19 30

Yes, the information comes in very fast, about 20 samples every 1000 ms. The information from the sensor is in the form of ax, ay, az. I have already done some testing to see how much drift there is in the sensor when still, and there is a fair amount of drift. I am going to put in a calibration of sorts to try to filter out the noise – tyczj Dec 14 '14 at 4:52

Add a comment

That's why, when you want to numerically integrate the vertical acceleration, you have to subtract 1 g from the vertical acceleration you measure from the accelerometer.
Share Cite Improve this answer Follow answered Jul 29 '15 at 16:00
Simone
21 1

Why do you need to subtract 1g? – Kyle Kanos Jul 29 '15 at 16:09

Because 1 g does not produce any "visible" velocity, but as said above, the accelerometer can't distinguish between gravity and other accelerations. So you subtract 1 g, knowing that any other acceleration seen apart from that one will have produced an actual velocity pointing in some direction: if the acceleration measured is less than 9.8, then the velocity is negative, so pointing towards the floor; if it's more than 9.8, it will be a positive velocity pointing towards the ceiling/sky. (Conventionally the 1 g vector points from the floor to the ceiling.) Hope I've been clear enough! – Simone Jul 30 '15 at 10:54
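The subtraction described in this answer can be sketched in plain Java, assuming (unrealistically) that the device's vertical axis stays aligned with gravity; the class name and sample values are illustrative:

```java
// Sketch of the gravity subtraction discussed above: remove 1 g from the
// measured vertical acceleration before integrating, so that a device at
// rest integrates to zero velocity. Assumes the z axis stays vertical.
public class GravityCompensation {
    static final double G = 9.81; // m/s^2, assumed local gravity

    public static double verticalVelocity(double[] measuredAz, double dt) {
        double vz = 0;
        for (double az : measuredAz) {
            vz += (az - G) * dt; // subtract the constant gravity component
        }
        return vz;
    }

    public static void main(String[] args) {
        double[] atRest = new double[100];
        java.util.Arrays.fill(atRest, 9.81); // an accelerometer at rest reads ~1 g
        System.out.println(verticalVelocity(atRest, 0.01)); // prints 0.0
    }
}
```

In practice the device rotates, so the gravity vector has to be tracked (e.g. with a low-pass filter or a rotation sensor) rather than assumed constant along one axis.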

Add a comment

How to set timer in android? Ask Question

Asked 10 years, 3 months ago   Active 2 months ago   Viewed 540k times

Can someone give a simple example of updating a textfield every second or so?

I want to make a flying ball and need to calculate/update the ball coordinates every second, that's why I need some sort of a timer.

I don't get anything from here.

android timer

Share Improve this question Follow, asked Jan 4 '11 at 19:45 (Awais Aslam), edited Oct 30 '19 at 7:50 (SERG)

This class may help: developer.android.com/reference/android/os/CountDownTimer.html – Paramvir Singh Nov 18 '13 at 9:05

This will help. sampleprogramz.com/android/chronometer.php – Ashokchakravarthi Nagarajan Nov 16 '15 at 14:19

Add a comment

22 Answers   Active Oldest Votes

ok since this isn't cleared up yet there are 3 simple ways to handle this. Below is an example showing all 3 and at the bottom is an example showing just the method I believe is preferable. Also remember to clean up your tasks in onPause, saving state if necessary.

import java.util.Timer;
import java.util.TimerTask;
import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.os.Handler.Callback;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

public class main extends Activity {
    TextView text, text2, text3;
    long starttime = 0;

    //this posts a message to the main thread from our timertask
    //and updates the textfield
    final Handler h = new Handler(new Callback() {

        @Override
        public boolean handleMessage(Message msg) {
            long millis = System.currentTimeMillis() - starttime;
            int seconds = (int) (millis / 1000);
            int minutes = seconds / 60;
            seconds = seconds % 60;

            text.setText(String.format("%d:%02d", minutes, seconds));
            return false;
        }
    });

    //runs without a timer by reposting itself
    Handler h2 = new Handler();
    Runnable run = new Runnable() {

        @Override
        // ... (the rest of this first example was cut off in the captured page)

the main thing to remember is that the UI can only be modified from the main UI thread so use a handler or activity.runOnUiThread(Runnable r);

Here is what I consider to be the preferred method.

import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

public class TestActivity extends Activity {

    TextView timerTextView;
    long startTime = 0;

    //runs without a timer by reposting this handler at the end of the runnable
    Handler timerHandler = new Handler();
    Runnable timerRunnable = new Runnable() {

        @Override
        public void run() {
            long millis = System.currentTimeMillis() - startTime;
            int seconds = (int) (millis / 1000);
            int minutes = seconds / 60;
            seconds = seconds % 60;

            timerTextView.setText(String.format("%d:%02d", minutes, seconds));

            timerHandler.postDelayed(this, 500);
        }
    };

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.test_activity);
        // ... (the rest of this example was cut off in the captured page)

Share Improve this answer Follow, edited Dec 2 '13 at 22:09, answered Jan 4 '11 at 21:43 (Dave.B)

@Dave.B, thanks for the great example. Are there any advantages/disadvantages to using one method vs the others you have outlined? – Gautam Sep 16 '12 at 4:45

@Gautam I believe all the methods above perform about the same. I personally prefer the handler method described above with the run Runnable and h2 Handler as it is the one prescribed by the android developer site and in my opinion also the most elegant. – Dave.B Sep 17 '12 at 19:23

It would be nice to have your preferred method separated from the rest of the code. Like you could have one example showing your preferred way and another showing the alternatives. Having all three methods together makes it harder to understand what's going on (especially for an android newbie like me). Probably asking too much though :) – Jesse Aldridge Oct 10 '13 at 22:30

@JesseAldridge Good idea. I went ahead and added code with the preferred method only. – Dave.B Oct 17 '13 at 17:29

@bluesm I honestly just didn't think about it but yes that would work fine. – Dave.B Dec 5 '13 at 18:41

Show 13 more comments

It is simple! You create a new timer.

Timer timer = new Timer();

Then you extend the timer task

class UpdateBallTask extends TimerTask {
    Ball myBall;

    public void run() {
        //calculate the new position of myBall
    }
}

And then add the new task to the Timer with some update interval

final int FPS = 40;


TimerTask updateBall = new UpdateBallTask();
timer.scheduleAtFixedRate(updateBall, 0, 1000/FPS);

Disclaimer: This is not the ideal solution. This is a solution using the Timer class (as asked by the OP). In the
Android SDK, it is recommended to use the Handler class instead (there is an example in the accepted
answer).
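The Timer/TimerTask pattern from this answer can be tried outside Android as well. A self-contained sketch (tick count, period and names are arbitrary, and the counter stands in for the ball-position update):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

// Plain-Java sketch of the scheduleAtFixedRate pattern above, with no
// Android dependencies. The interval and tick count are illustrative.
public class BallTimerSketch {
    public static int runTicks(int ticks, long periodMs) {
        final int[] count = {0};
        final CountDownLatch done = new CountDownLatch(ticks);
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                count[0]++;          // e.g. update the ball coordinates here
                done.countDown();
            }
        }, 0, periodMs);
        try {
            done.await();            // wait until the requested number of ticks ran
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        timer.cancel();              // always cancel a Timer you no longer need
        return count[0];
    }

    public static void main(String[] args) {
        System.out.println(runTicks(5, 20)); // at least 5 ticks observed
    }
}
```

The cancel() call matters: a java.util.Timer owns a background thread that otherwise keeps running after the task is no longer wanted.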

Share Improve this answer Follow edited Nov 14 '13 at 20:21 answered Jan 4 '11 at 19:56
fiction
10.7k 5 45 72

1 if you read the post above you'll see why this isn't an ideal solution – Dave.B Jan 4 '11 at 20:06

3 Of course. The OP wanted to do it with TimerTask, which I will not recommend to be used in game. – fiction
Jan 4 '11 at 20:09

4 Huh? The OP didn't specify how they wanted it done. They linked to an article that used TimerTask, but they
didn't request that it be done that way. – ToolmakerSteve Sep 12 '14 at 16:22

1 Helped alot, Thanks @fiction – Naveed Ahmad Oct 20 '15 at 12:04

1 great answer simple to follow. – Maduro Oct 25 '15 at 20:31

Show 2 more comments

If you also need to run your code on the UI thread (and not on the timer thread), take a look at the blog:
http://steve.odyfamily.com/?p=12

public class myActivity extends Activity {
    private Timer myTimer;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        setContentView(R.layout.main);

        myTimer = new Timer();
        myTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                TimerMethod();
            }
        }, 0, 1000);
    }

    private void TimerMethod()
    {
        //This method is called directly by the timer
        //and runs in the same thread as the timer.

        //We call the method that will work with the UI
        //through the runOnUiThread method.
        this.runOnUiThread(Timer_Tick);
    }

    private Runnable Timer_Tick = new Runnable() {

        public void run() {

            //This method runs in the same thread as the UI.
            // ... (the rest of this example was cut off in the captured page)
Share Improve this answer Follow answered Sep 24 '12 at 15:16


Meir Gerenstadt
3,499 1 20 20

For the sake of completeness you could perhaps mention what to do to stop the timer, and maybe restart it. (I
found the necessary info here: stackoverflow.com/questions/11550561/… ) – RenniePet Apr 25 '14 at 14:39

4 Is there any reason you cant just call runOnUIThread directly from TimerTask run method? Seems to work
fine and removes another level of nesting. – RichieHH Jul 19 '14 at 22:06

Sure, it is only a didactic method to understand all the steps. I suggest this standard to have a readable code.
– Meir Gerenstadt Jul 20 '14 at 11:58

Add a comment

If you just want to schedule a countdown until a time in the future, with regular notifications at intervals along the way, you can use the CountDownTimer class, which has been available since API level 1.
new CountDownTimer(30000, 1000) {
public void onTick(long millisUntilFinished) {
editText.setText("Seconds remaining: " + millisUntilFinished / 1000);
}

public void onFinish() {


editText.setText("Done");
}
}.start();

Share Improve this answer Follow edited Nov 27 '18 at 9:09 answered Aug 25 '14 at 14:16
Ahmed Hegazy
11.4k 5 34 61

3 CountDownTimer only makes sense if you know that you want it to go away after several executions. This is
not a typical, nor particularly flexible, approach. More common is the timer that repeats forever (which you
cancel when no longer needed), or the handler that runs once, and then starts itself again if will be needed
again. See other answers. – ToolmakerSteve Sep 12 '14 at 15:42

1 You are totally right. From the class name it provides one time count down timer ticking until finish and of
course it uses Handler in its implementation. – Ahmed Hegazy Sep 13 '14 at 13:51

How to show milliseconds also? In the format SS:MiMi ? Thanks – Ruchir Baronia Dec 7 '15 at 12:38
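For the milliseconds question above, the remaining millis value can be split directly; a small plain-Java sketch, where seconds:hundredths is one reading of "SS:MiMi":

```java
// Sketch for formatting a millisecond countdown value as seconds:hundredths,
// one interpretation of the "SS:MiMi" format asked about above.
public class MillisFormat {
    public static String format(long millisUntilFinished) {
        long seconds = millisUntilFinished / 1000;
        long hundredths = (millisUntilFinished % 1000) / 10;
        return String.format("%02d:%02d", seconds, hundredths);
    }

    public static void main(String[] args) {
        System.out.println(format(30250)); // prints 30:25
    }
}
```

Inside CountDownTimer, this would be called from onTick with the millisUntilFinished argument.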

Add a comment

This is some simple code for a timer:

Timer timer = new Timer();
TimerTask t = new TimerTask() {
    @Override
    public void run() {
        System.out.println("1");
    }
};
timer.scheduleAtFixedRate(t, 1000, 1000);

Share Improve this answer Follow edited Oct 10 '18 at 8:56 answered Jun 23 '16 at 13:33
Pochmurnik Jevgenij Kononov
708 5 15 30 956 11 10

what about if we want it only run at 04:00 using that Timer object? – gumuruh Jan 14 at 22:37

Add a comment

I think you can do it in the Rx way, like:

timerSubscribe = Observable.interval(1, TimeUnit.SECONDS)
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new Action1<Long>() {
            @Override
            public void call(Long aLong) {
                //TODO do your stuff
            }
        });

And cancel this like:

timerSubscribe.unsubscribe();

Rx Timer http://reactivex.io/documentation/operators/timer.html

Share Improve this answer Follow edited Dec 28 '15 at 11:11 answered Dec 27 '15 at 17:13
Will
141 1 6

Add a comment

Because this question still attracts a lot of users from Google search (about Android timers), I
would like to insert my two cents.

First of all, be aware of the caveats of the Timer class discussed in the accepted answer.

The officially suggested way is to use ScheduledThreadPoolExecutor, which is more effective and
feature-rich: it can additionally schedule commands to run after a given delay, or to execute
periodically. Plus, it gives the additional flexibility and capabilities of ThreadPoolExecutor.

Here is an example of the basic usage.

1. Create executor service:

final ScheduledExecutorService SCHEDULER = Executors.newScheduledThreadPool(1);

2. Just schedule your runnable:

final Future<?> future = SCHEDULER.schedule(task, delay, TimeUnit.SECONDS);

3. You can now use future to cancel the task or check if it is done, for example:

future.isDone();

Hope you will find this useful for creating tasks in Android.

Complete example:

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

Future<?> sampleFutureTimer = scheduler.schedule(new Runnable() {
    @Override
    public void run() {
        // Do something which will save the world.
    }
}, 120, TimeUnit.SECONDS);

if (sampleFutureTimer.isDone()) {
    // The task has already run.
}
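The same executor also covers the periodic case. The following is a runnable sketch of scheduleAtFixedRate plus cancellation; the 100 ms period and the tick counter are mine, purely for demonstration.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PeriodicDemo {
    // Run a counter every periodMs for roughly waitMs, then cancel and
    // return how many times it ticked.
    static int countTicks(long periodMs, long waitMs) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        AtomicInteger ticks = new AtomicInteger();

        ScheduledFuture<?> handle = scheduler.scheduleAtFixedRate(
                ticks::incrementAndGet, 0, periodMs, TimeUnit.MILLISECONDS);

        Thread.sleep(waitMs);
        handle.cancel(false);  // stop the periodic task
        scheduler.shutdown();
        return ticks.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("ticked at least 3 times: " + (countTicks(100, 350) >= 3));
    }
}
```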

Share Improve this answer Follow edited May 23 '17 at 12:34 answered Mar 10 '17 at 4:43
Community ♦ szholdiyarov
1 1 370 2 11

Add a comment

I'm surprised that there is no answer mentioning a solution with RxJava2. It is really simple
and provides an easy way to set up a timer in Android.
First you need to setup Gradle dependency, if you didn't do so already:

implementation "io.reactivex.rxjava2:rxjava:2.x.y"

(replace x and y with current version number)

Since we have just a simple, NON-REPEATING TASK, we can use Completable object:

Completable.timer(2, TimeUnit.SECONDS, Schedulers.computation())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(() -> {
            // Timer finished, do something...
        });

For REPEATING TASK, you can use Observable in a similar way:

Observable.interval(2, TimeUnit.SECONDS, Schedulers.computation())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(tick -> {
            // called every 2 seconds, do something...
        }, throwable -> {
            // handle error
        });

Schedulers.computation() ensures that our timer runs on a background thread, and
.observeOn(AndroidSchedulers.mainThread()) means the code we run after the timer finishes will
run on the main thread.

To avoid unwanted memory leaks, you should make sure to unsubscribe when the Activity/Fragment
is destroyed.
Share Improve this answer Follow edited Feb 12 '20 at 10:55 answered Mar 31 '18 at 20:32

Micer
7,328 3 67 60

4 This is the cleanest approach! – Constantin May 15 '18 at 18:43

how does one cancel these ? i.e. when the user presses the [STOP] button on the UI and the Completable is
canceled before executing. – Someone Somewhere Jun 11 '19 at 15:40

@SomeoneSomewhere Just save the Subscription returned by the .subscribe() method in a
variable and then call subscription.unsubscribe() when you want to stop the timer. – Micer Jun 19
'19 at 7:24

Add a comment

For anyone who wants to do this in Kotlin:

val timer = fixedRateTimer(period = 1000L) {
    val currentTime: Date = Calendar.getInstance().time
    runOnUiThread {
        tvFOO.text = currentTime.toString()
    }
}

To stop the timer you can use:

timer.cancel()

This function has many other options; give it a try.

Share Improve this answer Follow edited Jun 30 '20 at 13:28 answered Jun 30 '20 at 12:46
Amir
133 1 10

Add a comment

You want your UI updates to happen in the already-existent UI thread.

The best way is to use a Handler that uses postDelayed to run a Runnable after a delay (each run
schedules the next); clear the callback with removeCallbacks.

You're already looking in the right place, so look at it again, perhaps clarify why that code sample
isn't what you want. (See also the identical article at Updating the UI from a Timer).
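The "each run schedules the next" shape this answer describes can be sketched off-device with a plain ScheduledExecutorService standing in for the Handler. On Android you would instead call handler.postDelayed(this, delayMs) inside run() and stop with handler.removeCallbacks(...); the 50 ms delay and run count below are arbitrary choices of mine.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfReschedulingDemo {
    // Run a task 'target' times, each run scheduling the next after delayMs.
    static int runTimes(int target, long delayMs) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();
        Runnable task = new Runnable() {
            @Override
            public void run() {
                // Each run schedules the next one, like Handler.postDelayed(this, ...).
                if (runs.incrementAndGet() < target) {
                    exec.schedule(this, delayMs, TimeUnit.MILLISECONDS);
                }
            }
        };
        exec.schedule(task, 0, TimeUnit.MILLISECONDS);
        Thread.sleep(delayMs * (target + 4)); // give all runs time to finish
        exec.shutdown();
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTimes(5, 50)); // prints 5
    }
}
```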

Share Improve this answer Follow edited Dec 18 '16 at 5:50 answered Jan 4 '11 at 20:26
Risinek Liudvikas Bukys
286 1 15 5,390 3 23 36

Unfortunately, your link is dead. I cannot quickly find the correct article back. – Lekensteyn Nov 26 '13 at
11:38

Working link here – Risinek Dec 18 '16 at 1:52

Add a comment

Here's a simpler solution; it works fine in my app.

public class MyActivity extends Activity {

    TextView myTextView;
    boolean someCondition = true;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.my_activity);

        myTextView = (TextView) findViewById(R.id.refreshing_field);

        // start our task, which updates the TextView every 1000 ms
        new RefreshTask().execute();
    }

    // class which updates our TextView every second
    class RefreshTask extends AsyncTask {

        @Override
        protected void onProgressUpdate(Object... values) {
            super.onProgressUpdate(values);
            String text = String.valueOf(System.currentTimeMillis());
            myTextView.setText(text);
        }

        @Override
        protected Object doInBackground(Object... params) {
            while (someCondition) {
                try {
                    Thread.sleep(1000);
                    publishProgress();
                } catch (InterruptedException e) {
                    return null;
                }
            }
            return null;
        }
    }
}
Share Improve this answer Follow answered May 30 '13 at 15:22
Rodion Altshuler
1,553 14 29

Add a comment

Here is a simple reliable way...

Put the following code in your Activity, and the tick() method will be called every second in the UI
thread while your activity is in the "resumed" state. Of course, you can change the tick() method to
do what you want, or to be called more or less frequently.

@Override
public void onPause() {
    _handler = null;
    super.onPause();
}

private Handler _handler;

@Override
public void onResume() {
    super.onResume();
    _handler = new Handler();
    Runnable r = new Runnable() {
        public void run() {
            if (_handler == _h0) {
                tick();
                _handler.postDelayed(this, 1000);
            }
        }

        private final Handler _h0 = _handler;
    };
    r.run();
}

private void tick() {
    System.out.println("Tick " + System.currentTimeMillis());
}

For those interested, the "_h0=_handler" code is necessary to avoid two timers running
simultaneously if your activity is paused and resumed within the tick period.

Share Improve this answer Follow edited Oct 11 '13 at 15:23 answered Oct 11 '13 at 15:18
Adam Gawne-Cain
965 10 11

2 Why do this awkward _h0 approach, instead of removeCallbacks in onPause , like everyone else? –
ToolmakerSteve Sep 12 '14 at 15:50

Add a comment

You can also use an animator for it:

int secondsToRun = 999;

ValueAnimator timer = ValueAnimator.ofInt(secondsToRun);
timer.setDuration(secondsToRun * 1000).setInterpolator(new LinearInterpolator());
timer.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
    @Override
    public void onAnimationUpdate(ValueAnimator animation) {
        int elapsedSeconds = (int) animation.getAnimatedValue();
        int minutes = elapsedSeconds / 60;
        int seconds = elapsedSeconds % 60;
        textView.setText(String.format("%d:%02d", minutes, seconds));
    }
});
timer.start();

Share Improve this answer Follow answered Nov 10 '15 at 11:36


TpoM6oH
7,211 2 29 65

Add a comment

For those who can't rely on Chronometer, I made a utility class out of one of the suggestions:

public class TimerTextHelper implements Runnable {
    private final Handler handler = new Handler();
    private final TextView textView;
    private volatile long startTime;
    private volatile long elapsedTime;

    public TimerTextHelper(TextView textView) {
        this.textView = textView;
    }

    @Override
    public void run() {
        long millis = System.currentTimeMillis() - startTime;
        int seconds = (int) (millis / 1000);
        int minutes = seconds / 60;
        seconds = seconds % 60;
        textView.setText(String.format("%d:%02d", minutes, seconds));

        if (elapsedTime == -1) {
            handler.postDelayed(this, 500);
        }
    }

    public void start() {
        this.startTime = System.currentTimeMillis();
        this.elapsedTime = -1;
        handler.post(this);
    }

    public void stop() {
        this.elapsedTime = System.currentTimeMillis() - startTime;
        handler.removeCallbacks(this);
    }

    public long getElapsedTime() {
        return elapsedTime;
    }
}

To use it, just do:

TimerTextHelper timerTextHelper = new TimerTextHelper(textView);
timerTextHelper.start();

.....

timerTextHelper.stop();
long elapsedTime = timerTextHelper.getElapsedTime();

Share Improve this answer Follow edited Jan 10 '20 at 13:55 answered Aug 3 '16 at 15:24
Jon Alécio Carvalho
3,268 2 12 22 12.7k 5 63 70

Add a comment

You need to create a thread to handle the update loop and use it to update the text area. The
tricky part is that only the main thread can actually modify the UI, so the update loop thread
needs to signal the main thread to do the update. This is done using a Handler.

Check out this link: http://developer.android.com/guide/topics/ui/dialogs.html# Click on the section
titled "Example ProgressDialog with a second thread". It's an example of exactly what you need to
do, except with a progress dialog instead of a text field.

Share Improve this answer Follow answered Jan 4 '11 at 19:54


Nick
7,815 2 36 62

Don't do this. There is a simple timer class that does this all for you. And this question has nothing at all to do
with progressdialogs, or dialogs at all. – Falmarri Jan 4 '11 at 20:04

Did you look at the section of the link I posted or did you just see the word dialog and assume? The code
there is 100% relevant. Also FYI, if you use timer, you are still creating a thread to handle the update loop.
You'll still need to use the Handler as described in the link I posted. – Nick Jan 4 '11 at 21:08

Unfortunately, the linked page no longer contains a section with the title mentioned. When linking to code,
one should always include the key snippet directly within your answer. – ToolmakerSteve Sep 12 '14 at 16:20

Add a comment

void method(boolean u, int max) {
    uu = u;
    maxi = max;
    if (uu == true) {
        CountDownTimer uy = new CountDownTimer(maxi, 1000) {
            public void onFinish() {
                text.setText("Finish");
            }

            @Override
            public void onTick(long l) {
                String currentTimeString = DateFormat.getTimeInstance().format(new Date());
                text.setText(currentTimeString);
            }
        }.start();
    } else {
        text.setText("Stop");
    }
}

Share Improve this answer Follow edited Dec 12 '12 at 14:10 answered Dec 12 '12 at 13:42
WEFX kamilia jaber
7,629 8 59 93 1

2 Maybe some code indentation and code explanations would be useful. – Raul Rene Dec 12 '12 at 14:03

Add a comment

If anyone is interested, I started playing around with creating a standard object to run on an
activity's UI thread. Seems to work ok. Comments welcome. I'd love this to be available on the
layout designer as a component to drag onto an Activity. Can't believe something like that doesn't
already exist.

package com.example.util.timer;

import java.util.Timer;
import java.util.TimerTask;

import android.app.Activity;

public class ActivityTimer {

    private Activity m_Activity;
    private boolean m_Enabled;
    private Timer m_Timer;
    private long m_Delay;
    private long m_Period;
    private ActivityTimerListener m_Listener;
    private ActivityTimer _self;
    private boolean m_FireOnce;

    public ActivityTimer() {
        m_Delay = 0;
        m_Period = 100;
        m_Listener = null;
        m_FireOnce = false;
        _self = this;
    }

    public boolean isEnabled() {
        return m_Enabled;
    }

    public void setEnabled(boolean enabled) {
        if (m_Enabled == enabled)
            return;

        // Disable any existing timer before we enable a new one

In the activity, I have this onStart:

@Override
protected void onStart() {
super.onStart();

    m_Timer = new ActivityTimer();
    m_Timer.setFireOnlyOnce(true);
    m_Timer.setActivity(this);
    m_Timer.setActionListener(this);
    m_Timer.setDelay(3000);
    m_Timer.start();
}

Share Improve this answer Follow answered May 5 '13 at 5:11


James Barwick
420 3 8

1 dude whats wrong with the ActivityTimerListener? My ADT Bundle said that there is no such class. –
Sorokin Andrey Oct 7 '13 at 10:42

Add a comment

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Timer;
import java.util.TimerTask;

import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.CheckBox;
import android.widget.TextView;
import android.app.Activity;

public class MainActivity extends Activity {

CheckBox optSingleShot;
Button btnStart, btnCancel;
TextView textCounter;

Timer timer;
MyTimerTask myTimerTask;

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
optSingleShot = (CheckBox)findViewById(R.id.singleshot);
btnStart = (Button)findViewById(R.id.start);
btnCancel = (Button)findViewById(R.id.cancel);
textCounter = (TextView)findViewById(R.id.counter);

btnStart.setOnClickListener(new OnClickListener(){

@Override
public void onClick(View arg0) {

.xml

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:orientation="vertical"
    tools:context=".MainActivity" >

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center_horizontal"
        android:autoLink="web"
        android:text="http://android-er.blogspot.com/"
        android:textStyle="bold" />
    <CheckBox
        android:id="@+id/singleshot"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Single Shot"/>

Share Improve this answer Follow answered Jun 18 '14 at 10:48


AndroidGeek
30.6k 14 212 262

I see that you've added this several years after the original question & answers. Please add explanation of
how this answer compares to other answers that were already there. Why did you add another - what benefit
/ when useful / what shortcoming did you see in other answers? – ToolmakerSteve Sep 12 '14 at 15:45

I just share a code which is doing same work with a different approach. But whenever you want to update any
view's data. You must use handler for it. because many time i have notice that using a timertask to update a
view doesn't work .. @Dave.B method is more correct to my knowledge. – AndroidGeek Sep 15 '14 at 6:58

Add a comment

If you already have the delta time:

public class Timer {

    private float lastFrameChanged;
    private float frameDuration;
    private Runnable r;

    public Timer(float frameDuration, Runnable r) {
        this.frameDuration = frameDuration;
        this.lastFrameChanged = 0;
        this.r = r;
    }

    public void update(float dt) {
        lastFrameChanged += dt;

        if (lastFrameChanged > frameDuration) {
            lastFrameChanged = 0;
            r.run();
        }
    }
}
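Driven from a game loop, a delta-time timer like the one above behaves as follows. This is a self-contained sketch: I inline a copy of the class (renamed DeltaTimer so it does not clash with java.util.Timer) and the step counts are mine.

```java
public class DeltaTimerDemo {
    // Inline copy of the delta-time timer so the sketch runs standalone.
    static class DeltaTimer {
        private float lastFrameChanged;
        private final float frameDuration;
        private final Runnable r;

        DeltaTimer(float frameDuration, Runnable r) {
            this.frameDuration = frameDuration;
            this.r = r;
        }

        void update(float dt) {
            lastFrameChanged += dt;
            if (lastFrameChanged > frameDuration) {
                lastFrameChanged = 0;
                r.run();  // fire once enough time has accumulated
            }
        }
    }

    // Simulate 'steps' iterations of a game loop with a fixed dt and count fires.
    static int runLoop(int steps, float dt, float frameDuration) {
        final int[] fired = {0};
        DeltaTimer t = new DeltaTimer(frameDuration, () -> fired[0]++);
        for (int i = 0; i < steps; i++) {
            t.update(dt);
        }
        return fired[0];
    }

    public static void main(String[] args) {
        // 10 steps of 0.25 with a 1.0 frame duration -> fires twice (strict '>').
        System.out.println(runLoop(10, 0.25f, 1.0f)); // prints 2
    }
}
```

Note the strict `>` comparison: with dt = 0.25 and frameDuration = 1.0 the timer fires on the fifth step (accumulated 1.25), not the fourth (exactly 1.0).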

Share Improve this answer Follow answered Dec 24 '16 at 6:39


Matthew Hooker
329 2 10

Add a comment

I abstracted the Timer away and made it a separate class:

Timer.java

import android.os.Handler;

public class Timer {

    IAction action;
    Handler timerHandler = new Handler();
    int delayMS = 1000;

    public Timer(IAction action, int delayMS) {
        this.action = action;
        this.delayMS = delayMS;
    }

    public Timer(IAction action) {
        this(action, 1000);
    }

    public Timer() {
        this(null);
    }

    Runnable timerRunnable = new Runnable() {
        @Override
        public void run() {
            if (action != null)
                action.Task();
            timerHandler.postDelayed(this, delayMS);
        }
    };

    public void start() {
        timerHandler.postDelayed(timerRunnable, 0);
    }
}

And extract the main action from the Timer class as:

IAction.java

public interface IAction {
    void Task();
}

And I used it just like this:

MainActivity.java

public class MainActivity extends Activity implements IAction {
    ...
    Timer timerClass;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        ...
        timerClass = new Timer(this, 1000);
        timerClass.start();
        ...
    }
    ...
    int i = 1;

    @Override
    public void Task() {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                timer.setText(i + "");
                i++;
            }
        });
    }
    ...
}

I Hope This Helps 😊👌


Share Improve this answer Follow answered Oct 14 '19 at 20:55
Yashar Aliabbasi
2,114 1 17 34

Add a comment

I do it this way:

String[] array = { "man", "for", "think" };
int j;

then below the onCreate:

TextView t = findViewById(R.id.textView);

new CountDownTimer(5000, 1000) {

    @Override
    public void onTick(long millisUntilFinished) {}

    @Override
    public void onFinish() {
        t.setText("I " + array[j] + " You");
        j++;
        if (j == array.length) j = 0;
        start();
    }
}.start();

It's an easy way to solve this problem.

Share Improve this answer Follow answered Oct 25 '19 at 12:26


Mori
57 4

Add a comment


Thread th = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            for (int i = 0; i < 5; i++) {
                b1.setText("" + i);
                Thread.sleep(5000);
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                            pp();
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});
th.start();

Share Improve this answer Follow answered Jan 29 at 4:58


amra ram
1 1

Add a comment


www.lawcommission.gov.np

The Crime Victim Protection Act, 2075 (2018)


Date of Authentication:
2075/06/02 (18 September 2018)

Act Number 22 of the year 2075


An Act Made to Provide for the Protection of the Crime Victims
Preamble:
Whereas, it is expedient to make necessary provisions on the protection of the
rights and interests of the victims, by making provisions also for compensation to the
victims for damage sustained as a result of an offence, and reducing adverse effects
caused to the victims of crimes, for getting information related to the investigation and
proceedings of the cases in which they have been victimized, for getting justice along
with social rehabilitation and compensation pursuant to law, while ensuring the right of
crime victims to justice conferred by the Constitution of Nepal, which remains as an
integral part of the process of criminal justice;
Now, therefore, be it enacted by the Federal Parliament.

Chapter-1
Preliminary
1. Short title and commencement: (1) This Act may be cited as the "Crime Victim
Protection Act, 2075 (2018).”
(2) This Act shall commence immediately.
2. Definitions: Unless the subject or the context otherwise requires, in this Act, -
(a) "Court" means a court that is authorized by the prevailing law to try and
settle any offence, and this also includes such other judicial authority or
body authorized by law to try and settle any specific type of case.
(b) "Offence" means an offence in which the government is the plaintiff
pursuant to law, and in which the victim has died or has had to bear damage.
(c) "Offender" means a person who is convicted by the court of an offence.
(d) "Fund" means the Victim Relief Fund established pursuant to law.
(e) "Prescribed" or "as prescribed" means prescribed or as prescribed in the
rules framed under this Act.


(f) "Victim of second grade" means a person who has not been involved in the
offence that has been committed or is being committed against the victim
of first grade but who has to bear damage because of being an eyewitness
of such offence, and this expression also includes the guardian of the minor
victim of first grade who has not been involved in the offence but who has
to bear damage because of having information about, or being an
eyewitness of, the offence, and any of the following persons who have to
bear damage because of having knowledge as to the offence committed
against the victim of first grade:
(1) Guardian of the victim of first grade,
(2) Where the victim of first grade is a minor, and
(3) Where the person who has to bear such damage is not involved in
the offence.
(g) "Minor" means a person who has not attained the age of eighteen years.
(h) "Victim of first grade" means a person who has died or has sustained
damage as a direct result of an offence that has been committed against the
victim, irrespective of whether the perpetrator does not have to bear
criminal liability on the ground of his or her age, mental unsoundness,
diplomatic immunity or position or whether the identity of the perpetrator
remains untraced or whether charge has not been made against the
perpetrator or whether the case related to the offence has been withdrawn
or whether the sentence imposed on the offender is pardoned or whether
the perpetrator has not been convicted of the offence or irrespective of the
family relation of the perpetrator with the victim, and this phrase also
includes a person who has not been involved in the offence but has died or
sustained damage in any of the following circumstances:
(1) While preventing the person who is committing the offence from
committing it,
(2) While extending reasonable support and rescuing with the purpose
of saving any person where an offence is being committed against
such a person,
(3) While trying to arrest the person who is committing or has
committed the offence or extending support to the competent
authority in the course of arresting the suspect, accused or offender.

(i) "Family victim" means the victim’s mother, father, husband, wife living in
the undivided family of the victim or other member of the undivided family
dependent on the victim, who is not involved in the offence against the
victim of first grade who has died as a direct result of the offence.
(j) "Victim" means an individual who is the victim of first grade, victim of
second grade and family victim.
(k) "Victim Protection Suggestion Committee" means the Victim Protection
Suggestion Committee referred to in Section 44.
(l) "Guardian" means the guardian of a victim who remains as such or is
appointed pursuant to the prevailing law.
(m) "Damage" means the following damage caused to the victim as a direct
result of the offence:
(1) Grievous hurt,
(2) Pregnancy occurred due to rape,
(3) Contracting any communicable disease recognized by medical
sciences that causes adverse impact on the physical or mental health
or life of the victim,
(4) Mental anxiety, emotional trauma or damage identified by the
medical doctor,
(5) Destruction of physical, intellectual, sexual or reproductive capacity
or serious damage caused to such capacity,
(6) Adverse impact caused on the social, cultural or family prestige of
the victim due to rape,
(7) Psychological or psychiatric damage,
Explanation: For the purposes of this sub-clause, the term
"psychological or psychiatric damage" means the effect detected by
the medical test, which is not recovered or reduced in short period
and which inflicts negative effect upon the health of the victim.
(8) Financial or physical damage,
(9) Disfigurement of the victim's physical appearance.
3. Not to be deemed victim: (1) Notwithstanding anything contained elsewhere in
this Act, where a person has sustained damage or died in the following
circumstance, the person who has so sustained damage or died or his or her family
member shall not be deemed to be a victim for the purposes of this Act:

(a) While doing any act in the course of saving the body, life, property
or chastity of his or her own or anyone else under the private defense
pursuant to the prevailing law,
(b) While doing any act by a security employee who has been deputed
or deployed by the order of the competent authority in the course of
performing his or her duties pursuant to the prevailing law,
(c) While doing any act by the investigating authority having authority
to investigate pursuant to the prevailing law, in the course of making
investigation, subject to his or her jurisdiction,
(d) Any act done in a situation where the criminal liability need not be
borne pursuant to the prevailing law.
Provided that even if the criminal liability of the perpetrator need
not be borne as a result of the perpetrator's age, mental unsoundness,
diplomatic immunity or immunity enjoyable on the basis of position, it
shall be deemed, for the purposes of this Act, that such a person has
committed the offence, and the concerned person shall be deemed to be a
victim due to the offence.
(2) Notwithstanding anything contained in sub-section (1), nothing
contained in this Act shall prevent the Government of Nepal from providing relief
to a person who has sustained damage or died due to the circumstance set forth in
that sub-section.

Chapter-2
Rights and Duties of the Victims in Criminal Justice Process
4. Right to get fair treatment: The victim shall have the right to enjoy decent, fair,
dignified and respectful treatment during the criminal justice process.
5. Right against discrimination: No discrimination shall be made on the ground of
the victim’s religion, colour, gender, caste, ethnicity, origin, language, marital
status, age, physical or mental unsoundness, disability or ideology or similar other
ground.
Provided that where the particular need of the victim who is a minor, senior
citizen or a person with physical or mental disability is to be considered in the
course of criminal justice process, it shall not be deemed to prevent from according
a special treatment to such a victim as far as possible.

6. Right to privacy: (1) The victim shall have the right to privacy in the course of
investigation, enquiry, prosecution and court proceedings of the following
offences:
(a) Rape,
(b) Incest,
(c) Human trafficking,
(d) Sexual harassment,
(e) Such other criminal offence as prescribed by the Government of
Nepal by publishing a notice in the Nepal Gazette.
(2) No person shall disclose the identity of the victim in any manner, in
the offences referred to in sub-section (1).
(3) Where it is required to have any deed executed by, take statement or
deposition of, the victim in the course of investigation, enquiry and court
proceedings of the offences referred to in sub-section (1), it shall be done as
follows, if the victim so desires:
(a) By presenting the victim, without disclosing his or her identity,
(b) By making the victim change his or her actual voice,
(c) By using the audio-visual dialogue technology in such a way that
the accused cannot see and hear,
(d) By making provision so that the accused cannot see him or her, or
can only hear him or her.
7. Right to information relating to investigation: (1) Where the victim so demands,
the investigating authority or body shall provide him or her with information on
the following matters as soon as possible:
(a) Medical, psychological, psychiatric, social, legal or any other
service or counseling to be received by the victim pursuant to this
Act or the prevailing law,
(b) Name and full address of the prosecuting body,
(c) Name, office and telephone number of the investigation authority,
(d) Progress report of investigation and enquiry,
(e) Name, age, address and complexion of the suspect,
(f) Where the suspect is arrested, description thereof,
(g) Matters expressed in relation to the offence by the suspect or any
other person before the investigating authority,

(h) Where the suspect has absconded from the custody of the
investigating authority or has been arrested again, description
thereof,
(i) Where the investigating authority has released a person remanded
in custody or arrested in the course of investigation, upon
considering that it is not necessary to keep that person in custody,
description,
(j) General information about the investigation and enquiry processes
to be carried out with respect to the offence pursuant to the
prevailing law.
(2) Notwithstanding anything contained in clauses (d), (e), (f), (g) and (h)
of sub-section (1), in cases where it is likely to adversely affect the investigation
into the offence or to pose threat to body, life and property of the suspect or any
person associated with him or her if such information is provided to the victim,
the investigating authority shall not be compelled to provide such information to
the victim, and the authority shall give information thereof, along with the reasons why
information could not be so provided, to the victim.
8. Right to information relating to prosecution: The prosecuting body or authority
shall provide the victim with the following information as to the offence as soon
as possible if the victim so demands:
(a) Where decision has been made not to institute the case, the ground
and reason for making such decision not to institute the case,
(b) Where decision has been made to institute the case against any
person but not to institute the case in the case of any person, the
name, surname and address of the person against whom the decision
has been made not to institute the case, and the ground and reason
for making decision not to so institute the case,
(c) Where decision is made to institute the case, a certified copy of
the charge-sheet,
(d) General information relating to the court proceedings that take place
pursuant to the prevailing law,
(e) Where any additional claim has been made pursuant to the
prevailing law with respect to the person against whom the case has
been instituted or the person against whom the case has not been
instituted for the time being, description thereof and the order made
by the case trying authority in that respect,
(f) Where the victim is also an eyewitness of the offence, information
relating to the role to be played by him or her as a witness,
(g) Where the accused who has absconded at the time of filing the
charge-sheet is arrested in pursuance of the order of the case trying
authority or voluntarily appears, description thereof,
(h) Where the Government of Nepal has decided to withdraw the case
filed in the court in relation to the offence, description thereof.
9. Right to information relating to judicial proceedings: The prosecuting body or
authority or court or the concerned body shall provide the victim with the
following information as soon as possible if the victim so demands:
(a) Where the accused has to remain in detention for trial, description
thereof,
(b) Where the accused is not required to remain in detention for trial or
the accused who has been detained is released from detention,
description thereof,
(c) Date, venue and time of hearing to be held by the court,
(d) Where the accused has made an application that he or she be
released on bail, guarantee or on the condition of making appearance
on the appointed date pursuant to the prevailing law, information
related thereto and the content of the order made on such
application,
(e) Description of the terms and conditions set by the case trying
authority while releasing the accused on bail, guarantee or on the
condition of making appearance on the appointed date or for the
safety of the victim or close relative of the victim,
(f) Where the accused has filed a petition to the appellate level against
the order made by the court of first instance pursuant to the
prevailing law that he or she should be released on bail, guarantee
or on the condition of making appearance on the appointed date, the
notice of the petition and description of the order made on such a
petition,
www.lawcommission.gov.np
(g) Where the accused held in detention for trial escapes from
detention and has been rearrested or voluntarily appears, description
thereof,
(h) Where the accused or offender has been released from detention or
prison on the condition of supervision, the conditions of
supervision, and where such conditions are altered, the details
relating to the altered conditions and the date on which such
alterations come into force,
(i) Whether the accused or offender released from detention on the
condition of supervision has complied with the conditions of
supervision or not,
(j) Where the accused or offender has been transferred from the prison
pursuant to the prevailing law, description relating thereto,
(k) The punishment imposed on the offender and in the case of the
sentence of imprisonment, the period when the service of the
imprisonment completes,
(l) Where the offender has absconded prior to the service of the
sentence of imprisonment or has been rearrested, description
thereof,
(m) Where the punishment sentenced to the offender is pardoned,
postponed, changed or reduced or where the offender gets clemency
from the punishment under any legal provision prior to the service
of the sentence of imprisonment, description thereof,
(n) Where the perpetrator against whom the case has not been instituted
or who has not been sent to prison or who has been released from
detention on the condition of remaining under supervision pursuant
to the prevailing law violates the terms and conditions of
supervision, the body to which the victim may make a complaint
against it and the manner of making such a complaint,
(o) Name and address of the prison where the offender is serving the
sentence,
(p) Where the offender has got probation, parole or community service
or open prison or any other facility of similar type, description
relating to this,
(q) Whether the Government of Nepal has made an appeal or not against
the decision made in relation to the offence,
(r) Where order has been made to summon the presence of the
respondent on the appeal, if any, made by the defendant against the
judgment, description thereof,
(s) Decision of the appellate level on the appeal made against the
judgment, and its consequence,
(t) Where the offender has been put under supervision and an
application is made by the offender or anyone else to change the
terms and conditions of supervision or to revoke the order of
supervision pursuant to the prevailing law, the decision made on that
application,
(u) Where the accused or offender has died while in detention or prison,
description thereof,
(v) Where the Government of Nepal sends back a foreign accused or
offender out of the territory of the State of Nepal pursuant to the
prevailing law or deports him or her to a foreign state or
government, description thereof.
10. Right to become safe: The victim shall have the right to become safe from attack,
damage, fears, intimidation or threat likely to be made or exerted by the suspect,
accused, offender or person related to him or her or the witness of the accused
against the victim or close relative of the victim and person dependent on the
victim.
11. Right to express opinion: (1) The victim shall be entitled to express his or her
opinion before the concerned authority on the following matters:
(a) While making a charge against the suspect for the offence
concerned,
(b) Where it is required to make decision for not instituting the case in
relation to the suspect,
(c) Where it is required to make agreement with the accused by way of
plea bargaining as to the charge pursuant to the prevailing law,
(d) Where request is to be made to the case trying authority for
clemency in the punishment imposable pursuant to the prevailing
law,
(e) Where additional claim is to be made to the charge-sheet filed before
the case trying authority pursuant to the prevailing law,
(f) Where a pre-sentencing report is to be prepared before specification
of the sentence for the offender pursuant to the prevailing law,
(g) While specifying sentence for the offender pursuant to the
prevailing law,
(h) Where investigation is to be carried out pursuant to the prevailing
law as to whether the accused has mental or physical capacity to
commit the offence,
(i) Where decision is to be made to send him or her to the service of a
diversion program, in the case of the accused or offender,
(j) Where decision is to be made to provide probation, parole,
suspended sentence, open prison, community service or any other
service of similar type to the offender pursuant to the prevailing law,
(k) While conducting hearing as to whether or not consent is to be
granted for withdrawing the case related to the offence that is sub
judice in the court pursuant to the prevailing law.
(2) For expressing an opinion pursuant to sub-section (1), the concerned
authority shall provide the victim with a reasonable time.
12. Right to appoint legal practitioner: The victim may appoint a separate legal
practitioner in the criminal justice process if he or she so wishes.
13. Right of attendance and participation in hearing: (1) Except as otherwise
ordered by the court, the victim shall have the right to attend and put forward his
or her opinion in the proceedings relating to hearing by the court in relation to the
offence.
Provided that where the victim is also a witness of the case, the court may
prevent him or her from attending the particular proceeding until he or she makes
deposition as the witness.
(2) The court shall make order or decision, also upon considering the
statement expressed by the victim pursuant to sub-section (1).
14. Right to stay in separate chamber in the course of hearing: (1) In the course of
the hearing of the offence, the court may provide a separate chamber for the victim
so that he or she can stay separately from the accused, person related to the accused
and witness of the accused.
(2) Where it is not possible and practical to provide a separate chamber
pursuant to sub-section (1), the court shall make necessary arrangement for the
safety and interest of the victim so that the accused, person related to the accused
and witness of the accused cannot contact the victim, except as otherwise ordered
by the court.
15. Right to have property returned: (1) The concerned investigating authority shall
return the property of the victim taken under control in the course of investigation
or for evidence, immediately after the completion of investigation.
(2) Where the property taken under control pursuant to sub-section (1)
is to be submitted to the court for evidence or there is a dispute as to the ownership
or possession of the property, the property shall not be returned before the dispute
is settled.
(3) Notwithstanding anything contained in sub-section (2), the court
may, if it so thinks necessary, make an order to return such property before the
dispute is settled.
16. To hold discussion as to the case related to offence: In the following
circumstances, the court may, with the consent of both the victim and the accused,
hold discussion between the victim and the accused on any matter related to the
offence:
(a) Where the court is satisfied that such discussion would assist in the
settlement of the dispute,
(b) Where the discussion is held under the supervision of the court,
(c) Where holding discussion is not prejudicial to public interest and
justice.
17. Right to make written application: (1) Where the Government of Nepal has the
right to make application or appeal against any order or decision of the court if it
is not satisfied with such order or decision, the victim may make a written
application to the concerned body or authority, requesting that application or
appeal be made against that order or decision.
(2) The application referred to in sub-section (1) has to be made within
fifteen days from the date of receipt of information of such an order or decision.
(3) The concerned body or authority that has the right to make application
or appeal against the order or decision referred to in sub-section (1) shall
make decision by considering such an application.
(4) Information of the decision referred to in sub-section (3) shall be given
to the victim.
18. Right to get information as to compensation: (1) Where the victim is entitled to
obtain compensation pursuant to this Act or other prevailing law and the victim
seeks information with respect to it, the prosecuting authority shall give the victim
information about the action required to be taken in order to obtain compensation.
(2) Where the prosecuting authority has the authority to take action relating
to compensation on behalf of the victim pursuant to the prevailing law, the
prosecuting authority shall, at the request by the victim, take such necessary action
as to be taken on behalf of the victim.
19. Right of compensation and social rehabilitation: (1) The victim shall have the
right to obtain compensation for the damage he or she has sustained, pursuant to
this Act.
(2) For the social rehabilitation of the victim, the Government of Nepal,
Provincial Government and Local Level may, with mutual coordination, conduct
necessary plan and program based on the available resources and means.
20. Right to make application or appeal: (1) Where the concerned victim is not
satisfied with the order or decision made by the court on any offence, the victim
may make application or appeal if such application or appeal can be made against
such order or decision pursuant to the prevailing law, setting out the ground and
reason.
(2) Where no period is specified in the concerned law for making the
application or appeal referred to in sub-section (1), such application or appeal has
to be made within fifteen days from the date of receipt of information of the order
or decision.
(3) The concerned authority has to make decision upon considering the
ground and reason mentioned in the application referred to in sub-section (1), and
give information of such decision to the applicant as well.
21. Duties of the victim: For the purposes of this Act, the duties of the victim shall
be as follows:
(a) To give information or notice as to the offence on time to the
competent body or authority pursuant to the prevailing law,
(b) To assist the investigating or prosecuting authority in the course of
investigation and prosecution of the offence,
(c) To refrain from failing to appear before the investigating authority
or court in order to save the person involved in the offence, or to
refrain from making statement, deposition or submitting any
evidence for that purpose even upon being in appearance,
(d) To provide his or her own real name, surname, address, telephone
number, email address and provide information of the change, if
any, made therein, as soon as possible.
22. To respect the right: The authorities who are involved in the process of
investigation, prosecution, enquiry of the offence and dispensation of justice shall
pay proper attention to respecting and implementing the rights of the victim
conferred pursuant to this Act and the prevailing Nepal law.
23. Application may be made for the enforcement of rights: (1) For the
enforcement of the rights conferred by this Chapter, the victim may make an
application to High Court concerned.
(2) Where it appears, from the application made pursuant to sub-section
(1), that the right of the victim has been encroached or infringed, the High Court
may issue an appropriate order for the enforcement of such right.
(3) While issuing an order pursuant to sub-section (2), the High Court may
write to the concerned body or authority to take departmental action against the
official who has deliberately encroached, infringed or curtailed the rights of the
victim, pursuant to the prevailing Nepal law relating to the conditions of his or her
service.
(4) Where a correspondence is received pursuant to sub-section (3), the
concerned authority shall take departmental action against such official pursuant
to the prevailing law.
24. Action not be invalid: Any decision, order or act already made or done pursuant
to the prevailing law, this Act or the Rules framed under this Act shall not be void
or invalid for the sole reason that the rights of the victim could not be enjoyed by
the victim or have been violated or rejected.
Chapter-3
Victim Impact Report
25. Victim impact report may be submitted: (1) The victim may, if he or she so
desires, submit a victim impact report to the prosecuting authority in such format
and setting out such descriptions as prescribed, mentioning the damage or impact
directly caused to or upon him or her from the offence, prior to the filing of the
charge sheet of the offence in the court.
(2) Where the victim himself or herself is not able to submit the report
referred to in sub-section (1) because of the victim being a minor or a person who
needs guardianship legally or for any other reasonable reason, his or her guardian
or the representative under law may submit such a report on behalf of the victim.
(3) Notwithstanding anything contained in sub-section (1), where the
victim is not able to submit the victim impact report prior to the filing of the charge
sheet in the court due to a force majeure event, such a report, accompanied by the
evidence of the occurrence of such an event, may be submitted to the authority
filing the charge sheet within one month from the date of filing of the charge sheet
in the court.
(4) Where the victim wishes to keep confidential the victim impact report
referred to in sub-section (1) or (2), he or she shall also set out in the report the
content that he or she intends to keep confidential and the reason for it.
(5) The prosecuting authority shall submit to the concerned court the victim
impact report submitted pursuant to sub-section (1) along with the charge sheet,
and the victim impact report submitted pursuant to sub-section (3), within three
days from the date of receipt.
26. Duplicate copy may be demanded: (1) The accused or offender who desires to
receive a duplicate copy of the victim impact report submitted to the court pursuant
to sub-section (5) of Section 25 may get the duplicate copy of such a report from
the court.
(2) Notwithstanding anything contained in sub-section (1), the court may
refuse to issue a duplicate copy of the victim impact report in following conditions:
(a) Where the accused is absconding,
(b) Where the issuance of the duplicate copy would be prejudicial to the
safety and privacy of the victim,
(c) Where the victim desires to keep the victim impact report
confidential.
27. Victim impact report may be taken as the basis: (1) The court may also take
the victim impact report as the basis while determining the sentence for the
offender.
(2) Notwithstanding anything contained in sub-section (1), while
determining the sentence, the court shall not take as the basis that part of the report
of which a duplicate copy has been refused to be issued or that part which has
been kept confidential.
28. Not to make presumption that less damage has been caused from the offence:
No presumption shall, by the sole reason that the victim has not submitted the
victim impact report pursuant to this Chapter, be made that less damage or impact
has been caused from the offence to or upon the victim.
Chapter-4
Compensation
29. Power to make order for interim compensation: (1) Where it is required to have
treatment of the victim or provide compensation or any kind of relief amount
immediately, the court may make an order for getting such a person medically
treated or providing compensation or relief amount in an interim manner.
(2) Where the order referred to in sub-section (1) is made, the victim shall
be provided with compensation or relief amount from the Fund.
(3) Where the accused person is convicted of the offence upon judgment
by the court, the court shall order such an offender to pay the amount of
compensation or relief amount provided pursuant to sub-section (2) to the Fund
within thirty-five days of the date on which the judgment was made.
(4) Where so ordered by the court pursuant to sub-section (3), such an
offender shall pay to the Fund the amount of compensation or relief, and where he
or she does not pay such amount within that period, it shall be recovered from any
assets belonging to such an offender as government arrears, within sixty days of
the date on which the judgment was made.
30. To get compensation recovered from offender himself or herself: (1) The court
may, while making final settlement of the case, make an order that a reasonable
amount be paid, as compensation, by the offender to the victim.
(2) While making order for the payment of the compensation pursuant to
sub-section (1), the court shall ascertain as to whether the victim has obtained the
interim compensation or not pursuant to Section 29.
(3) Where the court makes an order pursuant to sub-section (1) that
compensation be paid by the offender to the victim who has already obtained
interim compensation pursuant to Section 29, only the amount that remains after
returning the amount of interim compensation obtained by the victim to the Fund
shall be provided to the victim.
(4) Notwithstanding anything contained elsewhere in this Section, where it
appears that the victim cannot get compensation because the offender has no
property or where the offence is established but the offender cannot be held to be
convicted or where the case related to the offence is withdrawn pursuant to the
prevailing law, the court may make an order that appropriate amount be paid as
compensation to the victim from the Fund.
(5) The amount of compensation shall be provided to the victim from the
Fund within thirty-five days from the receipt of the order pursuant to sub-section
(4).
31. Bases to be taken while determining the amount of compensation: While
determining the amount of compensation to be provided to the victim, the court
may take any or all of the following matters as the basis:
(a) Reasonable expenses borne or to be borne by the victim for medical,
psychological or psychiatric counseling,
(b) Expenses of medical treatment borne or to be borne by the
victim,
(c) Unexpected travel expenses borne by the victim,
Explanation: For the purpose of this clause, "unexpected travel
expenses" means the reasonable expenses incurred in transport
while traveling more than ten kilometers for receiving counselling
or treatment service which the victim requires immediately to lessen
the damage caused to the victim as a direct result of the offence
because such service is not available within the distance of ten
kilometers from the victim’s place of settlement or workplace or the
scene of crime.
(d) Expenses for legal practitioner borne by the victim,
(e) Damage caused to the personal capacity of the victim as a direct
result of the offence,
(f) Financial loss borne or to be borne by the victim,
Provided that where the victim has obtained or is to obtain
compensation for such financial loss from the insurance pursuant to
law, compensation shall not be provided pursuant to this clause.
(g) Expenses incurred or to be incurred in repairing or maintaining the
damaged personal goods or purchasing new ones,
(h) The victim's income generation capacity lost or damaged as a direct
result of the offence,
(i) Negative effect caused to the physical beauty of the victim,
(j) Damage caused to physical, intellectual, sexual or reproductive
capacity of the victim,
(k) In the case of the offence of rape, negative effect caused from such
offence to the social, cultural or family prestige or relationship of
the victim,
(l) Where the victim becomes pregnant due to rape, expenses incurable
in abortion or giving birth to and nurturing the baby,
(m) Medical treatment expenses in the case of abortion caused from the
offence,
(n) Reasonable expenses spent by the victim in good faith to become
safe from additional offence that is likely to be committed against
him or her, where the special condition is attracted,
Explanation: For the purposes of this Section "special
condition" means the condition where the victim has sustained or
has to sustain unnatural impact or effect as a direct result of the
offence committed against the victim, by taking undue advantage of
the physical or mental condition of, or the place of residence,
workplace of, the victim or special location of the scene of crime at
the time of the commission of the offence.
(o) Mental or emotional damage borne by the victim,
(p) Other appropriate grounds according to the nature and effect of the
damage,
(q) In the case of the victim to whom the special condition is applicable,
reasonable expenses incurred by the victim in good faith to save the
victim of first grade from additional offence,
(r) Guardian's patronage lost by the minor children.
32. To consider group of offences as one offence: For the purpose of providing
compensation pursuant to this Act, compensation shall be provided by considering
a group of offences as one offence.
Explanation: For the purposes of this Section "group of offences" means
two or more than two offences that are connected for the following reasons:
(1) Having been committed by the same person or group of persons
against the same person in the same incident, or having the same
characteristics between these offences for any other reasons, and
(2) Death of the victim or damage caused to the victim from the offence.
33. Compensation not available in more than one status: No person may receive
the compensation referred to in this Act as the victim of first grade, victim of
second grade and family victim or in more than one form or status in any other
form.
34. Compensation not to be provided: Notwithstanding anything contained
elsewhere in this Act, the following victims shall not be provided with
compensation pursuant to this Act:
(a) One who commits the offence in relation to which compensation is
to be received, attempts to commit it, entices or conspires to commit,
or assists in the commission of, or is an accomplice involved in, the
offence,
(b) One who makes claims for compensation referred to in this Act in
the capacity of the victim of first grade where the offence has been
committed against him or her when he or she was involved in any
other offence or due to that reason,
(c) A family victim of the person who has died when he or she was
going to commit an offence against any one or due to that reason,
(d) A person who is entitled to receive compensation pursuant to the
prevailing law under the insurance provision of third party with
respect to the damage caused due to a motor vehicle accident,
Provided that nothing herein contained shall bar the
provision of compensation pursuant to this Act in cases where such
a person was killed or injured by using a motor vehicle with the
intention of killing or injuring.
(e) A victim of second grade or family victim who has information that
the victim of first grade has been involved in any other offence or
has reasonable ground to have such information,
Provided that this provision shall not be applicable to a
person who is a witness at the time of the commission of the offence
for which compensation is to be received.
(f) A person who is victim of an offence and whose treatment has been
made free on behalf of the government or whose treatment
expenditure has been borne by the government and there is a
possibility that the victim may recover,
Provided that nothing herein contained shall bar the
provision of compensation in the case of a damage other than the
expenses for medical counseling or medical treatment.
(g) A victim prisoner who is in detention upon being sentenced to
imprisonment pursuant to the prevailing law and has suffered mental
injury due to the offence committed against him or her while in
detention,
Provided that nothing herein contained shall bar the
provision of compensation also for the mental injury caused from
being imprisoned for the sole reason of not being able to pay the
fine imposed on him or her pursuant to the prevailing law.
(h) A person who has been convicted of the offence against the State
under the prevailing law,
(i) A person who has been convicted of any organized crime under the
prevailing law,
(j) Except for a victim who is a minor or of unsound mind, a person
who has become victim of an offence committed against him or her
due to provocation by him or her to commit the offence against him
or her or due to the conduct of the victim,
(k) A person who does not make information or complaint in relation to
the investigation of, court proceedings on, the offence, who makes
a false information or complaint, who does not assist the
investigating or prosecuting authority or who makes a statement,
deposition or submits evidence with the objective of saving the
person involved in the offence, or who, for that purpose, makes such
a statement or deposition in the court that is contrary to the
statement made before the investigating authority,
(l) A person who has received, or appears to receive, financial support
or compensation from any other source of the Government of Nepal
with respect to the offence for which he or she is entitled to obtain
compensation,
(m) A person who appears to be unjust for being provided with
compensation from the perspective of justice,
(n) A person who makes an application to the effect that he or she does
not wish to obtain compensation,
(o) A person who is yet to pay such fine, claimed amount or any other
amount as ordered by the order of the court or such revenue or other
amount payable by the victim to the Government of Nepal,
(p) Where it is held that a false complaint has been made,
(q) Such a victim in cases where the perpetrator is likely to receive the
benefit of compensation because of the fact that both the victim and
the perpetrator are both the members of an undivided family at the
time of the commission of the offence,
Provided that nothing herein contained shall bar the provision of
compensation to the victim pursuant to this clause in the following
conditions:
(1) Where the perpetrator is not bound to bear the criminal
liability pursuant to the prevailing law because of his or her
age or mental unsoundness,
(2) Where there is no legal provision entitling the victim to
compensation from the offender in such an offence, or, even if such
a provision exists, it does not appear that the victim
will be able to obtain compensation from the perpetrator
because there is no property in the name of the perpetrator or
the undivided family or for any other reason but it is proved
that the victim has lived apart upon separating the bread and
board from the undivided family consisting of the perpetrator
after the offence has been committed, or
(3) A woman who is a victim of rape or a child born from her.
35. Compensation amount to get first priority: Notwithstanding anything contained
in the prevailing law, where the offender has also to pay compensation to the
victim, in addition to the fine, government claimed amount, ten percent, twenty
percent fee, public claimed amount or any other amount, by a judgment of the
court, the first priority shall be given to the compensation to be received by the
victim pursuant to this Act from the amount recovered from the offender.
36. To be recovered as government arrears: Where the offender does not provide
the victim with the amount of compensation ordered by the court to be recoverable
to the victim pursuant to this Act, the court shall get it provided to the victim by
recovering it from the movable and immovable property of the offender as
government arrears.
37. To receive compensation by dependent child or guardian: Where the victim
dies before obtaining the compensation pursuant to this Act, his or her child
dependent on him or her or guardian shall be entitled to such amount of
compensation.
38. To deduct the amount received earlier for compensation: While making
payment of the amount of compensation to the victim pursuant to this Act, only
the amount that remains after deducting the amount received by him or her earlier
for interim compensation shall be provided.
39. To pay the amount of compensation to the Fund: If the victim does not appear
to receive the compensation until six months from the date on which information
as to his or her entitlement to compensation was given pursuant to this Act, the
amount of such compensation shall be paid to the Fund after that period.
40. No entitlement of any one else to the amount of compensation: Notwithstanding
anything contained in the prevailing law, no one else shall have entitlement to the
amount obtained as compensation pursuant to Section 29 or 30 of this Act except
where such amount is to be returned, deducted or recovered pursuant to this Act.

Chapter-5
Compensation Levy
41. Provisions relating to compensation levy: (1) The offender shall pay the
following amount to the Fund, as the compensation levy:
(a) Two hundred rupees where punishment of imprisonment for less
than one year is imposed,
(b) Four hundred rupees where punishment of imprisonment for one
year to two years is imposed,
(c) Six hundred rupees where punishment of imprisonment for two
years to three years is imposed,
(d) Eight hundred rupees where punishment of imprisonment for three
years to four years is imposed,
(e) One thousand rupees where punishment of imprisonment for four
years to five years is imposed,
(f) One thousand three hundred rupees where punishment of
imprisonment from five years to eight years is imposed,
(g) One thousand eight hundred rupees where punishment of
imprisonment from eight years to twelve years is imposed,
(h) Two thousand two hundred rupees where punishment of
imprisonment for above twelve years but below life imprisonment
is imposed,
(i) Two thousand eight hundred rupees where punishment of life
imprisonment is imposed.
(2) The offender who has been sentenced to a fine only but not to
imprisonment shall pay the compensation levy in such an amount as to be set by
four percent of the fine so imposed.
(3) Where the offender is sentenced to both punishments of imprisonment
and fine, he or she shall pay the compensation levy in such an amount which
becomes the higher, out of that to be set from the imprisonment and fine pursuant
to sub-section (1) or (2).
(4) The court shall determine the compensation levy pursuant to this
Section while making judgment on the offence concerned.
(5) The compensation levy referred to in this Section shall be credited to
the Fund.
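Read purely as arithmetic (for illustration only; the Act itself is authoritative), Section 41 sets a levy from the imprisonment tier, four percent of any fine, and the higher of the two where both punishments are imposed. A minimal sketch, with the tier boundaries interpreted inclusively as an assumption and all names invented for this example:

```java
public class CompensationLevy {
    // Levy in rupees from the imprisonment term in years, per Section 41(1).
    // Boundary cases ("one year to two years") are read inclusively here,
    // which is an assumption, not a reading settled by the Act's text.
    static int levyFromImprisonment(double years, boolean lifeImprisonment) {
        if (lifeImprisonment) return 2800;   // clause (i)
        if (years < 1) return 200;           // clause (a)
        if (years <= 2) return 400;          // clause (b)
        if (years <= 3) return 600;          // clause (c)
        if (years <= 4) return 800;          // clause (d)
        if (years <= 5) return 1000;         // clause (e)
        if (years <= 8) return 1300;         // clause (f)
        if (years <= 12) return 1800;        // clause (g)
        return 2200;                         // clause (h): above 12 years, below life
    }

    // Four percent of the fine, per Section 41(2).
    static double levyFromFine(double fine) {
        return fine * 0.04;
    }

    // Where both punishments are imposed, the higher levy applies, per 41(3).
    static double levy(double years, boolean lifeImprisonment, double fine) {
        return Math.max(levyFromImprisonment(years, lifeImprisonment),
                        levyFromFine(fine));
    }
}
```

For example, an offender sentenced to eighteen months and a fine of fifty thousand rupees would owe the higher of four hundred rupees (tier) and two thousand rupees (four percent of the fine).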
42. Liability to pay compensation levy not to be deemed terminated: (1) Even if it
is required to pay a fine or bear any other pecuniary liability as well for the offence
in relation to which the compensation levy is to be paid pursuant to Section 41 or


to pay compensation to the victim, the liability to pay the compensation levy
referred to in Section 41 shall not be deemed to have terminated.
(2) Even in cases where the sentence imposed on the offender is pardoned,
postponed, changed or lessened or remitted or suspended pursuant to the
prevailing law, the liability of the offender to pay the compensation levy referred
to in Section 41 shall not be deemed to have terminated.
43. Power to make order to lessen, or dispense with the requirement to pay, the
compensation levy: (1) If any offender is not able to pay the compensation levy
referred to in Section 41, he or she may make an application, along with the basis,
ground, reason therefor and evidence thereof, to the court concerned for an order
that the compensation be lessened or the requirement to pay it be dispensed with.
(2) While inquiring into the application made pursuant to sub-section
(1), where the court thinks that there is a reasonable condition that such an offender
cannot pay the compensation levy, the court may make an order that the
compensation levy referred to in Section 41 be lessened or the requirement to pay
it be dispensed with.

Chapter-6
Victim Protection Suggestion Committee
44. Victim Protection Suggestion Committee: (1) There shall be a Victim Protection
Suggestion Committee as follows, for making suggestions to the Government on
the protection of the rights and interests of the crime victims:
(a) Attorney General -Coordinator
(b) Chairperson, Nepal Law Commission -Member
(c) Secretary, Government of Nepal, Ministry of Finance
-Member
(d) Secretary, Government of Nepal, Ministry of
Law, Justice and Parliamentary Affairs -Member
(e) Inspector General of Police, Nepal Police -Member
(f) One expert designated by the Government of Nepal
from among the persons who have made significant
contribution in the field of victimology or criminal justice
-Member


(2) The tenure of the member referred to in clause (f) of sub-section (1)
shall be of five years.
(3) Notwithstanding anything contained in sub-section (2), the Government
of Nepal may at any time remove the member referred to in clause (f) of sub-
section (1) if he or she is incompetent, engages in bad conduct or fails to perform
his or her duties honestly.
Provided that prior to so removing from the office, he or she shall not be
deprived of an opportunity to submit his or her clarification.
45. Functions of the Victim Protection Suggestion Committee: (1) The functions
of the Victim Protection Suggestion Committee shall be as follows:
(a) To make suggestions to the Government of Nepal as to the
improvement and revision to be made in the existing law for the
protection of the rights and interests of the victims,
(b) To make suggestions to the Government of Nepal as to the policy
measures to be adopted by the Government of Nepal for the security
of the victims and mitigation of damage and adverse effects
sustained by the victims from the offence,
(c) Where Nepal is to become a party to an international treaty or
agreement related to the rights of the victims, to make
recommendation to the Government of Nepal to that effect, along
with the reason,
(d) To make suggestions to the Government of Nepal to operate such
particular service as is necessary upon identifying the needs of the
victims.
(2) Having regard also to the suggestions of the Victim Protection
Suggestion Committee, the Government of Nepal shall operate the services
including relief, social rehabilitation, counseling, financial, physical, social, legal
aid/support for the security, protection of the rights and interests of the crime
victims, and for mitigating the damage, negative impact and effect sustained or to
be sustained by the victims due to the offence.
46. Meeting allowance: The coordinator and members of the Victim Protection
Suggestion Committee shall get such meeting allowance as prescribed by the
Government of Nepal for participating in the meeting of the Committee.


Chapter-7
Miscellaneous
47. To provide from the Fund: The victim shall be provided compensation in a
reasonable amount from the Fund for the damage sustained as a result of any
offence committed by a perpetrator who does not have to bear the criminal liability
due to his or her age, mental unsoundness, diplomatic immunity and any other
reason.
48. To claim for compensation: While making prosecution in any offence, the victim
of first grade, victim of second grade and family victim shall have to make an
explicit claim for compensation to be obtained by them.
49. To provide information: The concerned body or authority who is involved in the
proceedings of such matters or who maintains the records of such information or
who has the access to such information shall provide such information to the body
or authority who has the duty to provide information to the victim pursuant to this
Act.
50. To give a notice of final hearing: (1) Notwithstanding anything contained in the
prevailing law, the court shall give a notice of final hearing of the case related to
the offence to the concerned Government Attorney Office in advance of at least
seven days.
(2) After receiving information pursuant to sub-section (1), the
Government Attorney Office shall, as promptly as possible, give information of
final hearing to the concerned victim to the extent possible.
51. Modes of giving notice to the victim: The concerned body or authority who has
the duty to give a notice to the victim pursuant to this Act may give it in writing,
orally, by telephone or electronic means so that it will remain in the record, as
required.
52. Power to appoint representative: For the enjoyment or enforcement of the rights
of the victim conferred by this Act, the victim may appoint his or her
representative or attorney pursuant to the prevailing law, and when so appointed,
the victim shall be deemed to have enjoyed or enforced his or her rights through
such a representative or attorney.
53. Power to frame Rules: The Government of Nepal may, in consultation with the
Committee, frame necessary rules for the implementation of the objectives of this
Act.
54. Power to make directives: The Government of Nepal may, subject to this Act or
the Rules framed under this Act, make necessary directives in relation to the
provision of compensation to the victims.

10 Basic First Aid Training Tips & Procedures for Any Emergency

March 12, 2019

Injuries are practically inevitable in emergency situations. There’s a chance you get hurt by whatever’s causing the emergency; for instance,
you could get burned in a fire, or you could get struck by toppling debris during an earthquake. But injuries are also sustained during the
panic that ensues in an emergency. In the rush to get away from danger, you could sprain your ankle or suffer an open wound.

Here are 10 first aid “must-knows” that you can use to treat a broad array of injuries:

1. Remember the “Three P’s.”
2. Check the scene for danger before you provide help.
3. To treat cuts and scrapes, apply gentle pressure, disinfectant, and bandages.
4. To treat sprains, apply ice and compression at intervals and keep the limb elevated.
5. To treat heat exhaustion, use cool fluids, cool cloths, and shade.
6. To treat hypothermia, use warm fluids and warm covering.
7. To treat burns, determine the burn type and severity. Cover the wound with loose cloth to prevent infection.
8. Use an EpiPen to treat allergic reactions.
9. To treat fractures, keep the fractured area stable and immobilized, and apply a cold pack.
10. Perform CPR if an injured person stops breathing.

First Aid Checklist PDF

It’s important that you commit these 10 golden rules to memory. Even if you’re not injured, you might encounter someone who is, and who
needs treatment.

Always attempt to seek professional medical help for injured persons. First responders are not always readily available during emergency
situations, and if that’s the case, do your best to provide what treatment you can until help arrives. But never forget that serious injuries
always require more advanced treatment, and you should do your best to get the injured person to professional caregivers.

Nonetheless, these simple first aid procedures can go a long way in helping someone who’s injured, and all you need to do is use a few
materials in your survival kit and apply them in the right manner. Read through these detailed guides on all 10 items.

1. The “Three P’s”

The “Three P’s” are the primary goals of first aid. They are:

Preserve life
Prevent further injury
Promote recovery

These goals might seem overly simple, but they’re simple on purpose. When someone is injured, it’s all-too-easy to panic and forget what
you need to do to provide assistance. The Three P’s remind you of the very basics: do what you can to save the person’s life; do what you
can to keep them from sustaining further injuries; do what you can to help them heal.

2. Check the Scene for Danger


Before you provide help to an injured person, it’s important that you check the scene for danger. You don’t want to get yourself injured, too.
This isn’t a cowardly precaution. The fact of the matter is this: if you get injured, you won’t be able to help someone else who’s injured. So
before you rush to help someone, take a moment to analyze the area and spot anything that could injure you.

For example, there might be a terrible storm outdoors, and you spot someone outside who’s injured and who can’t make it to shelter. Before
you go running outside to help them, look for hazards. Are strong winds hurling debris? Are there any trees or structures that look as if they’re
about to fall? Are there downed power lines? Is there floodwater?

Once you’ve assessed these dangers, you can better strategize how to reach and rescue the injured person.

3. Treating Cuts and Scrapes

Blood is a vital component of our bodies. When someone is bleeding, you want to prevent as much blood from leaving their body as possible.
Try and find a clean cloth or bandage. Then:

Apply gentle pressure for 20 to 30 minutes.
Clean the wound by gently running clean water over it. Avoid using soap on an open wound.
Apply antibiotic to the wound, like Neosporin.
Cover the wound with a bandage.

If someone has a nosebleed, have the person lean forward. Press a cloth against the nostrils until the blood flow stops.

The body is usually very quick at patching up small cuts and scrapes. But deeper wounds may require medical attention. With deep wounds:

Apply pressure.
Don’t apply ointments. Cover the area with loose cloth to prevent contaminants from infecting the wound.
Seek medical attention as soon as possible.

4. Treating Sprains

Sprains are usually an unalarming injury, and most of the time they’ll heal on their own. But there are steps you can take to ease the swelling.
Swelling is caused by blood flow to an injured area. You can reduce swelling by applying ice. Ice restricts the blood vessels, which reduces
blood flow.

Keep the injured limb elevated.
Apply ice to the injured area. Don’t apply ice directly to the skin. Wrap it in a cloth or put ice in a plastic bag.
Keep the injured area compressed. Put it in a brace or tightly wrap it. Don’t wrap it so tight that it’ll cut off circulation.
Ice for a while. Then compress. Repeat at intervals.
Make sure the injured person avoids putting weight on the injured limb.

5. Treating Heat Exhaustion

Heat exhaustion occurs due to prolonged exposure to high temperatures, especially when the person is doing strenuous activities or hasn’t
had enough water. Symptoms of heat exhaustion include:

Cool, moist skin
Heavy sweating
Dizziness
Weak pulse
Muscle cramps
Nausea
Headaches

To treat someone with heat exhaustion:

Get the person to a shaded area that’s out of the sun.

If there are no shaded areas available, keep the person covered by any available materials that can block sunlight.
Give the person water and keep them hydrated.

Place a cool cloth on their forehead to lower their body temperature.

6. Treating Hypothermia

Hypothermia is caused by prolonged exposure to cold temperatures. It begins to occur when your body temperature drops below 95
degrees Fahrenheit.

Symptoms of hypothermia include:

Shivering
Slurred speech or mumbling
Weak pulse
Weak coordination
Confusion
Red, cold skin
Loss of consciousness

To treat hypothermia:

Be gentle with the afflicted person. Don’t rub their body and don’t move their body in too jarring of a way; this could trigger cardiac arrest.
Move the person out of the cold, and remove any wet clothing.

Cover the person with blankets and use heat packs. Don’t apply heat directly to the skin because this could cause major skin damage.
Give the person warm fluids.
If you set the person on the ground, be aware that the ground may also be a cold source. Place warm materials on the ground that the
person is going to lay on.

7. Treating Burns

Before you apply treatment to burns, you need to identify the burn type and the severity of the burn. There are four kinds of burns:

First-degree burn: Only the outer layers of skin are burnt. The skin is red and swollen, and looks similar to a sunburn.
Second-degree burn: Some of the inner layer of skin is burnt. Look for blistering skin and swelling. This is usually a very painful type of
burn.

Third-degree burn: All of the inner layer of skin is burnt. The wound has a whitish or blackened color. Some third-degree burns are so
deep, there might not be any pain because the nerve endings are destroyed.
Fourth-degree burn: A burn that has penetrated all tissues up to the tendons and bones.

Additionally, there are two kinds of burn severities: a minor burn and a major burn.

Minor burn: First-degree burns and mild second-degree burns.
Major burn: Moderate second-degree burns to fourth-degree burns.

Minor burns don’t usually need extensive treatment, but you could:

Run cool water over the afflicted area (avoid icy or very cold water).
Don’t break any blisters.
Apply moisturizer over the area, like aloe vera.
Keep the burned person out of sunlight.
Have the burned person take ibuprofen or acetaminophen for pain relief.

Major burns are very serious injuries that require medical assistance. To help someone who has suffered from a major burn:

Do not apply ointments.
Cover the wound with loose materials to prevent contaminants from infecting it.

8. Allergic Reactions

Allergic reactions occur when your body is hypersensitive to a foreign substance. Bee stings, certain foods, or drug ingredients can cause
allergic reactions. Anaphylaxis is a life-threatening allergic reaction that can be caused by any of these allergens.

The best way to treat an allergic reaction is to use an EpiPen. EpiPen, or “epinephrine autoinjector,” is a small and ergonomic needle that’s
used to inject epinephrine (adrenaline) into someone suffering greatly from an allergic reaction. The epinephrine usually subdues the effects
of the allergic reaction.

If someone is suffering from an allergic reaction:

Keep the person calm. Ask if they use an EpiPen and have one with them.
Have the person lie on their back. Keep their feet elevated 12 inches.
Make sure the person’s clothing is loose so they’re able to breathe.
Avoid giving them food, drink, or medicine.
If appropriate, use an EpiPen. Learn how to inject an EpiPen in someone having a reaction.
Wait 5-15 minutes after using an EpiPen. If the allergic reaction isn’t subdued, a second dose may be required.

9. Treating Fractures

Sometimes it’s very easy to tell if someone has suffered a fractured bone. But sometimes it’s not. If you suspect someone of having a
fracture:

Don’t try to straighten a fractured limb.
Use a splint or padding to stabilize the area and keep it from moving.
Apply a cold pack to the area. Don’t apply it directly to the skin. Wrap it in a cloth or put it in a plastic bag.
Keep the area elevated, if possible.
Give the person an anti-inflammatory drug, like ibuprofen.

10. Performing CPR

CPR stands for cardiopulmonary resuscitation. CPR is used to restore breathing and blood circulation to an unresponsive person. CPR is an
incredibly important procedure that can save lives. But learning CPR is an intensive procedure that requires some training, which is usually in
the form of a day-long class. The American Red Cross offers CPR certification classes across the nation. Go to Redcross.org for more
information.

Prepare Yourself with the Right Gear


The methods listed above are not very difficult to do and they don’t require medical training—but they can save someone’s life or prevent an
injured person from sustaining serious injuries or infections. Make sure that your stash of survival gear includes a first aid kit, and be sure to
refill your first aid kit every year as its supplies dwindle or expire.

The essential first aid kit should include:

Anti-bacterial wipes
Painkillers
Gauze pads
Sunscreen
Medical gloves
Medical instrument kit
Sling
Burn gel
Antibiotic ointment
Antiseptic wipes
First aid instructions
Tourniquet

Download our first aid checklist and make sure you’re ready for anything you may encounter.

First Aid Checklist PDF

So long as you have a functioning survival kit, you’ll be prepared to give treatment to yourself or others when an emergency situation causes
injury.

© 2021 Uncharted Supply Company



Crash Detection Using the Accelerometer


Joel Ivory Johnson

11 Jan 2021 CPOL 2 min read

In the next entry of the AI Hazard Detection on Android series, we will look at crash detection and notifications.

Here we will make a class that will monitor readings from the accelerometer, and add a feature to lower the chances of an accidental emergency message being sent.

Download source - 53.8 MB

This is the fifth article in a series on making a real-time hazard detector with Android and Tensorflow Lite. In this part, we will add crash detection to the application
and give the application the ability to send notifications to an emergency contact.

Taking advantage of the features of most Android devices, we can add crash detection to the application. Most Android devices have an accelerometer and
GPS. Together, we can use these to detect that a crash has likely occurred and send an emergency message with a location.

To detect crashes, I’ve created a new class named CrashDetector. The class will monitor readings from the accelerometer. When it receives a
reading above a certain level, it is assumed that the vehicle experienced an impact. The value that I’ve selected for indicating an impact here is a best estimate based on
reading about car crashes.
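The check itself reduces to comparing the magnitude of the three accelerometer axes against a cutoff. A rough sketch in plain Java (the class name, method name, and the ~4 g cutoff are illustrative assumptions, not the article's actual CrashDetector constants):

```java
public class ImpactDetector {
    // Hypothetical cutoff: roughly 4 g expressed in m/s^2. The article's
    // real constant is the author's own "best estimate" and is not listed here.
    static final double IMPACT_THRESHOLD_MS2 = 4.0 * 9.81;

    // True when the combined magnitude of the x, y and z accelerometer
    // readings (in m/s^2, as Android reports them) exceeds the cutoff.
    static boolean isImpact(double x, double y, double z) {
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        return magnitude > IMPACT_THRESHOLD_MS2;
    }
}
```

A reading dominated by gravity alone (about 9.81 m/s² on one axis) stays well below such a cutoff, so normal driving does not trip the detector.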

Kotlin

var coolDownExpiry: Long = 0
val COOL_DOWN_TIME = 10000

fun resetCooldown() {
    coolDownExpiry = Date().time + COOL_DOWN_TIME
}

fun hasCooldownExpired(): Boolean {
    val now = Date().time
    return now > coolDownExpiry
}

fun alert(direction: Int) {
    // The when statement filters out invalid values
    // should they be passed to this function.
    if (hasCooldownExpired() && currentSpeedMPS > MIN_ALERT_MPS) {
        when (direction) {
            ALARM_CENTER, ALARM_RIGHT, ALARM_LEFT -> {
                soundPlayer = MediaPlayer.create(context, direction)
                soundPlayer!!.start()
            }
        }
    }
    resetCooldown()
}

There are some other conditions that could result in a sudden high reading from the accelerometer. It is possible that the device was dropped, that the user drove
over a pothole, or the user had a collision for which they do not want to send an alert. To lower the chances of an accidental emergency message being sent, there is
a delay before the message is sent. During this delay, the device shows a prompt allowing the user to either cancel the message or send it out. If nothing is selected
when the dialog expires, it will send out a message to the phone number that the user had selected as their emergency contact. If there is a last-known location for
the driver, it will be sent as a link to Google Maps.

Kotlin

fun sendEmergencyMessage() {
    var msg = crashMessage
    if (this.location != null) {
        msg = msg + " https://www.google.com/maps/@${location!!.latitude},${location!!.longitude},15z"
    }
    val smsManager = SmsManager.getDefault() as SmsManager
    smsManager.sendTextMessage(crashPhoneNumber, null, msg, null, null)
}
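The cancel-or-send window described above can also be modelled independently of the Android dialog APIs: the message goes out only if the countdown expires without the user cancelling. A minimal sketch in plain Java (the class and method names are illustrative, not taken from the article's code):

```java
public class PendingAlert {
    // Deadline after which the emergency message is sent automatically.
    private final long deadlineMillis;
    private boolean cancelled = false;

    public PendingAlert(long deadlineMillis) {
        this.deadlineMillis = deadlineMillis;
    }

    // Called when the user taps "cancel" in the prompt.
    public void cancel() {
        cancelled = true;
    }

    // True once the dialog has expired with no cancellation,
    // i.e. the SMS should now be dispatched.
    public boolean shouldSend(long nowMillis) {
        return !cancelled && nowMillis >= deadlineMillis;
    }
}
```

In the real application the deadline would be driven by the dialog's countdown timer; here it is just a timestamp so the decision logic is visible on its own.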

Now that all of the major functions are built, we can get the last piece in place and have the application use the live video stream instead of the static images. In the
next part of this series, we’ll have the application process live data.


License
This article, along with any associated source code and files, is licensed under The Code Project Open License
(CPOL)

About the Author
Joel Ivory Johnson
Software Developer
United States

I attended Southern Polytechnic State University and earned a Bachelor of Science in Computer Science and later returned to
earn a Master of Science in Software Engineering. I've largely developed solutions that are based on a mix of Microsoft
technologies with open source technologies mixed in. I've got an interest in astronomy and you'll see that interest overflow into
some of my cod...

Article Copyright 2021 by Joel Ivory Johnson
Everything else Copyright © CodeProject, 1999-2021