See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/280936482

A Technique for Measuring Data Persistence Using the Ext4 File System Journal

Conference Paper · July 2015


DOI: 10.1109/COMPSAC.2015.164


All content following this page was uploaded by Kevin D. Fairbanks, PhD on 17 October 2015.



2015 IEEE 39th Annual International Computers, Software & Applications Conference

A Technique for Measuring Data Persistence Using the Ext4 File System Journal

Kevin D. Fairbanks
Electrical and Computer Engineering Department
United States Naval Academy
Annapolis, MD

Abstract—In this paper, we propose a method of measuring data persistence using the Ext4 journal. Digital Forensic tools and techniques are commonly used to extract data from media. A great deal of research has been dedicated to the recovery of deleted data; however, there is a lack of information on quantifying the chance that an investigator will be successful in this endeavor. To that end, we suggest the file system journal be used as a source to gather empirical evidence of data persistence, which can later be used to formulate the probability of recovering deleted data under various conditions. Knowing this probability can help investigators decide where to best invest their resources. We have implemented a proof of concept system that interrogates the Ext4 file system journal and logs relevant data. We then detail how this information can be used to track the reuse of data blocks from the examination of file system metadata structures. This preliminary design contributes a novel method of tracking deleted data persistence that can be used to generate the information necessary to formulate probability models regarding the full and/or partial recovery of deleted data.

Keywords—Ext4; File System Forensics; Digital Forensics; Journal; Data Persistence; Data Recovery; Persistence Measurement

I. INTRODUCTION

The field of Digital Forensics regularly requires the extraction of data from media. Once the data has been extracted, different methods of analyzing the data may be initiated and, in fact, lead to further data extraction. From a general standpoint, the targeted data can be split into two categories: allocated and unallocated. Under normal circumstances, it can be assumed allocated data is always present and persists until it is reclassified as unallocated and/or purposefully overwritten. It is when data is classified as unallocated that the value of its persistence rapidly approaches zero from the standpoint of a normal information system.

In this paper, we define persistence as a property of data that depends on a variety of factors including its allocation status. In the scope of this paper, we focus on measuring the persistence of unallocated data rather than the recovery of data. Depending upon the circumstance, the recovery of unallocated or deleted data can be of the utmost importance. For example, if an employee is suspected of leaking very important documents from a company using digital media such as an external hard disk or thumb drive, finding evidence of the data leakage in the form of file fragments or unused directory entries can help to determine the scope and breadth of their malicious activities. A less security-based example involves the accidental deletion of data by a user. From the user standpoint, the recovery of this data (e.g. digital pictures, term papers, etc.) is critical. From a data extraction standpoint, the complex nature of modern information systems and the many layers of abstraction that exist between an end user and a storage solution make successful recovery of all or part of this data depend on many factors that have varying weights of importance.

Currently, it cannot be stated with mathematical certainty that a deleted file that possesses a specific set of attributes, such as size and elapsed time since deletion, has a certain chance for full or partial recovery. This uncertainty is due to the complex nature of contemporary storage devices and the many layers of abstraction between a user application and the data that is being saved. In [1], Fairbanks and Garfinkel posit that data can experience a decay rate and list many factors that can affect this rate. This paper seeks to extend that premise by proposing a method to observe data decay in the Ext4 file system. The proposed method makes use of the file system journal to determine when blocks of data have been overwritten as an alternative to differential analysis techniques that make use of disk images before and after a set of changes has occurred as mentioned in [12]. Once proper measurements of data decay can be consistently taken, then experiments to gather empirical data can be designed. Thus, we view the development of the proposed method as a foundational step toward solving the larger problem of quantifying the probability of data persistence after it has been deleted.

II. BACKGROUND

A. Layers of Data Abstraction

Fig. 1: Data Abstraction Layers

As noted in [6], when analyzing digital data, there are multiple layers of abstraction to consider. This is depicted in Fig. 1, which focuses on the analysis of persistent data from storage media rather than memory or network sources. The analysis of application data involves looking into targeted files, understanding the formats associated with those files, and how a particular application or set of applications

0730-3157/15, U.S. Government Work Not Protected by U.S. Copyright
DOI 10.1109/COMPSAC.2015.164
interprets and manipulates those files. This includes the analysis of JPEG, MP3, and even Sqlite3 files. While application data is usually what end users interact with the most, the applications typically rely on a file system for data storage. File system analysis makes use of the data structures that allow applications to manipulate files. Although the file system layer is primarily concerned with the storage and retrieval of application data, in the process a great deal of metadata is created about the application data, and even the file system itself, that can be used for data recovery and analysis.

File systems typically reside in one or more volumes. Volumes are used to organize physical media. They may be used to partition physical media into several areas or combine several physical media devices into a single or multiple logical volumes. Examples of volume organization include RAID and LVM. Analysis at the physical layer often involves the interpretation of bytes, sectors, and/or pages depending upon the physical medium. Each layer in this model affects the persistence of data. For example, a word processing application may be used to create and modify a file. From the perspective of the file system, each time a modification takes place a new updated version of the file may be created and the old one deleted. This process generally leaves the old deleted data on the physical media until it is overwritten. Beneath the file system, the file data could be striped or duplicated across multiple volumes. Also, one or more volumes may reside on physical media that inherently makes duplicate copies of data to address error or lengthen the lifetime of the media device by distributing usage across the entire device.

The preceding example illustrates the complex nature of data storage and the challenge of measuring data persistence in modern information systems. While the data is not truly irrecoverable until it has been overwritten on the physical media, analysis at that level should not be considered trivial and requires specialized equipment in certain situations. Our proposed approach works at the file system level of abstraction by observing one of its crash recovery mechanisms, the file system journal, and using it as a vector to detect when data has been potentially overwritten.

B. The Ext4 File System

The features and data structures of the Ext4 file system and its predecessors are detailed in [2], [3], and [4]. The purpose of this section is to provide a high-level overview that facilitates comprehension of the forthcoming results.

The Ext family of file systems (Ext2, Ext3, and Ext4) generally divides a partition into evenly sized block groups with the potential exception of the last block group. While each block group has a bitmap for data blocks and inodes to denote their respective allocation status, only a fraction of the block groups contain backup copies of the file system super block and group descriptors.

Fig. 2: File System Layout¹

The file system super block contains metadata about the overall file system such as the amount of free blocks and inodes, the block size, and the number of blocks and inodes per block group. Each block group has a descriptor that contains information like the allocation status of data blocks and inodes as well as the offset of important blocks in the block group. The inode table is where the inode data structures are actually stored. An inode is a file system data structure that contains file metadata such as timestamps and most importantly the location of data blocks associated with a file. Ext2 and Ext3 use a system of direct and indirect block pointers to provide a mapping of the data blocks, while Ext4 employs extents for this purpose. File names are contained in directories, which are a special type of file. Each filename is associated with an inode number through the use of a directory entry structure. This arrangement creates the ability to associate a single inode with more than one filename. For convenience, Table 2 in the Appendix section describes the data structure of a directory entry.

C. File System Journals

Many current operating systems make use of journaling file systems. This includes Microsoft Windows' use of NTFS [5]; Apple's OSX use of HFS+; and many Linux distributions' use of Ext3, Ext4, and/or XFS. File system journals are normally used in situations where a file system may have been unmounted uncleanly, leaving it in an inconsistent state. Events such as power failures, operating system crashes, and the removal of the volume containing the file system before all data has been flushed to the volume and it can be safely removed can lead to the inconsistent state. In these situations, the file system journal can be replayed to bring the file system back to a consistent state in less time than it would take to perform a full file system check. Generally speaking, the use of a journal does not guarantee that all user data can be recovered after a system failure. As its main purpose is to restore consistency, it just ensures that a file system transaction, such as a set of write operations, has taken place fully or not at all. This property is referred to as atomicity. Although our proposed technique focuses on the Ext4 file system due to its open source nature, in theory it can be generalized and applied to several other file systems.

In [2], Ext4 file system structures are examined from a Digital Forensic perspective and comparisons are drawn between Ext4 and its predecessor Ext3. Although the overall structure of the file system is important, we shall focus on the Ext4 journal.

¹ Appears in [2] and is adapted from [4]
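The super block metadata described above can be inspected with a few lines of Python. The following is a minimal sketch, not part of the paper's tooling, assuming a buffer that starts at the super block itself (which resides 1024 bytes into the volume); the field offsets follow the Ext2/3/4 on-disk layout, in which all integers are little-endian:

```python
import struct

EXT4_SUPER_MAGIC = 0xEF53  # magic value at byte offset 0x38 of the super block

def parse_superblock(raw: bytes) -> dict:
    """Parse a few fields of an Ext2/3/4 super block.

    `raw` is assumed to start at the super block, not at the volume.
    """
    inodes_count, blocks_count = struct.unpack_from("<II", raw, 0x00)
    (log_block_size,) = struct.unpack_from("<I", raw, 0x18)
    (blocks_per_group,) = struct.unpack_from("<I", raw, 0x20)
    (inodes_per_group,) = struct.unpack_from("<I", raw, 0x28)
    (magic,) = struct.unpack_from("<H", raw, 0x38)
    if magic != EXT4_SUPER_MAGIC:
        raise ValueError("not an ext2/3/4 super block")
    return {
        "inodes_count": inodes_count,
        "blocks_count": blocks_count,
        "block_size": 1024 << log_block_size,  # e.g. 0 -> 1 KiB, 2 -> 4 KiB
        "blocks_per_group": blocks_per_group,
        "inodes_per_group": inodes_per_group,
    }
```

The block size recovered here is what converts the block numbers logged by the journal into byte offsets on the volume.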

D. The Ext4 Journal

Fig. 3: Journal Blocks

Fig. 3, taken from [2], summarizes the major block types contained in the Ext4 file system journal. The journal is a fixed-size reserved area of the disk. Although it does not have to reside on the same device as the file system being journaled, it commonly does. Also, because space for the journal is usually allocated when the file system is created, its blocks are normally contiguous. The Ext4 journal operates in a circular fashion. Thus when the end of the journal area has been reached, new transactions committed to the journal overwrite the data at the beginning of the journal area. The journal uses a Journal Super Block to indicate the start of the journal.

The Ext4 journaling mechanism, the Journal Block Device 2 (JBD2), is a direct extension of the Ext3 JBD [3]. Like the original JBD, JBD2 is not directly tied to Ext4. The fact that other file systems can make use of it as a journaling mechanism is important to note, as this explains why JBD2 is not file system aware. When JBD2 is given a block of data that is to be journaled, it does not know if that block contains actual data or metadata about a set of files or the file system. The only information that it has, and really needs for recovery situations, is the file system block number to which the journaled block corresponds.

Ext4 groups sets of write operations to the file system into transactions. Each transaction is then passed to JBD2 to be recorded. Every transaction has a unique sequence number that is used in descriptor, commit, and revoke blocks. The descriptor block marks the beginning of a transaction and contains a list of the file system blocks that are being updated in the transaction. The commit block is used to mark the end of a transaction; it indicates that the file system area of the disk has been successfully updated. If a transaction has not been completed and a system crash occurs, a revoke block is created. Revoke blocks nullify all uncommitted transactions with a sequence number less than them. In Fig. 3, transaction 55 has fully completed. If a system crash occurs before the metadata can be written to the file system area of the volume, the metadata from the journal will be copied to the correct file system blocks when the journal is replayed. However, transaction 56 is not committed. When the journal is replayed, the revoke block with sequence number 57 will nullify this transaction.

As noted in [3], the Ext4 journal was modified from the Ext3 journal to support both 32-bit and 64-bit file systems. Also, to increase reliability, checksumming was added to the journal due to a combination of the importance of the metadata that is contained within it as well as the frequency with which the journal is accessed and written. If this feature is enabled, a checksum is included in the transaction commit block. The addition of the transaction checksums makes it possible to detect when blocks are not written to the journal. A benefit to this approach is that while the original JBD used a two-stage commit process, JBD2 can write a complete transaction at once.

Like its predecessor Ext3, Ext4 can use one of three modes of journaling: Journal, Ordered, and Writeback. Journal mode is the safest mode. It writes all file data, metadata, and file system metadata to the journal area first. The data is then copied to the actual file system blocks in the target volume. Thus the safety of Journal mode compromises performance, as every write to the file system will cause two writes. Both Ordered and Writeback mode only write file and file system metadata to the journal area. The major difference between the two modes is that Ordered mode ensures that data is first written to the file system area before updating the file system journal transaction as complete. In Writeback mode, file system area writes and journal area writes can be interspersed. Writeback mode maximizes performance, while Ordered mode minimizes the risk of file system corruption when only journaling metadata. Many Linux distributions operate in Ordered Mode by default.

III. HYPOTHESIS

We propose continuously monitoring the Ext4 journal as a method to measure the persistence of data. In particular, we suggest monitoring journal descriptor blocks as they contain the file system block numbers of the data being written to the journal. Depending on the mode in which the journal is operating, the file system blocks that are recorded in the journal can be analyzed to reveal further information about data persistence in the Ext4 file system.

Fig. 4: Journal Blocks
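The journal block types summarized above can be recognized programmatically by a common 12-byte header carrying a magic number, the block type, and the transaction sequence number, all big-endian on disk. The following minimal sketch is illustrative rather than part of the paper's tooling; the magic value and header layout reflect the JBD2 on-disk format as we understand it:

```python
import struct

JBD2_MAGIC = 0xC03B3998  # first four bytes of every journal metadata block

# h_blocktype values used by JBD2
BLOCK_TYPES = {
    1: "descriptor",    # lists the file system blocks updated by a transaction
    2: "commit",        # marks the transaction as fully recorded
    3: "superblock_v1",
    4: "superblock_v2",
    5: "revoke",        # nullifies earlier, uncommitted journal entries
}

def parse_journal_header(raw: bytes):
    """Return (block_type_name, sequence) for a JBD2 metadata block.

    The 12-byte header (magic, block type, sequence number) is big-endian.
    Returns None for blocks that carry journaled file/metadata payload
    rather than a journal header.
    """
    magic, blocktype, sequence = struct.unpack_from(">III", raw, 0)
    if magic != JBD2_MAGIC:
        return None  # a journaled data block, not journal metadata
    return BLOCK_TYPES.get(blocktype, "unknown"), sequence
```

Matching descriptor (type 1) and commit (type 2) headers with the same sequence number identifies one completed transaction, exactly the pairing relied upon in the measurements below.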

To test this hypothesis, a series of python scripts has been implemented that makes use of the Ext2-4 debugging utilities. The scripts use both the dump and logdump commands of the debugfs program to regularly gather the contents of the file system journal. The contents are then parsed and inserted into a Sqlite3 database, which can be queried offline to gather persistence measurements. Sample output from this system is displayed using an Sqlite3 database browser in Fig. 4. As our goal is to test the suitability of the Ext4 journal as a mechanism to measure data persistence, all blocks written to the journal were recorded for analysis.

As the mechanism to gather data from the journal runs continuously, the circular nature of the journal does not inhibit the ability to collect persistence measurements. As summarized in Fig. 4, each JBD2 descriptor and commit block pair (denoted by block types of 1 and 2 respectively) contains increasing matching transaction sequence number timestamps. Fig. 4 also denotes the journal and file system blocks used by a transaction. While the journal blocks will eventually be reused, the transaction sequence numbers and commit times are sufficient to ensure proper order when tracking file system block reuse.

IV. EXPERIMENTAL PROCEDURE

A. Test Environment Setup

The experiments used for this preliminary research were conducted using an Ubuntu 14.04.1 LTS Linux environment. The 3.8.0 version of the Linux kernel was used throughout the experiments. The 1.42.9 version of e2fsprogs (the Ext2, Ext3, and Ext4 file system utilities) was used to confirm our analysis of the Ext4 journal. All data gathered from the journal was also verified using a hex editor to ensure the proper functioning of the data collection scripts.

In order to obtain consistent results, a 10 GB file was created using the dd command. This was formatted using the mkfs.ext4 tool with all of its default behaviors and then mounted as a loop device. It is believed that this setup is ideal for determining the validity of a persistence measurement technique. If the file system volume on which the operating system and the majority of the binary application files reside is used as a measurement target, then the journal shall capture the effects of various logging and background mechanisms executing on the measurement target. Although it is important to understand the effect normal operating system behavior will have on data persistence, this should be decoupled from the development and testing of persistence measurement techniques as much as possible. This approach allows researchers to separate measurement technique limitations and idiosyncrasies from artifacts produced by a particular operating system and its associated applications.

B. Ordered Mode Experiment

This test of the proposed measurement strategy consisted of several simple operations. First a file, Test.txt, was created using the vim text editor and saved. Next, text was added to the file and it was saved again. The cp command was then used to make a duplicate of the file, Test2.txt. Finally the original file was deleted using the rm command. Throughout this series of operations the file system journal was active and operating in Ordered Mode. From the perspective of usable data logged to the journal, Ordered and Writeback are expected to behave similarly; therefore, a separate experiment using Writeback Mode was not conducted.

C. Journal Mode Test

The major focus of this research is the study of data persistence. In this context, it is reasonable to sacrifice file system performance in order to gain insight into the potential to recover deleted data. With this in mind, the loop device was unmounted and the journal was set to operate in Journal Mode. The procedure from the Ordered Mode Experiment was repeated, only altering the name of Test.txt to Test3.txt and Test2.txt to Test4.txt.

V. RESULTS & ANALYSIS

A. Ordered Mode Results

TABLE 1: BLOCK DESCRIPTIONS

File System Block Number    Description
673                         Group 0 Inode Table
657                         Group 0 Inode Bitmap
1                           Group Descriptors
8865                        1st block of root directory
0                           File System Super Block
642                         Group 1 Bitmap

A review of the logged journal data, conducted after the Ordered Mode experiment, revealed several blocks being modified during this process. They are summarized in Table 1 along with a description of their contents. Each block was recorded entirely in the file system journal and is consequently accessible for the study of data persistence.

The inode and data block bitmaps, blocks 657 and 642 respectively, can be used to determine changes in the allocation status of the respective structures they denote over time. In Fig. 4, it can be seen that block 657 is updated in several transactions. From block 673, entire inodes can be extracted. It has been noted in [2] that Ext4 inodes make use of both inode-resident extents as well as extent trees when necessary. Furthermore, it was demonstrated that when extent trees are created, the blocks that make up the tree would be recorded in the file system journal. Therefore, it is entirely possible to determine the file system data blocks that are modified by analyzing journal transactions and extracting the necessary information from the inode structure. An example of this is displayed in Fig. 5 where the file system block number of the data is highlighted on the line beginning at offset 0x0045B20.

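The collection pipeline described in Section III, debugfs logdump output parsed into a Sqlite3 database for offline queries, might be sketched as follows. The line patterns parsed here are assumptions about logdump's output, which varies across e2fsprogs versions, and the table schema is illustrative rather than the paper's actual design:

```python
import re
import sqlite3
import subprocess

# Assumed logdump line formats; adjust the patterns for your e2fsprogs version.
SEQ_RE = re.compile(r"transaction (\d+)")
FS_BLOCK_RE = re.compile(r"FS block (\d+) logged at journal block (\d+)")

SCHEMA = """CREATE TABLE IF NOT EXISTS journal_blocks
            (sequence INTEGER, fs_block INTEGER, journal_block INTEGER)"""

def parse_logdump(text):
    """Yield (sequence, fs_block, journal_block) tuples from logdump output."""
    sequence = None
    for line in text.splitlines():
        m = SEQ_RE.search(line)
        if m:
            sequence = int(m.group(1))
        m = FS_BLOCK_RE.search(line)
        if m and sequence is not None:
            yield sequence, int(m.group(1)), int(m.group(2))

def log_journal(image_path, db_path):
    """One collection pass: dump the journal of an Ext4 image with debugfs
    and record every file system block seen in a journal transaction."""
    out = subprocess.run(["debugfs", "-R", "logdump -a", image_path],
                         capture_output=True, text=True, check=True).stdout
    con = sqlite3.connect(db_path)
    con.execute(SCHEMA)
    con.executemany("INSERT INTO journal_blocks VALUES (?, ?, ?)",
                    parse_logdump(out))
    con.commit()
    con.close()

def block_history(db_path, fs_block):
    """Transaction sequence numbers in which a file system block was logged."""
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT DISTINCT sequence FROM journal_blocks "
                       "WHERE fs_block = ? ORDER BY sequence",
                       (fs_block,)).fetchall()
    con.close()
    return [seq for (seq,) in rows]
```

A block that appears under two or more transaction sequence numbers has been rewritten, which is exactly the reuse signal the proposed measurement technique looks for.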
Fig. 5: Inode Structure Retrieved from Journal

During the experiment, the ls -l command was used to retrieve the inode numbers associated with the Test.txt and Test2.txt files. It was observed that the Test.txt file was initially associated with inode 12 after creation. After the file was manipulated, the inode was changed to 14. The creation of the Test2.txt file via the cp command then associated it with inode 12.

(a) Before Test.txt is saved

(b) After Test2.txt is created

(c) After deletion of Test.txt

Fig. 6: Directory Changes

In this experiment, different versions of block 8865 were revealed to contain a wealth of information, including the names and inode numbers of temporary files created by the text editor. This is captured in Fig. 6. Table 2 is provided to aid in the analysis of Fig. 6. Fig. 6-a displays the data from the directory file while Test.txt was being edited with vim. In this subfigure it can be seen that .Test.txt.swp points to inode 14 while .Test.txt.swx, which is no longer valid due to the record length of the previous entry, pointed to inode 13.

Fig. 6-b shows the state of the directory after the data has been added to Test.txt and Test2.txt has been created by using cp. This subfigure shows that Test2.txt is now associated with inode 12 while Test.txt is associated with inode 14. It is worth noting that while "Test2.txt.swp" appears to be a filename, the name length field at offset 0x0062 (0x09) associated with the directory entry causes the ".swp" portion to be invalid. By looking past the last valid entry in this directory it can be seen that the file Test.txt~ was at one point associated with inode 12. The ".swx" which follows the filename entry is residue from an earlier version of the directory data block.

Finally, Fig. 6-c exhibits the final state of the block after Test.txt has been deleted. The major difference between Fig. 6-c and Fig. 6-b is that the record length field of the Test2.txt file entry at offset 0x0060 has been changed to 0x0fd4 (4052) to reflect that any entry after it is invalid.

B. Journal Mode Results

The results of the Journal Mode Experiment closely mirrored those of the Ordered Mode Experiment. The primary difference was the expected inclusion of file data blocks in the journal, enabling greater detail to be gained from the journal monitoring technique directly. As none of the data blocks were reused, we were able to handily retrieve the data from the file system area and verify that it matched what was committed in the journal. While this is not novel from a data recovery standpoint, from a persistence measurement viewpoint this could be very important. Since newer versions of the data blocks are recoverable from the journal, it may be possible to determine the persistence of data when blocks are partially overwritten. Thus the mode of journaling will yield greater resolution when measuring data persistence.

VI. RELATED WORK

Using the file system journal to gather data is not unique from a Digital Forensics perspective, as noted in [7] and [8]. In [9], a method of continuously monitoring the journal to detect the malicious modification of timestamps is examined. In the context of computer security, persistence has been studied using virtualization technology, such as in [10]. Due to the nature of memory, understanding how data persists is important to memory forensics and has resulted in research such as that conducted in [11]. For detecting changes after a set of events has occurred, differential analysis has been employed in various forms [12]. However, this type of analysis typically makes use of multiple disk images or virtual machine snapshots. To our knowledge, using the file system journal as a vector to measure data persistence is a novel approach.

VII. LIMITATIONS

The proposed method to measure data persistence is reliant on the file system journal. As such, it cannot reliably determine if data persists on the physical media (at a lower level of abstraction) due to operations such as wear leveling. This method of monitoring is also ineffective on file systems that do not employ a journal, such as FAT. Ext4 does not flush data to the file system immediately when a write operation is performed. Instead it buffers data for a period to permit the block allocator time to find contiguous blocks. A tradeoff is that files with a short lifetime may not be written to the file system at all. In these situations, that data may not appear in the file system journal. As this data is ephemeral in nature, its impact on persistence may be minimal. This warrants further study.

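The directory entry format given in Table 2 of the Appendix is simple enough to walk directly. The sketch below is illustrative and not part of the paper's tooling (the helper and sample values are hypothetical); it also demonstrates how a deleted entry is hidden by enlarging the preceding entry's record length, as observed for the Test2.txt entry at offset 0x0060:

```python
import struct

def walk_dir_entries(block: bytes):
    """Yield (offset, inode, rec_len, name) for each live directory entry
    in an ext4 directory data block, per Table 2: inode (32-bit), record
    length (16-bit), name length (8-bit), file type (8-bit), then the
    name bytes. All integer fields are little-endian."""
    off = 0
    while off + 8 <= len(block):
        inode, rec_len, name_len, _ftype = struct.unpack_from("<IHBB", block, off)
        if rec_len < 8:
            break  # malformed entry; stop rather than loop forever
        name = block[off + 8 : off + 8 + name_len].decode("ascii", "replace")
        yield off, inode, rec_len, name
        off += rec_len

def make_entry(inode: int, name: bytes, rec_len: int) -> bytes:
    """Build a synthetic entry for experimentation (file type 1 = regular)."""
    return (struct.pack("<IHBB", inode, rec_len, len(name), 1)
            + name + b"\x00" * (rec_len - 8 - len(name)))
```

Deleting a file does not erase its entry; the previous entry's record length is simply enlarged to skip it, so re-walking the block from the deleted entry's old offset recovers the residue, much as Test.txt~ was recovered from the slack of block 8865.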
VIII. CONCLUSIONS AND FUTURE WORK

We have proposed a method to measure data persistence by continuously monitoring the file system journal and demonstrated that, depending upon the method of journaling that is employed, the level of detail that can be readily gathered varies. The technique employed takes advantage of the primary purpose of the file system journal as a recovery mechanism and extracts key information, such as the list of file system blocks that have been updated, from the journal descriptor blocks. This method is proposed as an alternative to using differential analysis techniques that detect file system block changes between different disk images after a set of operations has been performed. While the monitoring technique will impose overhead, it collects data on incremental file system changes that may be lost when performing disk image differencing.

Another tradeoff is that the proposed method is influenced by the method of journaling employed by the target file system. If the journal is operating in Ordered or Writeback mode, this technique will be able to track inode and block allocation changes handily. Furthermore, it can be used to directly track the persistence of data in directory files. The limitation of using the journal as a mechanism to track persistence in these modes of operation is that the content of a normal file system block cannot be viewed. This creates a limit on the resolution of persistence tracking, making the size of a file system block the smallest unit of measurement. If the file system journal is operating in Journal Mode, where all file data is recorded to the journal before being written to the file system area, then all of the benefits of Ordered Mode are gained as well as insight into file content persistence. This will allow research to be conducted into instances where a file system block is only partially overwritten. As our long-term goal is to measure persistence under varying circumstances in order to generate data that will aid in the formation of probability models, we will have full control over the method of journaling employed in future experiments.

The proposed method of gathering evidence of data persistence captures many of the changes to the file system in an incremental manner. It also addresses issues of missing data that would arise from the circular nature of the journal. The tradeoff is that the journal must be constantly monitored. Other methods of monitoring system changes may be more efficient or could be used in conjunction with the proposed method. Future work includes studying these techniques and performing a comparative analysis of the different ways in which persistence can be measured at different layers of abstraction. Finally, when measurements of data persistence are taken, research must be done in order to understand the best way to quantify persistent data and the rate at which unallocated data is overwritten. For example, if a file system is quiescent when it is not being actively written to by an application, data persistence most likely depends upon the number of user-initiated operations since file deletion. However, if in the absence of user activity the file system, operating system, or media device begins to perform background activity, such as defragmentation, then the time since file deletion may be the most important factor.

APPENDIX

TABLE 2: DIRECTORY ENTRY FORMAT²

Size     Description
32b      Inode Number
16b      Record Length
8b       Name Length
8b       File Type
Varies   File Name

² All multiple byte values are little endian encoded

ACKNOWLEDGEMENT

This work was supported, at least in part, as a Naval Academy Research Council project.

REFERENCES

[1] K. Fairbanks and S. Garfinkel, "Column: Factors Affecting Data Decay," Journal of Digital Forensics, Security and Law, p. 7, 2012.
[2] K. Fairbanks, "An analysis of Ext4 for digital forensics," Digital Investigation, vol. 9, pp. S118–S130, Aug. 2012.
[3] A. Mathur, M. Cao, S. Bhattacharya, A. Dilger, A. Tomas, and L. Vivier, "The new ext4 filesystem: current status and future plans," in Proceedings of the Linux Symposium, 2007, vol. 2, pp. 21–33.
[4] D. Bovet and M. Cesati, Understanding the Linux Kernel, 3rd ed. Sebastopol, CA: O'Reilly Media, Inc., 2005.
[5] "NTFS Technical Reference," March 28, 2003. Available online: http://technet.microsoft.com/en-us/library/cc758691(v=ws.10).aspx
[6] B. Carrier, File System Forensic Analysis. Addison-Wesley, 2005.
[7] C. Swenson, R. Phillips, and S. Shenoi, "File system journal forensics," in Advances in Digital Forensics III, IFIP International Federation for Information Processing, vol. 242. Boston: Springer, 2007, pp. 231–244.
[8] K. Eckstein, "Forensics for advanced UNIX file systems," in Proceedings of the Fifth Annual IEEE SMC Information Assurance Workshop, June 2004, pp. 377–385, doi: 10.1109/IAW.2004.1437842.
[9] K. D. Fairbanks, C. P. Lee, Y. H. Xia, and H. L. Owen, "TimeKeeper: A metadata archiving method for honeypot forensics," in IEEE SMC Information Assurance and Security Workshop (IAW '07), June 2007, pp. 114–118.
[10] J. Chow, B. Pfaff, T. Garfinkel, and M. Rosenblum, "Shredding your garbage: reducing data lifetime through secure deallocation," in Proceedings of the 14th USENIX Security Symposium, Berkeley, CA, USA, 2005.
[11] A. Schuster, "The impact of Microsoft Windows pool allocation strategies on memory forensics," Digital Forensics Research Workshop, 2008.
[12] S. Garfinkel, A. Nelson, and J. Young, "A general strategy for differential forensic analysis," Digital Investigation, vol. 9, pp. S50–S59, Aug. 2012.
