Digital Forensics
UNIT I:
Digital Forensics- Introduction, Objective and Methodology, Rules of Digital Forensics; Good Forensic
Practices, Daubert's Standards, Principles of Digital Evidence. Overview of types of Computer Forensics –
Network Forensics, Mobile Forensics, Social Media Forensics and E-mail Forensics. Services offered by
Digital Forensics. First Responder – Role, Toolkit and Do's and Don'ts
UNIT II:
Introduction to Cyber Crime Investigation, Procedure for Search and seizure of digital evidences in cyber
crime incident- Forensics Investigation Process- Pre-search consideration, Acquisition, Duplication &
Preservation of evidences, Examination and Analysis of evidences, Storing of Evidences, Documentation
and Reporting, Maintaining the Chain of Custody.
UNIT III:
Data Acquisition of live systems, Shutdown Systems and Remote systems, servers. E-mail Investigations,
Password Cracking. Seizing and preserving mobile devices. Methods of data acquisition of evidence from
mobile devices. Data Acquisition and Evidence Gathering from Social Media. Performing Data Acquisition
of encrypted systems. Challenges and issues in cyber crime investigation.
UNIT IV:
Search and Seizure of Volatile and Non-volatile Digital Evidence, Imaging and Hashing of Digital
Evidences, Introduction to Deleted File Recovery, Steganography and Steganalysis, Data Recovery Tools
and Procedures, Duplication and Preservation of Digital Evidences, Recover Internet Usage Data, Recover
Swap files/Temporary Files/Cache Files. Software and Hardware tools used in cyber- crime investigation
– Open Source and Proprietary tools. Importance of Log Analysis in forensic analysis. Understanding
Storage Formats for Digital Evidences – Raw Format, Proprietary Formats, Advanced Forensic Formats.
UNIT V:
Windows Systems Artifacts: File Systems, Registry, Event logs, Shortcut files, Executables. Alternate Data
Streams (ADS), Hidden files, Slack Space, Disk Encryption, Windows registry, startup tasks, jump-lists,
Volume Shadow, shell-bags, LNK files, Recycle Bin Forensics (INFO2, $I, $R files). Forensic Analysis of the
Registry – Use of registry viewers, Regedit. Extracting USB related artifacts and examination of protected
storage. Linux System Artifact: Ownership and Permissions, Hidden files, User Accounts and Logs.
TEXTBOOK:
1. Nina Godbole and Sunit Belapure; "Cyber Security: Understanding Cyber Crimes, Computer Forensics
and Legal Perspectives", Wiley Publications, 2011.
2. Bill Nelson, Amelia Phillips and Christopher Steuart; "Guide to Computer Forensics and Investigations",
3rd Edition, Cengage, 2010.
3. Shon Harris; "CISSP All-in-One Exam Guide, Sixth Edition", McGraw-Hill, 2013.
REFERENCE BOOKS:
1. LNJN National Institute of Criminology and Forensic Science, "A Forensic Guide for Crime
Investigators – Standard Operating Procedures", LNJN NICFS, 2016.
2. Anthony Reyes, Jack Wiles; "The Best Damn Cybercrime and Digital Forensics Book", Syngress, USA,
2007.
3. Cory Altheide and Harlan Carvey; "Digital Forensics with Open Source Tools", Syngress Publication.
___________________________________________________________________________________________________
Objective :
While performing an investigation of a reported cyber crime, the forensic expert has to perform various
tasks, categorized under the following phases:
Evaluation: At this stage, the computer forensics team receives their instructions about the Cyber-attack
they are going to investigate. This involves the following:
The allocation/assignment of roles and resources which will be devoted throughout the course of the
entire investigation;
Any known facts, details, or particulars about the Cyber-attack which has just transpired; The
identification of any known risks during the course of the investigation.
Collection: This component is divided into two distinct sub phases:
Acquisition: This involves the actual collection of the evidence and the latent data from the
computer systems and any other parts of the business or corporation which may have also been impacted by
the Cyber attack. Obviously, there are many tools and techniques which can be used to collect this
information, but at a very high level, this sub phase typically involves the identification and securing of
the infected devices, as well as conducting any necessary, face to face interviews with the IT staff of the
targeted entity. Typically, this sub phase is conducted on site.
Collection: This is the part where the actual physical evidence and any storage devices which are
used to capture the latent data are labeled and sealed in tamper resistant bags. These are then
transported to the forensics laboratory where they will be examined in much greater detail. As described
before, the chain of custody starts to become a critical component at this stage.
Analysis: This part of the computer forensics investigation is just as important as the previous step. It is
here where all of the collected evidence and the latent data are researched in excruciating detail to
determine how and where the Cyber attack originated from, who the perpetrators are, and how this
type of incident can be prevented from entering the defense perimeters of the business or corporation in
the future. Once again, there are many tools and techniques which can be used at this phase, but the
analysis must meet the following criteria:
It must be accurate;
Every step must be documented and recorded;
It must be unbiased and impartial;
As far as possible, it must be completed within the anticipated time frames and with the resources
which have been allocated to accomplish the various analysis functions and tasks;
The tools and the techniques which were used to conduct the actual analyses must be justifiable by the
forensics team.
Presentation: Once the analyses have been completed, a summary of the findings is then presented
to the IT staff of the entity which was impacted by the Cyber-attack. Probably one of the most important
components of this particular document is the recommendations and strategies which should be
undertaken to mitigate any future risks from potential Cyber-attacks. Also, a separate document is
composed which presents these same findings to a court of law in which the forensic evidence is being
presented.
RULES OF EVIDENCE :
1. Admissible – Evidence must be able to be used in court.
2. Authentic – Evidence must be shown to be genuine and tied to the incident.
3. Complete – Evidence must tell the whole story, not just one perspective.
4. Reliable – The way the evidence was collected and handled must not cast doubt on its authenticity.
5. Believable – Evidence must be clear and understandable to a judge and jury.
An accepted best practice in digital evidence collection, modified to incorporate live volatile
data collection:
1. Photograph the computer and scene
2. If the computer is off do not turn it on
3. If the computer is on photograph the screen
4. Collect live data - start with RAM image (Live Response locally or remotely via F-Response) and then
collect other live data "as required" such as network connection state, logged on users, currently
executing processes etc.
5. If hard disk encryption is detected (using a tool like Zero-View), e.g. full disk encryption such as PGP
Disk, collect a "logical image" of the hard disk using dd.exe or Helix, locally or remotely via F-Response
6. Unplug the power cord from the back of the tower - If the computer is a laptop and does not shut
down when the cord is removed then remove the battery
7. Diagram and label all cords
8. Document all device model numbers and serial numbers
9. Disconnect all cords and devices
10. Check for HPA, then image hard drives using a write blocker, Helix or a hardware imager
11. Package all components (using anti-static evidence bags)
12. Seize all additional storage media (create respective images and place original devices in anti-static
evidence bags)
13. Keep all media away from magnets, radio transmitters and other potentially damaging elements
14. Collect instruction manuals, documentation and notes
15. Document all steps used in the seizure
Daubert’s Standards :
The Daubert standard consists of what is known as the “Daubert trilogy”. This trilogy consists of the
following three cases:
Daubert v. Merrell Dow Pharm., Inc.
509 U.S. 579 (1993).
establishes reliability test; rejects Frye general acceptance test.
General Elec. Co. v. Joiner
522 U.S. 136 (1997).
appellate review of Daubert issues: abuse of discretion.
Kumho Tire Co. v. Carmichael
526 U.S. 137 (1999).
Daubert applies to “technical” evidence (i.e., all experts)
In addition to the Daubert trilogy, the judge is required to consider a range of factors under the
Daubert standard. These factors include:
1. whether the theory or technique in question can be tested
2. whether it has been tested
3. whether it has been subjected to peer review and publication
4. its known or potential error rate
5. the existence and maintenance of standards controlling its operation, and
6. whether it has attracted widespread acceptance within a relevant scientific community (Heilbrun et
al., 2009, p. 47)
The Daubert standard has influenced forensic assessment in several ways. Most notably, it is
important in relation to forensic assessments because it is the standard that allows most forensic
mental health assessments to be admissible at trial when they might not have been allowed under the
Frye standard.
Services offered by Digital Forensics include:
1. E-Discovery
2. Web Tracking
3. Data Recovery
Digital Forensics Tools:
1. Open Computer Forensics Architecture
2. Caine
3. X-Ways Forensics
4. EnCase
5. Registry Recon
6. Volatility and many more…
These tools can be further classified into:
1. Disk and Data Capture Tools
2. Database Forensics Tools
3. File Viewers
4. Network Forensics Tools
5. File Analysis Tools
6. MacOS Analysis Tools
7. Internet Analysis Tools
8. Mobile Devices Analysis Tools
9. Email Analysis Tools
10. Registry Analysis Tools
Definitions (from the Information Technology Act, 2000):
(i) "computer contaminant" means any set of computer instructions that are designed (a) to modify,
destroy, record, transmit data or program residing within a computer, computer system or computer
network; or (b) by any means to usurp the normal operation of the computer, computer system, or computer
network;
(ii) "computer data-base" means a representation of information, knowledge, facts, concepts or
instructions in text, image, audio, video that are being prepared or have been prepared in a formalised
manner or have been produced by a computer, computer system or computer network and are intended
for use in a computer, computer system or computer network;
(iii) "computer virus" means any computer instruction, information, data or program that destroys,
damages, degrades or adversely affects the performance of a computer resource or attaches itself to
another computer resource and operates when a program, data or instruction is executed or some other
event takes place in that computer resource;
(iv) "damage" means to destroy, alter, delete, add, modify or rearrange any computer resource by any
means;
(v) "computer source code" means the listing of programs, computer commands, design and layout and
program analysis of computer resource in any form.
Investigation:
WHAT IS INCIDENT RESPONSE?
Incident response is a coordinated and structured approach to go from incident detection to
resolution. Incident response may include activities that:
1. Confirm whether or not an incident occurred
2. Provide rapid detection and containment
3. Determine and document the scope of the incident
4. Prevent a disjointed, noncohesive response
5. Determine and promote facts and actual information
6. Minimize disruption to business and network operations
7. Minimize the damage to the compromised organization
8. Restore normal operations
9. Manage the public perception of the incident
10. Allow for criminal or civil actions against perpetrators
11. Educate senior management
12. Enhance the security posture of a compromised entity against future incidents
METHODS OF COLLECTION :
1. Freezing the Scene – Taking a snapshot of the system and its compromised state. Recover data, extract
information and analyze it.
2. Honeypotting – Create a replica system and attract the attacker for further monitoring.
STEPS TO COLLECTION :
1. Find the evidence and where it is stored.
2. Find relevant data using appropriate recovery methods and tools.
3. Create an order of volatility.
4. Remove external avenues of change and do not tamper with the device.
5. Collect evidence using various tools.
6. Maintain good documentation of everything collected.
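The "order of volatility" in step 3 can be sketched in code. The ranking below loosely follows the widely cited RFC 3227 guideline (collect the most volatile sources first); the exact items and labels are illustrative assumptions:

```python
# Sketch: rank evidence sources by volatility, most volatile first.
# The ranking loosely follows RFC 3227; items are illustrative.
ORDER_OF_VOLATILITY = [
    "cpu registers and cache",
    "ram (running processes, network connections)",
    "temporary file systems / swap",
    "hard disk",
    "remote logs and monitoring data",
    "archival media (backups)",
]

def collection_order(sources):
    """Sort the sources to collect so the most volatile come first."""
    rank = {name: i for i, name in enumerate(ORDER_OF_VOLATILITY)}
    # Unknown sources sort last rather than raising an error.
    return sorted(sources, key=lambda s: rank.get(s, len(ORDER_OF_VOLATILITY)))

print(collection_order(["hard disk", "ram (running processes, network connections)"]))
```

Planning collection this way ensures RAM and other transient state are captured before the machine is powered down.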
How Are Digital Devices Collected?
On the scene: As anyone who has dropped a cell phone in a lake or had their computer damaged
in a move or a thunderstorm knows, digitally stored information is very sensitive and easily lost. There
are general best practices, developed by organizations like SWGDE and NIJ, to properly seize devices and
computers. Once the scene has been secured and legal authority to seize the evidence has been confirmed,
devices can be collected. Any passwords, codes or PINs should be gathered from the individuals involved,
if possible, and associated chargers, cables, peripherals, and manuals should be collected. Thumb drives,
cell phones, hard drives and the like are examined using different tools and techniques, and this is most
often done in a specialized laboratory. First responders need to take special care with digital devices in
addition to normal evidence collection procedures to prevent exposure to things like extreme
temperatures, static electricity and moisture.
Seizing Mobile Devices
Devices should be turned off immediately and batteries removed, if possible. Turning off the phone
preserves cell tower location information and call logs, and prevents the phone from being used,
which could change the data on the phone. In addition, if the device remains on, remote destruction
commands could be used without the investigator’s knowledge. Some phones have an automatic timer
to turn on the phone for updates, which could compromise data, so battery removal is optimal.
If the device cannot be turned off, then it must be isolated from its cell tower by placing it in a
Faraday bag or other blocking material, set to airplane mode, or the Wi-Fi, Bluetooth or other
communications system must be disabled. Digital devices should be placed in antistatic packaging
such as paper bags or envelopes and cardboard boxes. Plastic should be avoided as it can convey static
electricity or allow a buildup of condensation or humidity. In emergency or life threatening
situations, information from the phone can be removed and saved at the scene, but great care must be
taken in the documentation of the action and the preservation of the data.
When sending digital devices to the laboratory, the investigator must indicate the type of information
being sought, for instance phone numbers and call histories from a cell phone, emails, documents and
messages from a computer, or images on a tablet.
Seizing Stand-Alone Computers and Equipment: To
prevent the alteration of digital evidence during collection, first responders should first document any
activity on the computer, components, or devices by taking a photograph and recording any
information on the screen. Responders may move a mouse (without pressing buttons or moving the
wheel) to determine if something is on the screen. If the computer is on, calling on a computer
forensic expert is highly recommended as connections to criminal activity may be lost by turning off
the computer. If a computer is on but is running destructive software (formatting, deleting, removing
or wiping information), power to the computer should be disconnected immediately to preserve
whatever is left on the machine. Office environments provide a challenging collection situation due to
networking, potential loss of evidence and liabilities to the agency outside of the criminal
investigation. For instance, if a server is turned off during seizure that is providing a service to outside
customers, the loss of service to the customer may be very damaging. In addition, office equipment
that could contain evidence such as copiers, scanners, security cameras, facsimile machines, pagers
and caller ID units should be collected. Computers that are off may be collected into evidence as per
usual agency digital evidence procedures.
How and Where the Analysis is Performed
Exploiting data in the laboratory: Once the digital evidence has been sent to the laboratory, a qualified
analyst will take the following steps to retrieve and analyze data:
1. Prevent contamination: It is easy to understand cross contamination in a DNA laboratory or at the
crime scene, but digital evidence has similar issues which must be prevented by the collection officer.
Prior to analyzing digital evidence, an image or work copy of the original storage device is created.
When collecting data from a suspect device, the copy must be stored on another form of media to keep
the original pristine. Analysts must use “clean” storage media to prevent contamination or the
introduction of data from another source. For example, if the analyst was to put a copy of the suspect
device on a CD that already contained information, that information might be analyzed as though it
had been on the suspect device. Although digital storage media such as thumb drives and data cards
are reusable, simply erasing the data and replacing it with new evidence is not sufficient. The
destination storage unit must be new or, if reused, it must be forensically “wiped” prior to use. This
removes all content, known and unknown, from the media.
2. Isolate Wireless Devices: Cell phones and other wireless devices should be initially examined in an
isolation chamber, if available. This prevents connection to any networks and keeps evidence as
pristine as possible. The Faraday bag can be opened inside the chamber and the device can be
exploited, including phone information, Federal Communications Commission (FCC) information,
SIM cards, etc. The device can be connected to analysis software from within the chamber. If an
agency does not have an isolation chamber, investigators will typically place the device in a Faraday
bag and switch the phone to airplane mode to prevent reception.
3. Install write-blocking software: To prevent any change to the data on the device or media, the
analyst will install a block on the working copy so that data may be viewed but nothing can be
changed or added.
4. Select extraction methods (Imaging / Cloning): Once the working copy is created, the analyst will
determine the make and model of the device and select extraction software designed to most
completely “parse the data,” or view its contents.
5. Submit device or original media for traditional evidence examination: When the data has been
removed, the device is sent back into evidence. There may be DNA, trace, fingerprint, or other
evidence that may be obtained from it and the digital analyst can now work without it.
6. Proceed with investigation: At this point, the analyst will use the selected software to view data. The
analyst will be able to see all the files on the drive, can see if areas are hidden and may even be able to
restore organization of files allowing hidden areas to be viewed. Deleted files are also visible, as long
as they haven’t been over-written by new data. Partially deleted files can be of value as well. Files on a
computer or other device are not the only evidence that can be gathered. The analyst may have to
work beyond the hardware to find evidence that resides on the Internet including chat rooms, instant
messaging, websites and other networks of participants or information. By using the system of
Internet addresses, email header information, time stamps on messaging and other encrypted data, the
analyst can piece together strings of interactions that provide a picture of activity.
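Email header analysis of the kind described above can begin with Python's standard `email` library; for example, walking the `Received` chain of a message to reconstruct its delivery path (the sample message below is fabricated for illustration):

```python
from email import message_from_string

# A fabricated raw message; real evidence would come from a mailbox export.
raw = """\
Received: from mail.example.org (203.0.113.5) by mx.example.com;
 Mon, 06 May 2024 10:15:00 +0000
Received: from sender-pc (198.51.100.7) by mail.example.org;
 Mon, 06 May 2024 10:14:58 +0000
From: alice@example.org
To: bob@example.com
Subject: quarterly report

Body text here.
"""

msg = message_from_string(raw)
# Received headers repeat, so get_all() returns the full chain (top = newest hop).
for hop in msg.get_all("Received", []):
    print(hop.strip())
print("Sender:", msg["From"])
```

Reading the `Received` chain from bottom to top gives the message's path from origin to destination, with each relay's timestamp.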
EVIDENCE PRESERVATION
Handling of evidence is the most important aspect in digital forensics. It is imperative that nothing be
done that may alter digital evidence. This is known as preservation: the isolation and protection of digital
evidence exactly as found without alteration so that it can later be analyzed. Time is highly important in
preserving digital evidence. To preserve the evidence observe the underlying steps-
1. Do not turn ON a device if it is turned OFF.
2. If it is not charged, do not charge it. For mobile phones, if the device is ON, power it down to prevent
remote wiping or data from being overwritten.
3. Ensure that you do not leave the device in an open area or other unsecured space.
4. Document where the device is, who has access, and when it is moved.
5. Do not plug anything into the device, such as memory cards, USB thumb drives, or any other storage
media that you have, as the data could be easily lost.
6. Do not open any applications, files, or pictures on the device. You could accidentally lose data or
overwrite it.
7. Do not copy anything to or from the device.
8. Preserve any and all digital evidence that you think could be useful for your case.
9. Take a picture of the piece of evidence (front, back, etc.) to prove its condition.
10. Make sure you know the PIN/Password pattern of the device.
11. Last but not least, do not trust anybody without forensics training to investigate or view files on the
original device. They might cause the deletion of data or the corruption of important information.
CHAIN OF CUSTODY
The chain of custody in digital forensics can also be referred to as the forensic link, the paper trail,
or the chronological documentation of electronic evidence. It indicates the collection, sequence of control,
transfer, and analysis. It also documents each person who handled the evidence, the date/time it was
collected or transferred, and the purpose for the transfer.
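A chain-of-custody record of this kind can be kept as a simple append-only log; the sketch below uses illustrative field names, not any standard evidence form:

```python
import csv
import datetime
import io

def custody_entry(item_id, handler, action, purpose):
    """One chain-of-custody event: who handled which item, when, and why."""
    return {
        "item_id": item_id,
        "handler": handler,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,      # e.g. "collected", "transferred", "analyzed"
        "purpose": purpose,
    }

# Append-only log: entries are added, never edited or removed.
log = [
    custody_entry("HDD-001", "Officer A", "collected", "seizure at scene"),
    custody_entry("HDD-001", "Examiner B", "analyzed", "disk imaging"),
]

# Render as CSV so the paper trail can be printed and signed.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(log[0].keys()))
writer.writeheader()
writer.writerows(log)
print(buf.getvalue())
```

Each row answers the questions the chain of custody must document: who, what, when, and for what purpose.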
Why Is It Important to Maintain the Chain of Custody?
It is important to maintain the chain of custody to preserve the integrity of the evidence and
prevent it from contamination, which can alter the state of the evidence. If not preserved, the evidence
presented in court might be challenged and ruled inadmissible.
Importance to the Examiner
Suppose that, as the examiner, you obtain metadata for a piece of evidence. However, you are
unable to extract meaningful information from it. The fact that there is no meaningful information
within the metadata does not mean that the evidence is insufficient. The chain of custody in this case
helps show where the possible evidence might lie, where it came from, who created it, and the type of
equipment that was used. That way, if you want to create an exemplar, you can get that equipment, create
the exemplar, and compare it to the evidence to confirm the evidence properties.
Importance to the Court
It is possible to have the evidence presented in court dismissed if there is a missing link in the chain of
custody. It is therefore important to ensure that a wholesome and meaningful chain of custody is
presented along with the evidence at the court.
What Is the Procedure to Establish the Chain of Custody?
In order to ensure that the chain of custody is as authentic as possible, a series of steps must be
followed. It is important to note that the more information a forensic expert obtains concerning the
evidence at hand, the more authentic the created chain of custody is. Due to this, it is important to obtain
administrator information about the evidence: for instance, the administrative log, date and file info, and
who accessed the files. You should ensure the following procedure is followed according to the chain of
custody for electronic evidence:
1. Save the original materials: You should always work on copies of the digital evidence as opposed to
the original. This ensures that you are able to compare your work products to the original that you
preserved unmodified.
2. Take photos of physical evidence: Photos of physical (electronic) evidence establish the chain of
custody and make it more authentic.
3. Take screenshots of digital evidence content: In cases where the evidence is intangible, taking
screenshots is an effective way of establishing the chain of custody.
4. Document date, time, and any other information of receipt. Recording the timestamps of whoever has
had the evidence allows investigators to build a reliable timeline of where the evidence was prior to
being obtained. In the event that there is a hole in the timeline, further investigation may be
necessary.
5. Ingest a bit-for-bit clone of digital evidence content into our forensic computers. This ensures that we
obtain a complete duplicate of the digital evidence in question.
6. Perform a hash test analysis to further authenticate the working clone. Performing a hash test
ensures that the data we obtain from the previous bit-by-bit copy procedure is not corrupt and
reflects the true nature of the original evidence. If this is not the case, then the forensic analysis may
be flawed and may result in problems, thus rendering the copy non-authentic.
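Steps 5 and 6 (bit-for-bit duplication followed by a hash test) can be sketched with Python's `hashlib`; here a plain file copy stands in for a true bit-stream image, and all paths are illustrative:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the original evidence image (illustrative content).
tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "evidence.dd")
clone = os.path.join(tmp, "working_copy.dd")
with open(original, "wb") as f:
    f.write(b"\x00evidence bytes\xff" * 1000)

# File-level copy stands in for a bit-stream clone made with dd or a hardware imager.
shutil.copyfile(original, clone)
match = sha256_of(original) == sha256_of(clone)
print("hashes match:", match)
```

If the digests differ, the working copy does not faithfully reflect the original and must not be used for analysis.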
The procedure of the chain of custody might differ depending on the jurisdiction in which
the evidence resides; however, the steps are largely identical to the ones outlined above.
What Considerations Are Involved with Digital Evidence?
A couple of considerations are involved when dealing
with digital evidence. We shall take a look at the most common and discuss globally accepted best
practices.
Never work with the original evidence to develop procedures: The biggest consideration with
digital evidence is that the forensic expert has to make a complete copy of the evidence for forensic
analysis. This cannot be overlooked because, when errors are made to working copies or comparisons are
required, it will be necessary to compare the original and copies.
Use clean collecting media: It is important to ensure that the examiner’s storage device is
forensically clean when acquiring the evidence. This prevents the original copies from damage. Think of a
situation where the examiner’s data evidence collecting media is infected by malware. If the malware
escapes into the machine being examined, all of the evidence can become compromised.
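One way to confirm that reused collection media has been forensically wiped is to verify that every byte reads back as zero; a minimal sketch (the temporary file merely simulates the media):

```python
import tempfile

def is_wiped(path, chunk_size=1 << 20):
    """Return True only if every byte of the media at `path` reads as 0x00."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:                       # end of media: all zeros so far
                return True
            if chunk.count(0) != len(chunk):    # any non-zero byte fails the check
                return False

# Demonstration with a small stand-in file for the collection media.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    media = f.name
print(is_wiped(media))
```

In practice the wipe itself would be done with a dedicated forensic tool; this check only verifies the result before the media is reused.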
Document any extra scope: During the course of an examination, information of evidentiary value
may be found that is beyond the scope of the current legal authority. It is recommended that this
information be documented and brought to the attention of the case agent because the information may
be needed to obtain additional search authorities.
A comprehensive report must contain the following sections:
1. Identity of the reporting agency
2. Case identifier or submission number
3. Case investigator
4. Identity of the submitter
5. Date of receipt
6. Date of report
7. Descriptive list of items submitted for examination, including serial number, make, and model
8. Identity and signature of the examiner
9. Brief description of steps taken during examination, such as string searches, graphics image searches,
and recovering erased files
10. Results/conclusions
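A simple completeness check against the report section list above can be sketched as follows; the field names are hypothetical mirrors of the listed sections:

```python
# Required report sections, mirroring the numbered list (names are illustrative).
REQUIRED_SECTIONS = [
    "reporting_agency", "case_identifier", "case_investigator",
    "submitter_identity", "date_of_receipt", "date_of_report",
    "items_submitted", "examiner_identity_signature",
    "examination_steps", "results_conclusions",
]

def missing_sections(report):
    """Return required report sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not report.get(s)]

# A draft report with only two sections filled in so far.
draft = {"reporting_agency": "Cyber Cell", "case_identifier": "2024/117"}
print(missing_sections(draft))
```

Running such a check before submission helps ensure no mandatory section is omitted from the final report.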
Consider safety of personnel at the scene. It is advisable to always ensure the scene is properly
secured before and during the search. In some cases, the examiner may only have the opportunity to do
the following while onsite:
1. Identify the number and type of computers.
2. Determine if a network is present.
Runtime Interrogation :
Runtime interrogation enables you to quickly sweep across an entire enterprise and check for
specific indicators in physical memory (instead of capturing a full memory dump from each system). You
typically execute this type of analysis in an automated capacity. Various commercial suites provide
enterprise-level capabilities for interrogating physical memory, such as F-Response, AccessData
Enterprise, and EnCase Enterprise.
Hardware Acquisition
Due to the limitations mentioned earlier, this book doesn’t cover hardware-based acquisitions in
depth. However, it’s worth mentioning that Volatility does support acquisition and interrogation of
memory over Firewire. You’ll need the libforensic1394 library
(https://freddie.witherden.org/tools/libforensic1394), the JuJu Firewire stack, and a special invocation of
Volatility. Note the -l instead of -f parameter:
$ python vol.py -l firewire://forensic1394/<devno> plugin [options]
The <devno> is the device number (typically 0 if you’re only connected to one Firewire device).
Use the imagecopy plugin to acquire memory or any other analysis plugin to interrogate the running
system, but be aware of the 4GB limit discussed previously.
Another use case for hardware-based memory analysis includes unlocking workstations. For
example, the Inception tool by Carsten Maartmann-Moe
(http://www.breaknenter.org/projects/inception) finds and patches instructions that allow you to log
into password-protected Windows, Linux, and Mac OS X computers even without the credentials.
However, as stated on the tool’s website, if the required instructions aren’t found in the lower 4GB of
memory, it might not work reliably.
Tools used for Data Acquisition:
1. Volatility (FREE)
2. FTK – Imager (FREE)
3. UFED (PAID)
4. XRY (PAID)
5. EnCase (PAID)
Why Volatility?
Before you start using Volatility, you should understand some of its unique features. As previously
mentioned, Volatility is not the only memory forensics application; it was specifically designed to be
different. Here are some of the reasons why it quickly became our tool of choice:
1. A single, cohesive framework. Volatility analyzes memory from 32- and 64-bit Windows, Linux,
Mac systems (and 32-bit Android). Volatility’s modular design allows it to easily support new
operating systems and architectures as they are released.
2. It is Open Source GPLv2. This means you can read the source code, learn from it, and extend it. By
learning how Volatility works, you will become a more effective analyst.
3. It is written in Python. Python is an established forensic and reverse engineering language with
loads of libraries that can easily integrate into Volatility.
4. Runs on Windows, Linux, or Mac analysis systems. Volatility runs anywhere Python can be
installed, a refreshing break from other memory analysis tools that run only on Windows.
5. Extensible and scriptable application programming interface (API). Volatility gives you the
power to go beyond and continue innovating. For example, you can use Volatility to drive your
malware sandbox, perform virtual machine (VM) introspection, or just explore kernel memory in an
automated fashion.
6. Unparalleled feature sets. Capabilities have been built into the framework based on reverse
engineering and specialized research. Volatility provides functionality that even Microsoft’s own
kernel debugger doesn’t support.
7. Comprehensive coverage of file formats. Volatility can analyze raw dumps, crash dumps,
hibernation files, and various other formats (see Chapter 4). You can even convert back and forth
between these formats.
8. Fast and efficient algorithms. This lets you analyze RAM dumps from large systems in a fraction of
the time it takes other tools, and without unnecessary memory consumption.
9. Serious and powerful community. Volatility brings together contributors from commercial
companies, law enforcement, and academic institutions around the world. Volatility is also being built
on by a number of large organizations, such as Google, National DoD Laboratories, DC3, and many
antivirus and security shops.
10. Focused on forensics, incident response, and malware. Although Volatility and Windbg share
some functionality, they were designed with different primary purposes in mind. Several aspects are
often very important to forensics analysts but not as important to a person debugging a kernel driver
(such as unallocated storage, indirect artifacts, and so on).
What Volatility Is Not
Volatility is a lot of things, but there are a few categories into which it does not fit:
1. It is not a memory acquisition tool: Volatility does not acquire memory from target systems. You
acquire memory with one of the tools mentioned in Chapter 4 and then analyze it with Volatility. An
exception is when you connect to a live machine over Firewire and use Volatility’s imagecopy plugin
to dump the RAM to a file. In this case, you are essentially acquiring memory.
2. It is not a GUI: Volatility is a command line tool and a Python library that you can import from your
own applications, but it does not include a front-end. In the past, various members of the forensics
community developed GUIs for Volatility, but these are currently unsupported by the official
development team.
3. It is not bug-free: Memory forensics can be fragile and sensitive in nature. Supporting RAM dumps
from multiple versions of most major operating systems (that are usually running obscure third-party
software) comes with a cost: It can lead to complex conditions and difficult-to-reproduce problems.
Although the development team makes every effort to be bug free, sometimes it’s just not possible.
version of EnCase leverages similar code in its agent that allows remote interrogation of live systems
(see http://volatility-labs.blogspot.com/2013/10/sampling-ram-across-encase-enterprise.html).
8. Belkasoft Live RAM Capturer: A utility that advertises the ability to dump memory even when
aggressive anti-debugging and anti-dumping mechanisms are present. It supports all the major 32-
and 64-bit Windows versions and can be run from a USB thumb drive.
9. ATC-NY Windows Memory Reader: This tool can save memory in raw or crash dump formats and
includes a variety of integrity hashing options. When used from a UNIX-like environment such as
MinGW or Cygwin, you can easily send the output to a remote netcat listener or over an encrypted
SSH tunnel.
10. Winpmem: The only open-source memory acquisition tool for Windows. It includes the capability to
output files in raw or crash dump format, choose between various acquisition methods (including the
highly experimental PTE remapping technique), and expose physical memory through a device for
live analysis of a local system.
Acquiring Data with dcfldd in Linux :
The dd command is intended as a general data management tool; it was not designed for forensics
acquisitions and lacks features such as built-in hashing and error logging. Because of these shortcomings,
Nicholas Harbour of the Defense Computer Forensics Laboratory (DCFL) developed a tool that can be
added to most UNIX/Linux OSs. This tool, the dcfldd
command, works similarly to the dd command but has many features designed for forensics acquisitions.
The following are important functions dcfldd offers that aren’t possible with dd:
Specify hexadecimal patterns or text for clearing disk space.
Log errors to an output file for analysis and review.
Use the hashing options MD5, SHA-1, SHA-256, SHA-384, and SHA-512 with logging and the option of
specifying the number of bytes to hash, such as specific blocks or sectors.
Refer to a status display indicating the acquisition’s progress in bytes.
Split data acquisitions into segmented volumes with numeric extensions (unlike dd’s limit of 99).
Verify the acquired data with the original disk or media data.
When using dcfldd, you should follow the same precautions as with dd. The dcfldd command can
also write to the wrong device, if you aren’t careful. The following examples show how to use the dcfldd
command to acquire data from a 64 MB USB drive, although you can use the command on a larger media
device. All commands need to be run from a privileged root shell session. To acquire an entire media
device in one image file, type the following command at the shell prompt:
dcfldd if=/dev/sda of=usbimg.dat
If the suspect media or disk needs to be segmented, use the dcfldd command with the split
command, placing split before the output file field (of=), as shown here:
dcfldd if=/dev/sda hash=md5 md5log=usbimgmd5.txt bs=512 conv=noerror,sync split=2M of=usbimg
This command creates segmented volumes of 2 MB each. To create segmented volumes that fit on
a CD of 650 MB, change the split=2M to split=650M. This command also saves the MD5 value of the
acquired data in a text file named usbimgmd5.txt.
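The dcfldd workflow above (a block-wise copy with on-the-fly MD5 hashing and 2 MB segmentation) can be sketched in Python. This is a minimal illustration, not a substitute for dcfldd: the file names are invented, and a real acquisition would read from a device node such as /dev/sda through a hardware write-blocker.

```python
import hashlib
import os
import tempfile

def acquire(src_path, out_prefix, seg_size=2 * 1024 * 1024, bs=512):
    """Block-wise copy of src_path into seg_size segments (.000, .001, ...),
    hashing every byte read, mimicking dcfldd hash=md5 bs=512 split=2M."""
    md5 = hashlib.md5()
    seg, written, out = 0, 0, None
    with open(src_path, "rb") as src:
        while True:
            block = src.read(bs)
            if not block:
                break
            md5.update(block)
            if out is None or written >= seg_size:
                if out:
                    out.close()
                out = open(f"{out_prefix}.{seg:03d}", "wb")
                seg, written = seg + 1, 0
            out.write(block)
            written += len(block)
    if out:
        out.close()
    return md5.hexdigest()

# Demo on a throwaway 5 MB file instead of a real device.
tmp = tempfile.mkdtemp()
evidence = os.path.join(tmp, "evidence.bin")
with open(evidence, "wb") as f:
    f.write(os.urandom(5 * 1024 * 1024))
digest = acquire(evidence, os.path.join(tmp, "usbimg"))
with open(evidence, "rb") as f:
    assert digest == hashlib.md5(f.read()).hexdigest()  # image hash matches source
```

Note that, as with dcfldd's split output discussed later, the first segment is numbered .000.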
Capturing an Image with AccessData FTK Imager Lite :
The following activity assumes you have removed the suspect drive and connected it to a USB or
FireWire write-blocker device connected to your forensic workstation. The acquisition is written to a
work folder on your C drive, assuming it has enough free space for the acquired data. Follow these steps to
perform the first task of connecting the suspect’s drive to your workstation:
Document the chain of evidence for the drive you plan to acquire.
Remove the drive from the suspect’s computer.
For IDE drives, configure the suspect drive’s jumpers as needed. (Note: This step doesn’t apply to
SATA or USB drives.)
Connect the suspect drive to the USB or FireWire write-blocker device.
Create a storage folder on the target drive. For this activity, you use your work folder (C:\Work\
Chap03\Chapter), but in real life, you’d use a folder name such as C:\Evidence.
FTK Imager is a data acquisition tool included with a licensed copy of AccessData Forensic Toolkit.
Like most Windows data acquisition tools, it requires using a USB dongle for licensing. FTK Imager Lite,
Debian and Ubuntu x64 command-line interfaces, and macOS 10.5 and 10.6x command-line interfaces are
free and require no dongle license.
FTK Imager can make disk-to-image copies of evidence drives and enables you to acquire an
evidence drive from a logical partition level or a physical drive level. You can also define the size of each
disk-to-image file volume, allowing you to segment the image into one or many split volumes. For
example, you can specify 650 MB volume segments if you plan to store volumes on 650 MB CD-Rs or 2.0
GB volume segments so that you can record volumes on DVD-/+Rs. An additional feature of FTK Imager is
that it can image RAM on a live computer. The evidence drive you’re acquiring data from must have a
hardware write-blocking device or run from a Live CD, such as Mini-WinFE.
FTK Imager can’t acquire a drive’s HPA and device configuration overlay (DCO), however. In other
words, if the drive’s specifications indicate it has 11,000,000 sectors and the BIOS display indicates
9,000,000, a host protected area of 2,000,000 sectors might be assigned to the drive. If you suspect an
evidence drive has a host protected area, you must use an advanced acquisition tool to include this area
when copying data. With older MS-DOS tools, you might have to define the exact sector count to make
sure you include more than what the BIOS shows as the number of known sectors on a drive. Review
vendors’ manuals to determine how to account for a drive’s host protected area.
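The sector arithmetic in the example above is straightforward; a quick sketch, assuming the common 512-byte sector size:

```python
native_max_sectors = 11_000_000   # from the drive's specifications
bios_visible_sectors = 9_000_000  # what the BIOS reports

hpa_sectors = native_max_sectors - bios_visible_sectors
hpa_bytes = hpa_sectors * 512     # assuming 512-byte sectors

print(hpa_sectors)  # 2000000 sectors hidden in the host protected area
print(hpa_bytes)    # 1024000000 bytes, roughly 1 GB of unacquired data
```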
Validating dcfldd-Acquired Data :
Because dcfldd is designed for forensics data acquisition, it has validation options integrated: hash and
hashlog. You use the hash option to designate a hashing algorithm of md5, sha1, sha256, sha384, or
sha512. The hashlog option outputs hash results to a text file that can be stored with image files. To create
an MD5 hash output file during a dcfldd acquisition, you enter the following command (in one line) at
the shell prompt:
dcfldd if=/dev/sda split=2M of=usbimg hash=md5
hashlog=usbhash.log
To see the results of files generated with the split command, you enter the list directory (ls) command at
the shell prompt. You should see the following output:
usbhash.log usbimg.004 usbimg.010 usbimg.016 usbimg.022 usbimg.028
Note that the first segmented volume has the extension .000 rather than .001. Some Windows
forensics tools might not be able to read segmented files whose extensions start with .000; they typically
look for .001. If your forensics tool requires starting with an .001 extension, the files need to be renamed
incrementally: segment file .000 should be renamed .001, .001 should be renamed .002, and so on.
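The renaming described above must be done from the highest-numbered segment downward, or each rename would overwrite the next file. A minimal sketch (the usbimg file names are illustrative):

```python
import os
import tempfile

def shift_extensions(folder, prefix):
    """Rename prefix.000 -> prefix.001, .001 -> .002, and so on, working from
    the highest number down so no rename clobbers an existing segment."""
    segs = sorted(
        (f for f in os.listdir(folder) if f.startswith(prefix + ".")),
        key=lambda f: int(f.rsplit(".", 1)[1]),
        reverse=True,
    )
    for name in segs:
        stem, num = name.rsplit(".", 1)
        new = f"{stem}.{int(num) + 1:03d}"
        os.rename(os.path.join(folder, name), os.path.join(folder, new))

tmp = tempfile.mkdtemp()
for i in range(3):  # fake segments usbimg.000 .. usbimg.002
    open(os.path.join(tmp, f"usbimg.{i:03d}"), "wb").close()

shift_extensions(tmp, "usbimg")
print(sorted(os.listdir(tmp)))  # ['usbimg.001', 'usbimg.002', 'usbimg.003']
```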
Another useful dcfldd option is vf (verify file), which compares the image file with the original
medium, such as a partition or drive. The vf option applies only to a nonsegmented image file. To validate
segmented files from dcfldd, use the md5sum or sha1sum command described previously. To use the vf
option, you enter the following command at the shell prompt:
dcfldd if=/dev/sda vf=sda_hash.img
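Since vf applies only to nonsegmented images, segmented volumes are validated by hashing the concatenation of the segments and comparing it with the hash of the original medium, the equivalent of piping the segments through md5sum. A sketch under those assumptions, using small in-memory data in place of a real drive:

```python
import hashlib
import os
import tempfile

def md5_of_files(paths):
    """Stream each file through one MD5 context, mimicking
    'cat usbimg.* | md5sum' on concatenated segments."""
    h = hashlib.md5()
    for p in paths:
        with open(p, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
    return h.hexdigest()

tmp = tempfile.mkdtemp()
original = os.urandom(3 * 1024)  # stand-in for the source medium

# Pretend the acquisition produced three 1 KB segments.
segs = []
for i in range(3):
    p = os.path.join(tmp, f"usbimg.{i:03d}")
    with open(p, "wb") as f:
        f.write(original[i * 1024:(i + 1) * 1024])
    segs.append(p)

assert md5_of_files(segs) == hashlib.md5(original).hexdigest()  # validation passes
```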
E-MAIL INVESTIGATION
E-mail has emerged as the most important Internet application for communicating messages, delivering
documents, and carrying out transactions, and it is used not only from computers but from many other
electronic gadgets such as mobile phones. Over the years, e-mail protocols have been secured through
several security extensions and procedures; however, cybercriminals continue to misuse e-mail for
illegitimate purposes by sending spam, phishing e-mails, hate mail, and child pornography, besides
propagating viruses, worms, hoaxes, and Trojan horses. Further, misuse of Internet infrastructure
through denial of service and waste of storage space and computational resources costs every Internet
user directly or indirectly. It is thus essential to identify and eliminate users and machines misusing the
e-mail service. E-mail forensic analysis studies the source and content of e-mail messages as evidence,
identifying the actual sender, recipient, and the date and time a message was sent, in order to collect
credible evidence to bring criminals to justice.
Email Architecture
When a user sends an email to a recipient, the email does not travel directly to the recipient's mail
server; instead it passes through several servers. The Mail User Agent (MUA) is the email program used
to compose and read email messages at the client end; examples include Outlook Express, Gmail, and
Lotus Notes. The Mail Transfer Agent (MTA) is the server that receives the message sent from the MUA.
Once the MTA receives a message, it decodes the header information to determine where the message is
going and delivers it to the corresponding MTA on the receiving machine. Every time an MTA receives
the message, it modifies the header by adding data. When the last MTA receives the message, it decodes
it and sends it to the receiver's MUA so the recipient can read it. An email header therefore accumulates
information from multiple servers, including IP addresses.
Email Identities and Data: The primary evidence in email investigations is the email header. The email
header contains a considerable amount of information about the email. Email header analysis should
start from bottom to top, because the bottom-most information is the information from the sender, and
the top-most information is about the receiver. In the previous section it was shown that email travels
through multiple MTAs. These details can be found in the email header.
Email Forensic Investigation Techniques: Email forensics refers to analyzing the source and content
of emails as evidence. Investigation of email related crimes and incidents involve various approaches.
Header Analysis: Email header analysis is the primary analytical technique. This involves analyzing
metadata in the email header. It is evident that analyzing headers helps to identify the majority of email
related crimes. Email spoofing, phishing, spam, scams and even internal data leakages can be
identified by analyzing the header.
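The bottom-to-top reading of Received headers described above can be automated with Python's standard email module. The header values below are fabricated for illustration; since each MTA prepends its Received header, iterating in reverse walks the path from the sender outward:

```python
from email import message_from_string

# Fabricated two-hop message: sender-pc -> mx.example.org -> mail.victim.example
raw = """Received: from mx.example.org (mx.example.org [203.0.113.7])
\tby mail.victim.example; Mon, 1 Jan 2024 10:00:02 +0000
Received: from sender-pc (unknown [198.51.100.23])
\tby mx.example.org; Mon, 1 Jan 2024 10:00:01 +0000
From: someone@example.org
To: victim@victim.example
Subject: hello

body text
"""

msg = message_from_string(raw)
hops = msg.get_all("Received")

# Headers are prepended by each MTA, so the LAST one is closest to the sender.
for hop in reversed(hops):
    print(hop.split(";")[0])
```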
Bait Tactics
In a bait-tactic investigation, an e-mail carrying an HTTP "<img src>" tag, whose image source is hosted
on a computer monitored by the investigators, is sent to the real (genuine) address of the sender of the
e-mail under investigation. When the e-mail is opened, a log entry containing the IP address of the
recipient (the sender of the e-mail under investigation) is recorded on the HTTP server hosting the
image, and the sender is thus tracked. However, if the recipient is using a proxy server, the IP address of
the proxy server is recorded instead; the proxy server's log can then be used to track the sender of the
e-mail under investigation. If the proxy server's log is unavailable for some reason, investigators may
send a tactic e-mail containing either a) an embedded Java applet that runs on the receiver's computer
or b) an HTML page with an ActiveX object, both aiming to extract the IP address of the receiver's
computer and e-mail it to the investigators.
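The log entry produced when the bait image is fetched is an ordinary web-server access-log line, so extracting the requesting IP is a short parse. A sketch with a fabricated Apache-style log line (the IP, timestamp, and image path are all invented):

```python
import re

# Fabricated access-log line recorded when the bait <img src> was loaded.
log_line = ('198.51.100.23 - - [01/Jan/2024:10:05:42 +0000] '
            '"GET /bait/pixel.png HTTP/1.1" 200 95')

match = re.match(r'(\d{1,3}(?:\.\d{1,3}){3}) .* "GET (/\S+)', log_line)
ip, path = match.groups()

print(ip, path)  # the suspect's IP (or their proxy's) and the bait image path
```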
Server Investigation
This involves investigating copies of delivered emails and server logs. Some organizations provide
separate mailboxes for their employees on internal mail servers. In such cases, the investigation involves
extracting the entire mailbox related to the case along with the server logs.
Network Device Investigation
In some investigations, the investigator requires the logs maintained by network devices such as
routers, firewalls, and switches to trace the source of an email message. This is often a complex situation
in which the primary evidence is not present (for example, when the ISP or proxy does not maintain logs
or does not cooperate).
Software Embedded Analysis
Some information about the sender of the email, attached files or documents may be included with the
message by the email software used by the sender for composing the email. This information may be
included in the form of custom headers or in the form of MIME content as a Transport Neutral
Encapsulation Format (TNEF).
that can create a report that can be sent to the sender's ISP. The ISP can then take steps to prosecute the
account holder and help put a stop to spam.
EmailTracer – It is an Indian effort in cyber forensics by the Resource Centre for Cyber Forensics (RCCF),
a premier centre for cyber forensics in India that develops cyber forensic tools based on the requirements
of law enforcement agencies. Among several other digital forensic tools, RCCF has developed an e-mail
tracer tool named EmailTracer. This tool traces the originating IP address and other details from the
e-mail header, generates a detailed HTML report of the header analysis, finds city-level details of the
sender, plots the route traced by the mail, and displays the originating geographic location of the e-mail.
Besides these, it has a keyword-searching facility on e-mail content, including attachments, for
classification.
Adcomplain – It is a tool for reporting inappropriate commercial e-mail and Usenet postings, as well as
chain letters and "make money fast" postings. It automatically analyses the message, composes an abuse
report by performing a valid header analysis, and mails the report to the offender's Internet service
provider. The report is displayed for approval prior to mailing, and can also be sent to the U.S. Federal
Trade Commission. Adcomplain can be invoked from the command line or automatically from many
news and mail readers.
Aid4Mail Forensic – It is e-mail investigation software for forensic analysis, e-discovery, and litigation
support. It is an e-mail migration and conversion tool, which supports various mail formats including
Outlook (PST, MSG files), Windows Live Mail, Thunderbird, Eudora, and mbox. It can search mail by date,
header content, and by message body content. Mail folders and files can be processed even when
disconnected (unmounted) from their email client including those stored on CD, DVD, and USB drives.
Aid4Mail Forensic can search PST files and all supported mail formats, by date range and by keywords in
the message body or in the headers. Special Boolean operations are supported. It is able to process
unpurged (deleted) e-mail from mbox files and can restore unpurged e-mail during exportation.
AbusePipe – It analyses abuse-complaint e-mails and determines which of an ESP's customers are
sending spam based on the information in the e-mailed complaints. It automatically generates reports on
customers violating the ESP's acceptable use policy so that action to shut them down can be taken
immediately. AbusePipe can be configured to reply automatically to people reporting abuse. It can assist
in meeting legal obligations, such as reporting on the customers connected to a given IP address at a
given date and time.
AccessData’s FTK – It is a standard, court-validated digital investigations platform: computer forensics
software delivering computer forensic analysis, decryption, and password cracking within an intuitive
and customizable interface. It offers speed, analytics, and enterprise-class scalability, and is known for
its intuitive interface, e-mail analysis, customizable data views, and stability. It supports popular
encryption technologies, such as Credant, SafeBoot, Utimaco, EFS, PGP, Guardian Edge, Sophos Enterprise
and S/MIME. Its currently supported e-mail types are: Lotus Notes NSF, Outlook PST/OST, Exchange EDB,
Outlook Express DBX, Eudora, EML (Microsoft Internet Mail, Earthlink, Thunderbird, Quickmail, etc.),
Netscape, AOL and RFC 822.
EnCase Forensic – It is a computer forensic application that provides investigators the ability to image a
drive and preserve it in a forensic manner using the EnCase evidence file format (LEF or E01), a digital
evidence container vetted by courts worldwide. It contains a full suite of analysis, bookmarking, and
reporting features. Guidance Software and third-party vendors provide support for expanded capabilities
to ensure that forensic examiners have the most comprehensive set of utilities. Along with many other
network forensics capabilities, it also supports Internet and e-mail investigation. It includes an Instant
Messenger toolkit and browser support for Microsoft Internet Explorer, Mozilla Firefox, Opera, and
Apple Safari. E-mail support includes Outlook PSTs/OSTs, Outlook Express DBXs, a Microsoft Exchange
EDB parser, Lotus Notes, AOL, Yahoo, Hotmail, Netscape Mail, and MBOX archives.
FINALeMAIL – It can recover e-mail database files and locate lost e-mails that have no data-location
information associated with them. FINALeMAIL can restore lost e-mails to their original state and
recover full e-mail database files even when such files are attacked by viruses or damaged by accidental
formatting. It can recover e-mail messages and attachments emptied from the 'Deleted Items' folder in
Microsoft Outlook Express, Netscape Mail, and Eudora.
Sawmill-GroupWise – It is a GroupWise Post Office Agent log analyser which can process log files in
GroupWise Post Office Agent format, and generate dynamic statistics from them, analysing and reporting
events. It can parse these logs, import them into a MySQL, Microsoft SQL Server, or Oracle database (or its
own built-in database), aggregate them, and generate dynamically filtered reports, through a web
interface. It supports Windows, Linux, FreeBSD, OpenBSD, Mac OS, Solaris, other UNIX, and several other
platforms.
Forensics Investigation Toolkit (FIT) – It is content forensics toolkit to read and analyse the content of
the Internet raw data in Packet CAPture (PCAP) format. FIT provides security administrative officers,
auditors, fraud and forensics investigator as well as lawful enforcement officers the power to perform
content analysis and reconstruction on pre-captured Internet raw data from wired or wireless networks.
All protocols and services analysed and reconstructed are displayed in a readable format to the users.
Another unique feature of FIT is that imported raw data files can be parsed and reconstructed
immediately. It supports case management functions, detailed information including Date-Time, Source
IP, Destination IP, Source MAC, etc., WhoIS and Google Map integration functions. Analysing and
reconstruction of various Internet traffic types which includes e-mail (POP3, SMTP, IMAP), Webmail
(Read and Sent), IM or Chat (MSN, ICQ, Yahoo, QQ, Skype Voice Call Log, UT Chat Room, Gtalk, IRC Chat
Room), File Transfer (FTP, P2P), Telnet, HTTP (Content, Upload/Download, Video Streaming, Request)
and Others (SSL) can be performed using this toolkit.
Paraben (Network) E-mail Examiner – It has comprehensive analysis features, easy bookmarking and
reporting, advanced Boolean searching, searching within attachments, and full Unicode language
support. It supports America Online (AOL), Microsoft Outlook (PST, OST), Thunderbird, Outlook Express,
Eudora, E-mail file (EML), Windows Mail databases, and more than 750 MIME types and related file
extensions. It can recover deleted e-mails from Outlook (PST), Thunderbird, etc. Network E-mail
Examiner [http://www.paraben.com/network-email-examiner.html] can thoroughly examine Microsoft
Exchange (EDB), Lotus Notes (NSF), and GroupWise e-mail stores. It works with E-mail Examiner, and all
output is compatible and can easily be loaded for more complex tasks. According to Simson L. Garfinkel
[19], current forensic tools are designed to help examiners find specific pieces of evidence, not to assist
in investigations. Further, these tools were created for solving crimes committed against people where
the evidence resides on a computer; they were not created to assist in solving typical crimes committed
with computers or against computers. Current tools must be re-imagined to facilitate
investigation and exploration. This is especially important when the tools are used outside of the law
enforcement context for activities such as cyber-defence and intelligence. Construction of a modular
forensic processing framework for digital forensics that implements the “Visibility, Filter and Report”
model would be the first logical step in this direction.
Email Recovery
Email remains popular, and many firms and businesses carry out their regular operations through email.
If these emails are lost, such firms face many hardships and need email recovery software or tools. As
with files on a hard drive, when a message is deleted from a folder it is not physically erased; the data is
just marked as deleted, so the email program doesn't display it. The deleted email is not overwritten
until the mail program tidies up or compacts its database. There is therefore a window of opportunity, of
uncertain duration, during which it is possible to recover deleted emails intact. Depending on how, and
how long ago, you deleted or lost emails in Outlook or Outlook Express, the ways to recover them differ.
You may follow the methods offered below to recover deleted or lost Outlook emails on your own.
Recover Lost PST Files with Freeware
If you have downloaded the email database .pst/.ost files or saved received emails on a local drive of your
PC, and deleted them by mistake, don't worry. You can recover the deleted emails with a free file recovery
tool. Now, recover the whole PST file in detail:
Step 1. Select the drive partition where you used to save the Outlook PST files and then click "Scan".
Step 2. The software will start immediately for a quick and deep scan, to find as many lost files as possible.
To quickly locate the PST files, you have two options.
Use "Filter > Email" to find your lost PST files.
Search with ".pst" can also help you reach the PST file.
Step 3. Select the wanted PST files from the results, and click "Recover" to get them back all at once.
Recover Emails from the Deleted Items folder (Manually): When you delete an email in an email
program, it is usually moved to a Deleted Items folder (as in Outlook Express) or similar. The Deleted
Items folder is like a Recycle Bin for emails and gives you a chance to restore messages deleted in haste.
The ways to recover deleted or lost emails differ according to how long ago the emails were lost.
1. Recover Emails from Deleted Item folder in 10 Days
Step 1: Login Outlook with your account, go to your email folder list and click "Deleted Items" at the left
pane.
Step 2: Find the exact email or messages, right-click on them and select "Move > Other Folder".
Step 3: Select "Inbox" to move the emails and messages back to your inbox, then click "OK" to complete
the recovery process.
2. Recover Emails Within 30 Days: When an email is removed from the Deleted Items folder, the message
is copied to another folder, Recoverable Items, and then deleted from the first. Deleted emails are kept
there for 30 days. Therefore, even if you've emptied the Deleted Items folder, you can still recover
permanently deleted emails from the Recoverable Items folder:
Step 1: Login Outlook with your account, go to your email folder list and click "Deleted Items".
Step 2: Tab on "HOME" and click "Recover Deleted Items From Server".
Step 3: Select the item you want to recover and click "Restore Selected Items", then click "OK" to finish.
3. Restore Deleted Emails Even After 30 Days. If you've lost the emails after 30 days, relax! The
professional email recovery software will help you fix this issue with ease.
EaseUS Email Recovery Wizard is an advanced tool to recover lost and deleted emails, folders,
calendars, appointments, meeting requests, contacts, tasks, task requests, journals, notes and attachments
from the corrupted .pst file. It is safe and read-only utility which reads the lost/deleted mail items
without modifying the existing content and restores the lost data into a new file. It can recover mail items
from MS-Outlook 2010, 2007, 2003, 2002/XP, 2000, 98 and 97.
Password cracking :
In cryptanalysis and computer security, password cracking is the process of recovering passwords from
data that have been stored in or transmitted by a computer system. A common approach (brute-force
attack) is to repeatedly try guesses for the password and to check them against an available cryptographic
hash of the password. The purpose of password cracking might be to help a user recover a forgotten
password (installing an entirely new password is less of a security risk, but it involves System
Administration privileges), to gain unauthorized access to a system, or to act as a preventive measure
whereby system administrators check for easily crackable passwords. On a file-by-file basis, password
cracking is utilized to gain access to digital evidence to which a judge has allowed access, when a
particular file's permissions are restricted.
Password cracking refers to various measures used to discover computer passwords. This is usually
accomplished by recovering passwords from data stored in, or transported from, a computer system.
Password cracking is done by either repeatedly guessing the password, usually through a computer
algorithm in which the computer tries numerous combinations until the password is successfully
discovered. Password cracking can be done for several reasons, but the most malicious reason is in order
to gain unauthorized access to a computer without the computer owner’s awareness. This results in cyber
crime such as stealing passwords for the purpose of accessing banking information. Other, non-malicious,
reasons for password cracking occur when someone has misplaced or forgotten a password. Another
example of non-malicious password cracking may take place if a system administrator is conducting tests
on password strength as a form of security so that hackers cannot easily access protected systems. The
best way that users can protect their passwords from cracking is to ensure they choose strong passwords.
Typically, passwords must contain a combination of mixed-case random letters, digits and symbols.
Strong passwords should never be actual words. In addition, strong passwords are at least eight
characters long. In many password-protected applications, users are notified of the strength of the
password they've chosen upon entering it. The user can then modify and strengthen the password based
on the indications of its strength. Other, more stringent, techniques for password security include key-
stretching algorithms such as PBKDF2. These algorithms create hashes of passwords that are designed to
resist ready cracking. Security tokens constantly rotate passwords, so that even if a password is
cracked, it can be used for only a very limited amount of time. The growing sophistication of computing
technology gave rise to software that can crack passwords. Clusters of password-cracking
computers working in conjunction with each other are usually the most effective form of password
cracking, but this method can be very time consuming. There are many password-cracking software tools;
among the most popular are Aircrack-ng, Cain and Abel, John the Ripper, Hashcat, Hydra, DaveGrohl and
the ElcomSoft tools. Many litigation-support software packages also include password-cracking functionality. Most
of these packages employ a mixture of cracking strategies, with brute-force and dictionary
attacks proving to be the most productive.
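The key-stretching idea mentioned above can be sketched with Python's standard-library hashlib. The passwords, salt length, and iteration count below are illustrative choices, not prescribed values:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a PBKDF2-HMAC-SHA256 hash; a high iteration count slows brute-force guessing."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt defeats precomputed hash tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    """Re-derive the hash from a candidate password and compare it with the stored digest."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations) == expected

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```

Because every candidate password costs the full iteration count to check, a cracker's guessing rate drops by the same factor, which is exactly why key stretching protects hashes that have been stolen.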
Password-cracking Techniques:
1. Dictionary attack
The dictionary attack, as its name suggests, is a method that uses an index of words that feature most
commonly as user passwords. It is a slightly less sophisticated version of the brute-force attack, but it
still relies on hackers bombarding a system with guesses until something sticks. If you think that
mashing words together, such as "superadministratorguy", will defend you against such an attack, think
again. The dictionary attack is able to account for this, and as such will only delay a hack by a
matter of seconds.
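A minimal sketch of a dictionary attack against an unsalted MD5 hash follows; the wordlist and target are invented for illustration. Note how concatenating wordlist entries catches mashed-together passwords:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    # try each word, then every two-word concatenation, against the target hash
    candidates = list(wordlist) + [a + b for a in wordlist for b in wordlist]
    for word in candidates:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None  # not in the dictionary; a brute-force attack would be the next step

target = hashlib.md5(b"administratorguy").hexdigest()
print(dictionary_attack(target, ["super", "administrator", "guy"]))  # administratorguy
```

Real crackers such as John the Ripper apply far richer mangling rules (case toggling, digit suffixes, leetspeak substitutions) to each dictionary entry, but the loop above is the core of the technique.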
7. Offline cracking
It's easy to imagine that passwords are safe when the systems they protect lock out users after three or
four wrong guesses, blocking automated guessing applications. Well, that would be true if it were not for
the fact that most password hacking takes place offline, using a set of hashes in a password file that has
been obtained from a compromised system. Often the target in question has been compromised via a
hack on a third party, which then provides access to the system servers and those all-important user
password hash files. The password cracker can then take as long as they need to try and crack the code
without alerting the target system or individual user.
8. Shoulder surfing
The most confident of hackers will take the guise of a parcel courier, aircon service technician or
anything else that gets them access to an office building. Once they are in, the service personnel
"uniform" provides a kind of free pass to wander around unhindered, giving them the opportunity to
snoop literally over the shoulders of genuine members of staff to glimpse passwords being entered, or
spot passwords that less security-conscious workers have written down on post-it notes or in notepads.
9. Spidering
Savvy hackers have realized that many corporate passwords are made up of words that are connected to
the business itself. Studying corporate literature, website sales material and even the websites of
competitors and listed customers can provide the ammunition to build a custom word list to use in a
brute force attack. Really savvy hackers have automated the process and let a spidering application,
similar to those employed by leading search engines to identify keywords, collect and collate the lists for
them.
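The spidering idea can be sketched as a word-list builder that harvests candidate terms from corporate web copy. The sample page text and minimum word length below are assumptions for illustration; a real spider would also fetch pages and follow links:

```python
import re
from collections import Counter

def build_wordlist(page_text, min_len=5):
    """Collect candidate password words from page text, most frequent first."""
    words = re.findall(r"[A-Za-z]{%d,}" % min_len, page_text)
    counts = Counter(w.lower() for w in words)
    return [w for w, _ in counts.most_common()]

# Invented snippet of corporate web copy
page = "<h1>Acme Rockets</h1><p>Acme builds rockets. Rockets power Acme missions.</p>"
print(build_wordlist(page))  # ['rockets', 'builds', 'power', 'missions']
```

The resulting list would then feed a dictionary attack, exactly as the search-engine-style spiders described above do at scale.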
10. Guess
The password cracker's best friend, of course, is the predictability of the user. Unless a truly random
password has been created using software dedicated to the task, a user-generated "random" password is
unlikely to be anything of the sort.
MOBILE FORENSICS
Mobile forensics is a branch of digital forensics concerned with the acquisition and analysis of
mobile devices to recover digital evidence of investigative interest. When we talk about mobile forensics,
we generally use the term “forensically sound”, commonly used in the forensic community to describe the
application of methods and techniques that respect the international guidelines for the acquisition and
examination of mobile devices. The principles behind forensically sound techniques rest on one primary
purpose: preserving the device and avoiding contamination of its original state.
All the phases, from acquisition to forensic analysis of the mobile device, must avoid any
alteration of the examined device. This is not easy at all, particularly with mobile devices. The
continuous evolution of mobile device technology constantly brings new mobile phones to market,
which creates new problems for digital investigations. Hardware and software tools for analyzing these
devices are numerous, but none offers an integrated solution for the acquisition and forensic
analysis of all smartphones.
Furthermore, mobile devices can contain plenty of digital information, almost as much as a
computer, not just the call logs or SMS messages of older mobile phones. Much of the digital information on
a smartphone depends on the applications installed on it, which evolve in such variety that analysis
software cannot support them completely. Often the data acquisition from a mobile device cannot
satisfy all the parameters that define a forensically sound method. In other words, to gain
access to the mobile device it is necessary to use communication vectors, bootloaders and other agents
that are installed in memory to enable communication between the mobile phone and the
instrument used for the acquisition, so it is not possible to use a write-blocking option.
Often we resort to modifying the device configuration for acquisition, but this operation risks
invalidating the evidence in court, even when all the techniques are well documented. As
much as possible, it is fundamental to respect the international guidelines on mobile forensics to
ensure the integrity of the evidence and the repeatability of the forensic process.
A fundamental aspect of device preservation at the crime scene is evidence collection on site: that
is, preserving a device found turned on, shielding it from Wi-Fi signals, telecommunication
systems and GPS signals, and keeping the battery on charge. This is required to avoid a shutdown and the
loss of important information such as a PIN. A shutdown could require a later PIN bypass or even cause data
loss because of passwords or cryptography. It is also fundamental to immediately provide electromagnetic
isolation using Faraday bags: enclosures or cases that isolate the mobile device by shielding it from radio
signals.
A practical example of what can happen to a device found at a crime scene and not isolated is a complete remote
wipe (Figure – remote wiping command sent to an iPhone). The production process of forensic evidence is
divided into four main phases: seizure, identification, acquisition, and examination or
analysis. Once the data is extracted from a device, different methods of analysis are used based on the
underlying case. As each investigation is distinct, it is not possible to have a single definitive procedure
for all cases. Each of these steps has a basic role in the process of digital evidence production. The
international standards are informed by many studies and publications that try to define best practices and
guidelines for procedures and methods in digital forensics, such as the NIST
guidelines.
Although ISO/IEC 27037, “Guidelines for identification, collection and/or acquisition and preservation
of digital evidence”, released in 2012, is not specific to mobile forensics, it defines methods and
techniques for digital forensic investigations that are accepted in many courts. The overall process can
be broken into four phases, as shown in the diagram. The first two steps involved in the
production of forensic evidence are described below; the remaining steps are explained in detail in the
next lessons. Handling the device during seizure is one of the important steps in performing forensic analysis.
When seizing a device at the crime scene, it is important to document it with pictures, noting the “where and
when”, the condition of the device, whether it was damaged, turned on or switched off, a picture of the display
if switched on, and the presence of any memory cards.
It is necessary to seize cables, chargers, SIM cards and any papers or notes that may contain
access codes; such codes can also be deduced from the personal papers of the suspects whose devices were
confiscated. Statistically, many users base passwords on dates of birth, celebrations, names, number
plates and other personal information to help remember them. Looking for PINs and passwords at this stage
can save investigators much time later. On the crime scene, it is fundamental to use proper techniques
to protect the device from communicating with other devices, whether by phone calls, SMS, Wi-Fi
hotspot interference, Bluetooth, GPS or other channels. It is necessary to place the device into a Faraday
bag and, if possible, add the use of a jammer, to avoid altering the original state of the device.
A phone call, an SMS or an email may overwrite previous data during the evidence collection phase if
the phone is not isolated.
MOBILE DEVICE ISOLATION TECHNIQUES :
Faraday bag – The immediate use of a Faraday bag is essential when a turned-on mobile
phone is found. It is important to isolate the mobile phone while keeping it on charge with an emergency
battery, which will allow it to reach the lab still powered on. It is also important for the power cord to be
shielded, because an unshielded cord may allow the mobile to receive communications. There are different
types of Faraday bags on sale, ranging from simple radio-shielded bags (which I do not recommend) to real
isolation boxes, which are more effective. They are made of double-layer RoHS-compliant
silver/copper/nickel conductors. A Faraday bag can be a great solution for isolating the seized mobile device.
Jamming – Jammers, also known as radio jammers, are devices used to block the use of mobile
phones by transmitting radio waves on the same frequencies used by mobile phones. This causes interference
that inhibits the communication between mobiles and the BTS, paralyzing every phone activity in the jammer's
range of action. Most mobile phones register this disturbance merely as a lack of network connection. In
mobile evidence collection, jammer devices are used to block radio communications on
GSM/UMTS/LTE. Obviously, the use of a jammer in these circumstances must be limited to low power
(<1 W), otherwise it can disturb every telephone network around. Jammers are illegal in some
countries, and their use is often allowed only to police forces.
Airplane mode – Airplane mode is one of the options that can be used to protect a mobile collected
at the crime scene, preventing incoming and outgoing radio transmissions. It is a risky option because it
requires interacting with the mobile phone, and it is possible only if the phone is not protected with a
passcode. On iOS (from iOS 7), airplane mode can be enabled even with the display locked by sliding the
Control Center upward from the bottom of the screen. To set airplane mode in the Android OS:
1. Click the menu button on the phone to open the menu.
2. Select "Settings" at the bottom of the menu that comes up.
3. Under "Wireless & Networks", tap on "More".
4. Look for the "Airplane mode" option at the top of the settings screen. Tap on it to put a check mark
in the box beside it.
5. Wait for the button to turn blue. This tells you that the mode is active and your transmissions are
now off.
The technical protection methods mentioned in the previous paragraphs should be applied with
more care on Android devices than on Apple devices. When devices are seized, care must be taken to be
sure that our actions will not cause any change of data on the device. At the same time, it is necessary to
use every opportunity that might help the subsequent analysis. If the device is found unlocked at the
crime scene, in other words without a lock screen or access code, it is advisable to change the device's
settings to maintain better access to it. Some of the settings to modify in this situation are:
Enable the stay-awake setting: activating this option and putting the device on charge (an emergency
charger can be used) keeps the device active and unlocked. On Android devices, this option can be found
in Settings | Development.
Enable USB debugging: activating this option allows greater access to the device through an
Android Debug Bridge (ADB) connection. It is a great tool for the forensic examiner during
the data extraction process. On Android devices, this option can be found in Settings | Development.
From Android 4.2 onward, the developer settings are hidden by default. To enable them, go to
Settings | About phone and tap Build number seven times.
APPLE IPHONE :
Before analyzing an iPhone it is necessary to identify the hardware type and the firmware
installed on it. The easiest check is the rear of the device's shell, where the model number is stamped
(Figure 2.0 – iPhone hardware number). The firmware version can be checked from the iPhone menu under
Settings/General/About/Version (Figure 2.1 – iPhone firmware version).
A good alternative for getting plenty of information from an iPhone is libimobiledevice
(http://www.libimobiledevice.org), currently released in version 1.2, a library for communicating with
Apple devices such as the iPhone, iPad, iPod Touch and Apple TV. It does not require a jailbreak, and it
allows reading device information, performing backup and restore, and similar operations based on
logical file system acquisition. It can be downloaded and used in a Linux environment and is integrated
into the Santoku live distro (https://santoku-linux.com/).
Practical Exercise
In this practical exercise, we get information from an Apple iPhone Smartphone:
Step one – Download the Santoku live distro (santoku_0.5.iso) from https://santoku-linux.com/, burn it
to a DVD-ROM, and boot from it.
Step two – To run libimobiledevice, navigate to Santoku –> Device Forensics –> lib - iMobile.
Step three – This should open a terminal window and list the commands available in the
libimobiledevice tool.
Step four - At this point, you can connect your iOS device to Santoku. If you are using a VM, make sure the
USB device is “attached” to the VM and not the host.Figure 2.4 – iPhone connected to Santoku
Step five – You can easily check the connectivity between your iPhone and Santoku by typing this
command in a terminal window:
ideviceinfo -s
The command returns the device information, including the device name, UDID, the
hardware model and more.
If you want to see only the iPhone's UDID, run the command:
idevice_id -l
This should return the UDID of your phone.
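As a sketch, the key: value lines that libimobiledevice's ideviceinfo prints can be collected into a dictionary for a report. The sample output below is invented, not taken from a real device:

```python
def parse_ideviceinfo(output):
    """Parse `ideviceinfo` output (one `Key: value` pair per line) into a dict."""
    info = {}
    for line in output.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            info[key.strip()] = value.strip()
    return info

# Invented sample resembling ideviceinfo's short (-s) output
sample = "DeviceName: Evidence iPhone\nProductType: iPhone8,1\nProductVersion: 9.3.5"
info = parse_ideviceinfo(sample)
print(info["ProductType"])  # iPhone8,1
```

In practice the output would be captured from the tool itself (for example with Python's subprocess module) and the parsed fields copied into the examination notes.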
ANDROID:
Getting information from an Android device is easy: go to the menu Settings/About Phone/Software and
Hardware information.
PRACTICAL EXERCISE
In this case, we use a Host Windows and Android Software Development Kit. The Android Software
Development Kit (SDK) helps developers build, test, and debug applications to run on Android. It includes
software libraries, APIs, emulator, reference material, and many other tools. These tools not only help
create Android applications but also provide documentation and utilities that help significantly in
forensic analysis of Android devices. Having sound knowledge of the Android SDK can help you
understand the particulars of a device. This, in turn, will help you during an investigation. During
forensic examination, the SDK helps us connect the device and access the data present on the device.
The method to get the serial number of an Android device is the following:
Step one – Download from web site the SDK package:
https://developer.android.com/sdk/download.html?v=archives/android-sdk-windows-1.6_r1.zip
Step two – Create a folder called ANDROID SDK and unzip the zip file you downloaded
Step three – Connect your Android device via USB cable
Step four – In a Windows command prompt, browse to the tools folder inside the ANDROID SDK folder and
run the adb devices command.
Step five – If all works properly, a list of connected devices will appear with their serial numbers. If the
device is not present in the list, check that the driver is working properly and that USB debugging is enabled.
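The serial-number check in the steps above can be scripted. The parser below works on the standard `adb devices` output format; the serial numbers in the sample are invented:

```python
import subprocess

def parse_adb_devices(output):
    """Return serial numbers of devices in the 'device' state from `adb devices` output."""
    serials = []
    for line in output.splitlines()[1:]:  # skip the 'List of devices attached' header
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials

def adb_devices():
    # assumes the SDK's adb binary is on PATH; run only on the forensic workstation
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
    return parse_adb_devices(out)

sample = "List of devices attached\n0123456789ABCDEF\tdevice\nemulator-5554\toffline\n"
print(parse_adb_devices(sample))  # ['0123456789ABCDEF']
```

Devices reported as `offline` or `unauthorized` are filtered out, which mirrors the driver and USB-debugging checks described in Step five.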
profile includes friends, groups, video feeds, and undeleted photos (“U.S. Law Enforcement obtaining
warrants to search Facebook Profiles,” www.foxnews.com/tech/2011/07/12/us-law-enforcement-obtain-
warrants-to-search-facebook-profiles/, 2011). Typically, this profile is given to law enforcement only with
a warrant.
Social Media Forensics on Mobile Devices
In mid-2017, Facebook had 2 billion users worldwide. Of those, 1.74 billion, or 87%, were mobile
users (www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/).
Although Twitter has only 328 million monthly users, 80% of them are mobile users
(www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/). A study in 2012
examined Facebook, Twitter, and MySpace use on BlackBerries, iPhones, and Android devices and
discovered, for example, that physical acquisitions of iPhones required “jailbreaking,” meaning they got
root access to the device’s OS to bypass the provider’s codes for preventing users from switching to other
providers and preventing unauthorized people from taking actions an investigator would take (Noora Al
Mutawa, et al, “Forensic analysis of social networking applications on mobile devices,” Digital
Investigation 9, 2012, www.dfrws.org/2012/proceedings/DFRWS2012-3.pdf).
In addition, they found that evidence artifacts vary depending on the social media channel and the
device. For example, on iPhones, a SQLite database for Facebook was found that lists friends, their ID
numbers, and phone numbers as well as files that tracked all uploads, including pictures. Similar
databases were found on Twitter. On Android devices, Facebook friends were found in the contacts list
because these devices synchronized with Facebook. Forensic analysis also showed that iPhone and
Android devices yielded the most information, and much of the data was stored in SQLite databases.
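An examiner can query such recovered SQLite databases directly with Python's sqlite3 module. The schema below is a stand-in built in memory for illustration; the actual table and column names used by the Facebook app differ:

```python
import sqlite3

# Build a tiny stand-in for a recovered database (in a real case you would
# open the .db file extracted from the device instead of ":memory:").
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE friends (uid INTEGER, name TEXT, phone TEXT)")
con.executemany("INSERT INTO friends VALUES (?, ?, ?)",
                [(101, "Alice Example", "+1-555-0101"),
                 (102, "Bob Example", "+1-555-0102")])

# List friends with their ID numbers and phone numbers, as described above
for uid, name, phone in con.execute("SELECT uid, name, phone FROM friends ORDER BY uid"):
    print(uid, name, phone)
```

The same approach works for any SQLite artifact pulled from a device image; identifying the schema first (for example with `SELECT name FROM sqlite_master`) is usually the examiner's opening move.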
Following standard procedures—doing a logical acquisition followed by a physical acquisition—can yield
solid evidence, especially with devices that aren’t locked.
Forensics Tools for Social Media Investigations
Software for social media forensics is being developed, but not many tools are available. A number of
social media tools that were free or inexpensive have now been incorporated into forensics suites, such as
FTK Social Analyzer, or offer only 14-day to 30-day trials. In addition, there are many questions about
how the information these tools gather can be used in court or in arbitration. Investigators often run into
the problem of finding information unrelated to a case, and sometimes they must stop to get another
warrant or subpoena, such as investigating a claim of fraud and finding evidence of corporate espionage.
Using social media forensics software might also require getting the permission of people whose
information is being examined. Many OSN tools use customized Web crawlers to find data, but these
crawlers often take too long to find information to be efficient. A few helpful software packages are available,
however. For example, X1 Social Discovery (www.x1.com/products/x1_social_discovery/) can be used in
two modes in Facebook: a credentialed user account (which requires the username and password of the
person under investigation) and a public account (created to examine the publicly accessible posts of
people or groups). X1 also has tools for Twitter and YouTube. In addition, researchers created an open-
source tool to target Facebook accounts (Huber, Mulazzani, et al, “Social Snapshots: Digital Forensics for
Online Social Networks,” ACSAC ’11, Proceedings of the 27th Annual Computer Security Applications
Conference, December 2011,
www.sba-research.org/wp-content/uploads/publications/social_snapshots_preprint.pdf). As with any
investigation, you need a warrant or subpoena to ask an OSN to produce its records. There are other
approaches you can take, however. If people are cooperating with your investigation, they might give you
the usernames and passwords to their social media accounts. If not, you can access only their public
profile or become friends with one of their friends, which might give you limited information. For this
approach, there are a few steps you need to take:
1. Begin with a workstation that doesn’t contain any of your personal information, or create a virtual
machine with a bridged network (meaning it has a different IP address from the host computer).
2. Many people link their cell phone numbers to their Facebook accounts, so try looking up the suspect’s
cell phone number in Facebook, which shows you the person’s username, too. People often use the
same username in all platforms, including Twitter, Instagram, LinkedIn, and so forth.
3. Next, you should do a Google search on this username, making sure to use your investigation
workstation. Disable Google’s Safe Search feature and “instant results,” which Google uses to guess
what you’re searching for. Last, but not least, turn off location-based searches so that Google doesn’t
use your location to filter results. For example, if you’re searching for a restaurant serving pizza and
you’re in New York City, Google won’t return search results for Miami—or any other city.
4. Collect as much information as possible on Google, and use it to find friends of the suspect and then
attempt to friend these people. With some social media tools, you need to create a decoy account.
Remember that it’s against the law to use someone else’s likeness as your own for a social media
account, and operating within the law is crucial in any investigation.
DATA ACQUISITION OF ENCRYPTED SYSTEMS :
Typically, a static acquisition is done on a computer seized during a police raid, for example. If the
computer has an encrypted drive, a live acquisition is done if the password or passphrase is available—
meaning the computer is powered on and has been logged on to by the suspect. Static acquisitions are
always the preferred way to collect digital evidence. However, they do have limitations in some situations,
such as an encrypted drive that’s readable only when the computer is powered on or a computer that’s
accessible only over a network. Some solutions can help decrypt a drive that has been encrypted with
whole disk encryption, such as Elcomsoft Forensic Disk Decryptor (www.elcomsoft.com/efdd.html).
For both types of acquisitions, data can be collected with four methods: creating a disk-to-image
file, creating a disk-to-disk copy, creating a logical disk-to-disk or disk-to-data file, or creating a sparse
copy of a folder or file. Determining the best acquisition method depends on the circumstances of the
investigation.
Creating a disk-to-image file is the most common method and offers the most flexibility for your
investigation. With this method, you can make one or many copies of a suspect drive. These copies are bit-
for-bit replications of the original drive. In addition, you can use many commercial forensics tools to read
the most common types of disk-to-image files you create. These programs read the disk-to-image file as
though it were the original disk. Older MS-DOS tools can only read data from a drive. To use MS-DOS
tools, you have to duplicate the original drive to perform the analysis. GUI programs save time and disk
resources because they can read and interpret directly from the disk-to-image file of a copied drive.
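The core of a disk-to-image acquisition, a bit-for-bit copy verified by a hash, can be sketched in a few lines of Python. The block size is an arbitrary choice, and in a real case the source would be a device node (for example /dev/sdb on Linux) reached through a write blocker:

```python
import hashlib

CHUNK = 1024 * 1024  # read 1 MB at a time so large drives don't exhaust RAM

def image_disk(source_path, image_path):
    """Copy a source device or file bit for bit into an image file, hashing as we go."""
    md5 = hashlib.md5()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            img.write(block)
            md5.update(block)
    return md5.hexdigest()  # record this value to verify the image later
```

Re-hashing the finished image file and comparing the result with the returned value demonstrates that the copy matches the original, which is the same verification step commercial imaging tools perform.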
Sometimes you can’t make a disk-to-image file because of hardware or software errors or
incompatibilities. This problem is more common when you have to acquire older drives. For these drives,
you might have to create a disk-to-disk copy of the suspect drive. Several imaging tools can copy data
exactly from an older disk to a newer disk. These programs can adjust the target disk’s geometry (its
cylinder, head, and track configuration) so that the copied data matches the original suspect drive. These
imaging tools include EnCase and X-Ways Forensics. See the vendors’ manuals for instructions on using
these tools for disk-to-disk copying.
Awareness Building:
Awareness building is most important for reducing cyber crime and IT crime; thus, the following
practices are essential:
1. Regularly changing the passwords of computing devices such as computers and networking systems,
as well as the passwords of services such as email, social networking sites, and other service-based
sites registered by the applicant or user.
2. Reducing the use of email in cyber cafés and on other shared computing devices.
3. Avoiding opening connections to and communicating with unknown computers and similar devices.
Why Volatility?
Before you start using Volatility, you should understand some of its unique features. As previously
mentioned, Volatility is not the only memory forensics application; it was specifically designed to be
different. Here are some of the reasons why it quickly became our tool of choice:
11. A single, cohesive framework. Volatility analyzes memory from 32- and 64-bit Windows, Linux,
Mac systems (and 32-bit Android). Volatility’s modular design allows it to easily support new
operating systems and architectures as they are released.
12. It is Open Source GPLv2. This means you can read the source code, learn from it, and extend it. By
learning how Volatility works, you will become a more effective analyst.
13. It is written in Python. Python is an established forensic and reverse engineering language with
loads of libraries that can easily integrate into Volatility.
14. Runs on Windows, Linux, or Mac analysis systems. Volatility runs anywhere Python can be
installed, a refreshing break from other memory analysis tools that run only on Windows.
15. Extensible and scriptable application programming interface (API). Volatility gives you the
power to go beyond and continue innovating. For example, you can use Volatility to drive your
malware sandbox, perform virtual machine (VM) introspection, or just explore kernel memory in an
automated fashion.
16. Unparalleled feature sets. Capabilities have been built into the framework based on reverse
engineering and specialized research. Volatility provides functionality that even Microsoft’s own
kernel debugger doesn’t support.
17. Comprehensive coverage of file formats. Volatility can analyze raw dumps, crash dumps,
hibernation files, and various other formats (see Chapter 4). You can even convert back and forth
between these formats.
18. Fast and efficient algorithms. This lets you analyze RAM dumps from large systems in a fraction of
the time it takes other tools, and without unnecessary memory consumption.
19. Serious and powerful community. Volatility brings together contributors from commercial
companies, law enforcement, and academic institutions around the world. Volatility is also being built
on by a number of large organizations, such as Google, National DoD Laboratories, DC3, and many
antivirus and security shops.
20. Focused on forensics, incident response, and malware. Although Volatility and Windbg share
some functionality, they were designed with different primary purposes in mind. Several aspects are
often very important to forensics analysts but not as important to a person debugging a kernel driver
(such as unallocated storage, indirect artifacts, and so on).
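A typical Volatility 2 run is a command line naming the memory image, a profile, and a plugin. The helper below only assembles such a command; the image name and profile are placeholders, and actually running it assumes vol.py is installed on the analysis workstation:

```python
import subprocess

def build_vol_cmd(image, profile, plugin, vol_path="vol.py"):
    """Assemble a Volatility 2-style command line for a given plugin."""
    return ["python", vol_path, "-f", image, "--profile=" + profile, plugin]

# List processes in a (hypothetical) Windows 7 memory dump
cmd = build_vol_cmd("memdump.raw", "Win7SP1x64", "pslist")
print(" ".join(cmd))  # python vol.py -f memdump.raw --profile=Win7SP1x64 pslist

# To execute on a workstation with Volatility installed:
# result = subprocess.run(cmd, capture_output=True, text=True)
```

Wrapping the command this way also makes it easy to script a whole battery of plugins (pslist, netscan, malfind, and so on) against the same image.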
What Volatility Is Not?
Volatility is a lot of things, but there are a few categories in which it does not fit. These categories are:
4. It is not a memory acquisition tool: Volatility does not acquire memory from target systems. You
acquire memory with one of the tools mentioned in Chapter 4 and then analyze it with Volatility. An
exception is when you connect to a live machine over Firewire and use Volatility’s imagecopy plugin
to dump the RAM to a file. In this case, you are essentially acquiring memory.
5. It is not a GUI: Volatility is a command line tool and a Python library that you can import from your
own applications, but it does not include a front-end. In the past, various members of the forensics
community developed GUIs for Volatility, but these are currently unsupported by the official
development team.
6. It is not bug-free: Memory forensics can be fragile and sensitive in nature. Supporting RAM dumps
from multiple versions of most major operating systems (that are usually running obscure third-party
software) comes with a cost: It can lead to complex conditions and difficult-to-reproduce problems.
Although the development team makes every effort to be bug free, sometimes it’s just not possible.
iSCSI connection. F-Response presents a vendor- and OS-agnostic view of the target system’s physical
memory and hard disks, which means you can access them from Windows, Mac OS X, or Linux
analysis stations and process them with any tool.
13. Mandiant Memoryze: A tool you can easily run from removable media and that supports acquisition
from most popular versions of Microsoft Windows. You can import the XML output of Memoryze into
Mandiant Redline for graphical analysis of objects in physical memory.
14. HBGary FastDump: A tool that claims to leave the smallest footprint possible, the ability to acquire
page files and physical memory into a single output file (HPAK), and the ability to probe process
memory (a potentially invasive operation that forces swapped pages to be read back into RAM before
acquisition).
15. MoonSols Windows Memory Toolkit: The MWMT family includes win32dd, win64dd, and the most
recent version of DumpIt—a utility that combines the 32- and 64-bit memory dumping acquisition
tools into an executable that requires just a single click to operate. No further interaction is required.
However, if you do need more advanced options, such as choosing between output format types,
enabling RC4 encryption, or scripting the execution across multiple machines, you can do that as well.
16. AccessData FTK Imager: This tool supports acquisition of many types of data, including RAM.
AccessData also sells a pre-configured live response USB toolkit that acquires physical memory in
addition to chat logs, network connections, and so on.
17. EnCase/WinEn: The acquisition tool from Guidance Software can dump memory in compressed
format and record metadata in the headers (such as the case name, analyst, etc.). The Enterprise
version of EnCase leverages similar code in its agent that allows remote interrogation of live systems
(see http://volatility-labs.blogspot.com/2013/10/sampling-ram-across-encase-enterprise.html).
18. Belkasoft Live RAM Capturer: A utility that advertises the ability to dump memory even when
aggressive anti-debugging and anti-dumping mechanisms are present. It supports all the major 32-
and 64-bit Windows versions and can be run from a USB thumb drive.
19. ATC-NY Windows Memory Reader: This tool can save memory in raw or crash dump formats and
includes a variety of integrity hashing options. When used from a UNIX-like environment such as
MinGW or Cygwin, you can easily send the output to a remote netcat listener or over an encrypted
SSH tunnel.
20. Winpmem: The only open-source memory acquisition tool for Windows. It includes the capability to
output files in raw or crash dump format, choose between various acquisition methods (including the
highly experimental PTE remapping technique), and expose physical memory through a device for
live analysis of a local system.
Acquiring Data with dcfldd in Linux :
The dd command is intended as a data management tool; it’s not designed for forensics
acquisitions. Because of dd's shortcomings for forensic use, Nicholas Harbour of the Defense Computer Forensics
Laboratory (DCFL) developed a tool that can be added to most UNIX/Linux OSs. This tool, the dcfldd
command, works similarly to the dd command but has many features designed for forensics acquisitions.
The following are important functions dcfldd offers that aren’t possible with dd:
Specify hexadecimal patterns or text for clearing disk space.
Log errors to an output file for analysis and review.
Use the hashing options MD5, SHA-1, SHA-256, SHA-384, and SHA-512 with logging and the option of
specifying the number of bytes to hash, such as specific blocks or sectors.
Refer to a status display indicating the acquisition’s progress in bytes.
Split data acquisitions into segmented volumes with numeric extensions (unlike dd’s limit of 99).
Verify the acquired data with the original disk or media data.
When using dcfldd, you should follow the same precautions as with dd. The dcfldd command can
also write to the wrong device, if you aren’t careful. The following examples show how to use the dcfldd
command to acquire data from a 64 MB USB drive, although you can use the command on a larger media
device. All commands need to be run from a privileged root shell session. To acquire an entire media
device in one image file, type the following command at the shell prompt:
dcfldd if=/dev/sda of=usbimg.dat
If the suspect media or disk needs to be segmented, use the dcfldd command with the split
option, placing split= before the output file field (of=), as shown here:
dcfldd if=/dev/sda hash=md5 md5log=usbimgmd5.txt bs=512 conv=noerror,sync split=2M of=usbimg
This command creates segmented volumes of 2 MB each. To create segmented volumes that fit on
a CD of 650 MB, change the split=2M to split=650M. This command also saves the MD5 value of the
acquired data in a text file named usbimgmd5.txt.
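dcfldd may not be present on every examination workstation, but the same split-and-hash workflow can be rehearsed with standard coreutils against a small practice file instead of a real device (all file names below are illustrative, not from the text):

```shell
# Create a 4 MB practice "evidence" file so no real media is touched.
dd if=/dev/urandom of=practice.dd bs=1M count=4 2>/dev/null

# Hash the source before imaging, as dcfldd's hash=md5 option would.
md5sum practice.dd > practicemd5.txt

# Split the image into 2 MB segments with numeric extensions,
# mirroring dcfldd's split=2M of=usbimg behavior (.000, .001, ...).
split -b 2M -d -a 3 practice.dd usbimg.

# Reassembling the segments in order must reproduce the source hash.
cat usbimg.* | md5sum
```

Because the shell expands usbimg.* in lexical order, concatenating the segments recreates the original byte stream, which is why the final hash must match the one recorded at acquisition time.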
Capturing an Image with AccessData FTK Imager Lite :
The following activity assumes you have removed the suspect drive and connected it to a USB or
FireWire write-blocker device connected to your forensic workstation. The acquisition is written to a
work folder on your C drive, assuming it has enough free space for the acquired data. Follow these steps to
perform the first task of connecting the suspect’s drive to your workstation:
Document the chain of evidence for the drive you plan to acquire.
Remove the drive from the suspect’s computer.
For IDE drives, configure the suspect drive’s jumpers as needed. (Note: This step doesn’t apply to
SATA or USB drives.)
Connect the suspect drive to the USB or FireWire write-blocker device.
Create a storage folder on the target drive. For this activity, you use your work folder (C:\Work\
Chap03\Chapter), but in real life, you’d use a folder name such as C:\Evidence.
FTK Imager is a data acquisition tool included with a licensed copy of AccessData Forensic Toolkit.
Like most Windows data acquisition tools, it requires using a USB dongle for licensing. FTK Imager Lite,
Debian and Ubuntu x64 command-line interfaces, and macOS 10.5 and 10.6x command-line interfaces are
free and require no dongle license.
FTK Imager can make disk-to-image copies of evidence drives and enables you to acquire an
evidence drive from a logical partition level or a physical drive level. You can also define the size of each
disk-to-image file volume, allowing you to segment the image into one or many split volumes. For
example, you can specify 650 MB volume segments if you plan to store volumes on 650 MB CD-Rs or 2.0
GB volume segments so that you can record volumes on DVD-/+Rs. An additional feature of FTK Imager is
that it can image RAM on a live computer. The evidence drive you’re acquiring data from must have a
hardware write-blocking device or run from a Live CD, such as Mini-WinFE.
FTK Imager can’t acquire a drive’s host protected area (HPA) or device configuration overlay (DCO), however. In other
words, if the drive’s specifications indicate it has 11,000,000 sectors and the BIOS display indicates
9,000,000, a host protected area of 2,000,000 sectors might be assigned to the drive. If you suspect an
evidence drive has a host protected area, you must use an advanced acquisition tool to include this area
when copying data. With older MS-DOS tools, you might have to define the exact sector count to make
sure you include more than what the BIOS shows as the number of known sectors on a drive. Review
vendors’ manuals to determine how to account for a drive’s host protected area.
Validating dcfldd-Acquired Data (Hash Validation) :
Because dcfldd is designed for forensics data acquisition, it has validation options integrated: hash and
hashlog. You use the hash option to designate a hashing algorithm of md5, sha1, sha256, sha384, or
sha512. The hashlog option outputs hash results to a text file that can be stored with image files. To create
an MD5 hash output file during a dcfldd acquisition, you enter the following command (in one line) at
the shell prompt:
dcfldd if=/dev/sda split=2M of=usbimg hash=md5
hashlog=usbhash.log
To see the results of files generated with the split command, you enter the list directory (ls) command at
the shell prompt. You should see the following output:
usbhash.log  usbimg.000  usbimg.004  usbimg.010  usbimg.016  usbimg.022  usbimg.028
Note that the first segmented volume has the extension .000 rather than .001. Some Windows
forensics tools might not be able to read segmented file extensions starting with .000. They’re typically
looking for .001. If your forensics tool requires starting with an .001 extension, the files need to be
renamed incrementally. So segmented file.000 should be renamed .001, .001 should be renamed .002, and
so on.
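The renaming must proceed from the highest-numbered segment downward, or .001 would be overwritten before it is moved. A minimal sketch of that reverse rename (segment names here are made up for illustration):

```shell
# Create dummy segment files img.000 .. img.003 for demonstration.
for i in 000 001 002 003; do echo "segment $i" > img.$i; done

# Rename from the highest extension down so no file is clobbered:
# .003 -> .004, .002 -> .003, .001 -> .002, .000 -> .001
for n in $(ls img.* | sed 's/.*\.//' | sort -r); do
  next=$(printf '%03d' "$(expr "$n" + 1)")
  mv "img.$n" "img.$next"
done

ls img.*
```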
Another useful dcfldd option is vf (verify file), which compares the image file with the original
medium, such as a partition or drive. The vf option applies only to a nonsegmented image file. To validate
segmented files from dcfldd, use the md5sum or sha1sum command described previously. To use the vf
option, you enter the following command at the shell prompt:
dcfldd if=/dev/sda vf=sda_hash.img
Hash validation is also supported in many forensics programs, such as OSForensics, Autopsy, EnCase, and FTK. In Chapter 9, you learn how to hash
specific data by using a hexadecimal editor to locate and verify groups of data that have no file association
or are sections within a file.
Commercial forensics programs also have built-in validation features. Each program has its own
validation technique used with acquisition data in its proprietary format. For example, Autopsy uses MD5
to validate an image. It reads the metadata in Expert Witness Compression or AFF image files to get the
original hash. If the hashes don’t match, Autopsy notifies you that the acquisition is corrupt and can’t be
considered reliable evidence.
In Autopsy and many other forensics tools, however, raw format image files don’t contain
metadata. As mentioned, a separate manual validation is recommended for all raw acquisitions at the
time of analysis. The previously generated validation file for raw format acquisitions is essential to the
integrity of digital evidence. The saved validation file can be used later to check whether the acquisition
file is still good.
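That later check against the saved validation file is a one-line job with md5sum's -c option; a sketch with made-up file names:

```shell
# At acquisition time: hash the raw image and keep the validation file.
printf 'raw acquisition data' > evidence.raw
md5sum evidence.raw > evidence.md5

# At analysis time, possibly much later: re-hash and compare.
# md5sum -c prints "evidence.raw: OK" and exits 0 only while the
# image still matches the hash recorded at acquisition.
md5sum -c evidence.md5
```

If even a single bit of the image changed, md5sum -c would report FAILED and exit nonzero, flagging the acquisition as no longer reliable.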
In FTK Imager Lite, when you select the Expert Witness Compression (.e01) or the SMART (.s01)
format, additional options for validation are displayed. This validation report also lists the MD5 and SHA-
1 hash values. The MD5 hash value is added to the proprietary format image or segmented files. When
this image is loaded into FTK, SMART, or X-Ways Forensics (which can read only .e01 and raw files), the
MD5 hash is read and compared with the image to verify whether the acquisition is correct.
Disk Cloning : Disk cloning is the process of copying the entire contents of one hard drive to another
including all the information that enables you to boot to the operating system from the drive. A cloning
program enables you to make a one-to-one copy of one of your computer's hard drives on another hard
drive. This second copy of the hard drive is fully operational and can be swapped with the computer's
existing hard drive. If you boot to the cloned drive, its data will be identical to the source drive at the time
it was created. A cloned drive can be used to replace its source drive in a computer in the event that
something bad happens to the original drive.
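The clone-and-verify cycle can be illustrated with dd and cmp, using a small practice file in place of the physical drives (names are illustrative):

```shell
# A practice file stands in for the source drive.
dd if=/dev/urandom of=source.img bs=1M count=2 2>/dev/null

# Clone: a bit-for-bit copy, as a cloning program would produce.
dd if=source.img of=clone.img bs=64K 2>/dev/null

# Verify: cmp is silent and exits 0 only when every byte matches.
cmp source.img clone.img && echo "clone verified"
```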
Disk Imaging : Disk imaging is the process of making an archival or backup copy of the entire
contents of a hard drive. A disk image is a storage file that contains all the data stored on the source hard
drive and the necessary information to boot to the operating system. However, the disk image needs to be
applied to the hard drive to work. You can't restore a hard drive by simply placing the disk image files on
it; the image must be restored to the drive with an imaging program. Unlike cloned drives, a single
hard drive can store several disk images on it. Disk images can also be stored on optical media and flash
drives.
Disk Cloning by Image : When you apply a disk image to a hard drive, you're creating a copy of the
original contents of the drive. Disk images are usually used for restoring a hard drive's previous contents
or transferring contents to a new hard drive. However, you can use a disk image to create a copy of the
source hard drive on a second hard drive, making it a clone of the original drive.
An encrypted payload is indistinguishable from uniformly distributed noise, which can make detection efforts more difficult, and
save the steganographic encoding technique the trouble of having to distribute the signal energy evenly
(but see above concerning errors emulating the native noise of the carrier).
Barrage noise
If inspection of a storage device is considered very likely, the steganographer may attempt to barrage a
potential analyst with, effectively, misinformation. This may be a large set of files encoded with anything
from random data, to white noise, to meaningless drivel, to deliberately misleading information. The
encoding density on these files may be slightly higher than the "real" ones; likewise, the possible use of
multiple algorithms of varying detectability should be considered. The steganalyst may be forced into
checking these decoys first, potentially wasting significant time and computing resources. The downside
to this technique is that it makes it much more obvious that steganographic software was available and
was used.
Steganalysis methods
There are various methods of analysis depending on what information is available:
1. Stego-only attack: Only the stego-object is available for analysis.
2. Known cover attack: The stego-object as well as the original medium is available. The stego-object is
compared with the original cover object to detect any hidden information.
3. Known message attack: The hidden message and the corresponding stego-image are known. The
analysis of patterns that correspond to the hidden information could help decipher such messages in
future.
4. Known stego attack: The steganography algorithm is known and both the original and stego-object are
available.
5. Chosen stego attack: The steganography algorithm and stego-object are known.
6. Chosen message attack: The steganalyst generates a stego-object from some steganography tool or
algorithm of a chosen message. The goal in this attack is to determine patterns in the stego-object that
may point to the use of specific steganography tools or algorithms.
Tools
Niels Provos' Stegdetect is a common steganalysis tool. Stegdetect can find hidden information in JPEG
images using such steganography schemes as F5, Invisible Secrets, JPHide, and JSteg. It also has a
graphical interface called Xsteg. WetStone Technologies' Stego Suite comprises three products. Stego
Watch is a steganography detection tool that looks for hidden content in digital image or audio files. Stego Analyst
is an image and audio file analyzer that integrates with Stego Watch to provide more detailed analysis
of suspect files, and Stego Break is a password cracker designed to obtain the passphrase for a file found to
contain steganography.
2. Candidates have successfully completed 40 hours of training from an approved agency, organization,
or training company.
3. Candidates must provide documentation of at least 10 cases in which they participated.
Certified Computer Crime Investigator, Advanced
1. Candidates must have five years of experience directly related to investigating computer-related
incidents or crimes.
2. Candidates have successfully completed 80 hours of training from an approved agency, organization,
or company.
3. Candidates have served as lead investigator in at least 20 cases during the past three years and were
involved in at least 40 other cases as a lead investigator or supervisor or in a supportive capacity.
Candidates have at least 60 hours of involvement in cases in the past three years.
Certified Computer Forensic Technician, Basic
1. Candidates must have three years of experience in computing investigations for law enforcement or
corporate cases.
2. Candidates must have completed 40 hours of computer forensics training from an approved
organization.
3. Candidates must provide documentation of at least 10 computing investigations.
Certified Computer Forensic Technician, Advanced
1. Candidates must have five years of hands-on experience in computer forensics investigations for law
enforcement or corporate cases.
2. Candidates must have completed 80 hours of computer forensics training from an approved
organization.
3. Candidates must provide documentation of at least 15 computing investigations.
4. Candidates must have been the lead computer forensics investigator in 20 or more investigations in
the past three years and in 40 or more additional computing investigations as lead computer forensics
technician, supervisor, or contributor. The candidate must have completed at least 60 investigations
in the past three years.
EnCase Certified Examiner Certification
Guidance Software, the creator of EnCase, sponsors the EnCase Certified Examiner (EnCE)
certification program. EnCE certification is open to the public and private sectors and is specific to use
and mastery of EnCase forensics analysis. Requirements for taking the EnCE certification exam don’t
depend on taking the Guidance Software EnCase training courses. Candidates for this certification are
required to have a licensed copy of EnCase. For more information on EnCE certification requirements,
visit www.guidancesoftware.com/training/certifications?cmpid=nav_r. Additional certifications offered
by Guidance Software are Certified Forensic Security Responder (CFSR) and EnCase Certified eDiscovery
Practitioner (EnCEP).
While this is not an exhaustive list, it gives you a picture of what constitutes digital forensics tools
and what you can do with them. Sometimes multiple tools are packaged together into a single toolkit to
help you tap into the potential of related tools. Also, it is important to note that these categories can get
blurred at times depending on the skill set of the staff, the lab conditions, availability of equipment,
existing laws, and contractual obligations. For example, tablets without SIM cards are considered to be
computers, so they would need computer forensics tools and not mobile forensics tools. But regardless of
these variations, what is important is that digital forensics tools offer a vast amount of possibilities to
gain information during an investigation. It is also important to note that the landscape of digital
forensics is highly dynamic with new tools and features being released regularly to keep up with the
constant updates of devices.
Choosing the right tool
Given the many options, it is not easy to select the right tool that will fit your needs. Here are some
aspects to consider while making the decision.
Skill level: Skill level is an important factor when selecting a digital forensics tool. Some tools only need a
basic skill set while others may require advanced knowledge. A good rule of thumb is to assess the skills
you have versus what the tool requires, so you can choose the most powerful tool that you have the
competence to operate.
Output : Tools are not built the same, so even within the same category, outputs will vary. Some tools will
return just raw data while others will output a complete report that can be instantly shared with non-
technical staff. In some cases, raw data alone is enough as your information may anyway have to go
through more processing, while in others, having a formatted report can make your job easier.
Cost : Needless to say, the cost is an important factor as most departments have budgetary constraints.
One aspect to keep in mind here – the cheapest tools may not have all the features you want as that’s how
developers keep the costs low. Instead of choosing a tool based on cost alone, consider striking a balance
between cost and features while making your choice.
Focus : Another key aspect is the focus area of the tool, since different tasks usually require different
tools. For example, tools for examining a database are very different from those needed to examine a
network. The best practice is to create a complete list of feature requirements before buying. As
mentioned before, some tools can cover multiple functionality in a single kit which could be a better deal
than finding separate tools for every task.
Additional accessories : Some tools may need additional accessories to operate and this is
something that has to be taken into account as well. For example, some network forensics tools may
require specific hardware or software-bootable media. So make sure to check the hardware and software
requirements before buying.
Here are five of the best free tools that will help you conduct a digital forensic
investigation. Whether it’s for an internal human resources case, an investigation into unauthorized
access to a server, or if you just want to learn a new skill, these suites and utilities will help you conduct
memory forensic analysis, hard drive forensic analysis, forensic image exploration, forensic imaging and
mobile forensics. As such, they all provide the ability to bring back in-depth information about what’s
“under the hood” of a system.
01 SANS SIFT
The SANS Investigative Forensic Toolkit (SIFT) is an Ubuntu based Live CD which includes all the
tools you need to conduct an in-depth forensic or incident response investigation. It supports analysis of
Expert Witness Format (E01), Advanced Forensic Format (AFF), and RAW (dd) evidence formats. SIFT
includes tools such as log2timeline for generating a timeline from system logs, Scalpel for data file
carving, Rifiuti for examining the recycle bin, and lots more. When you first boot into the SIFT
environment, I suggest you explore the documentation on the desktop to help you become accustomed to
what tools are available and how to use them. There is also a good explanation of where to find evidence
on a system. Use the top menu bar to open a tool, or launch it manually from a terminal window.
Key features
1. 64-bit base system
2. Auto-DFIR package update and customizations
3. Cross compatibility with Linux and Windows.
4. Expanded file system support
5. Option to install the standalone system
02 CrowdStrike CrowdResponse
CrowdResponse is a lightweight console application that can be used as part of an incident
response scenario to gather contextual information such as a process list, scheduled tasks, or Shim Cache.
Using embedded YARA signatures you can also scan your host for malware and report if there are any
indicators of compromise. To run CrowdResponse, extract the ZIP file and launch a Command Prompt
with administrative privileges. Navigate to the folder where the CrowdResponse*.exe executable resides and
enter your command parameters. At minimum, you must include the output path and the ‘tool’ you wish
to use to collect data. For a full list of ‘tools’, enter CrowdResponse64.exe in the command prompt and it
will bring up a list of supported tool names and example parameters. Once you’ve exported the data you
need, you can use CRconvert.exe to convert the data from XML to another file format like CSV or HTML.
Key features
1. Comes with three modules – directory-listing, active running module, and YARA processing module.
2. Displays application resource information
3. Verifies the digital signature of the process executable.
4. Scans memory, loaded module files, and on-disk files of all currently running processes.
03 Volatility
Volatility is a memory forensics framework for incident response and malware analysis that
allows you to extract digital artefacts from volatile memory (RAM) dumps. Using Volatility you can
extract information about running processes, open network sockets and network connections, DLLs
loaded for each process, cached registry hives, process IDs, and more. If you are using the standalone
Windows executable version of Volatility, simply place volatility-2.x.standalone.exe into a folder and open
a command prompt window. From the command prompt, navigate to the location of the executable file
and type “volatility-2.x.standalone.exe -f <FILENAME> --profile=<PROFILENAME> <PLUGINNAME>”
without quotes – FILENAME would be the name of the memory dump file you wish to analyse,
PROFILENAME would be the machine the memory dump was taken on and PLUGINNAME would be the
name of the plugin you wish to use to extract information.
[ Note: For example, the ‘connscan’ plugin searches a physical memory dump for TCP connection
information. ]
Key features
1. Supports a wide variety of sample file formats.
2. Runs on Windows, Linux, and Mac
3. Comes with fast and efficient algorithms to analyze RAM dumps from large systems.
4. Its extensible and scriptable API opens new possibilities for extension and innovation.
04 The Sleuth Kit (+Autopsy)
The Sleuth Kit is an open source digital forensics toolkit that can be used to perform in-depth
analysis of various file systems. Autopsy is essentially a GUI that sits on top of The Sleuth Kit. It comes
with features like Timeline Analysis, Hash Filtering, File System Analysis and Keyword Searching out of
the box, with the ability to add other modules for extended functionality.
[ Note: You can use The Sleuth Kit if you are running a Linux box and Autopsy if you are running a
Windows box. ]
When you launch Autopsy, you can choose to create a new case or load an existing one. If you
choose to create a new case you will need to load a forensic image or a local disk to start your analysis.
Once the analysis process is complete, use the nodes on the left hand pane to choose which results to view.
Key features
1. Displays system events through a graphical interface.
2. Offers registry, LNK files, and email analyses.
3. Supports most common file formats
4. Extracts data from SMS, call logs, contacts, Tango, and Words with Friends, and analyses the same.
05 FTK Imager
FTK Imager is a data preview and imaging tool that allows you to examine files and folders on
local hard drives, network drives, CDs/DVDs, and review the content of forensic images or memory
dumps. Using FTK Imager you can also create SHA1 or MD5 hashes of files, export files and folders from
forensic images to disk, review and recover files that were deleted from the Recycle Bin (providing that
their data blocks haven’t been overwritten), and mount a forensic image to view its contents in Windows
Explorer.
[ Note: There is a portable version of FTK Imager that will allow you to run it from a USB disk. ]
When you launch FTK Imager, go to ‘File > Add Evidence Item…’ to load a piece of evidence for
review. To create a forensic image, go to ‘File > Create Disk Image…’ and choose which source you wish to
forensically image.
Key features
1. Comes with data preview capability to preview files/folders as well as their contents.
2. Supports image mounting
3. Uses multi-core CPUs to parallelize actions.
4. Accesses a shared case database, so multiple examiners can work from a single central database for a case.
DATA RECOVERY ETHICS
To the extent possible, logical data recovery operations shall be performed only against a copy of
the data rather than against original media. A first objective in data recovery should always be to obtain a
copy of the data-containing sectors regardless of the assumed condition of the media.
Exception 1: If media is determined to be fully functional by obtaining a full sector-by-sector copy of the
data, logical operations may be performed against original media so long as a copy is set aside for the sole
purpose of backup.
Data recovery equipment and software shall be configured such that no modification of the
original media's data can occur. If imaging/cloning is performed using software-only methods, a
hardware write-block device shall be used. If hardware imaging tools are used, channels for source and
destination drives shall be clearly labeled and/or numbered to avoid confusion that could result in data
being copied from destination to source media.
POINTS TO REMEMBER :
1. Never remove the cover of a drive except in a Class 100 or better clean room environment
2. Never physically impact (“percussive maintenance”) or aggressively twist a drive to remedy stiction
3. Never place a drive in a freezer or refrigerator, even if sealed in a plastic bag
4. Always image/clone the drive and work on the image/clone—never the patient drive
5. Return the drive in the same or better condition as received if your quote is rejected
6. Never apply firmware changes to a drive without having first backed-up the drive’s FW resources and
having the skills and equipment to restore them from the back-up.
7. Diagnose the drive and determine the cause of failure before applying repairs or trying to recover data
8. Know the difference between hardware, firmware and logical failures and apply the tools and repairs
appropriate to the diagnosed failure type (i.e., don't try to use a software tool to remedy a hardware
failure)
9. Identity-mark the patient drive, its PCB, and its ROM to prevent confusion or mismatches with respective
donor parts
10. Catalogue and label all drives received to minimize their loss or return to the wrong customer
11. Only return data and drives to the owner/customer; zero or destroy the media of drives before their
disposal
12. Never power on a drive that has been dropped, without first inspecting the heads for damage
13. Never image/clone a drive without double-checking the source and destination drive identities; if
there is any doubt at all, do not proceed
14. Zero destination drives before use to prevent cross-contamination of data
15. Respect and protect the privacy and confidentiality of the data; don’t explore the data beyond what’s
required to provide the service; don’t share the data with others having no right to access it
16. Be truthful when dealing with people and about what’s needed to repair the drive or recover the data,
e.g., that heads need to be changed in a clean room when the problem is logical, or charging for a
donor that’s not really required
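Point 14 (zeroing destination drives) can be rehearsed safely against a file-backed target; the verification step confirms that nothing but zero bytes remains (the path is illustrative — never point dd at a device without double-checking its identity):

```shell
# File-backed stand-in for a destination drive holding leftover data.
dd if=/dev/urandom of=dest.img bs=1M count=1 2>/dev/null

# Zero-fill the "drive" to prevent cross-contamination of data.
dd if=/dev/zero of=dest.img bs=1M count=1 conv=notrunc 2>/dev/null

# Verify: stripping NUL bytes must leave nothing behind.
test "$(tr -d '\0' < dest.img | wc -c)" -eq 0 && echo "destination is clean"
```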
5. Make sure you note the date and time of your forensic workstation when starting your analysis. If
precise time is an issue, consider using an Internet clock, such as the one at www.time.gov or
www.nist.gov/pml/div688/grp40/its.cfm (downloading nistime-32bit.exe), or an atomic clock to
verify the accuracy of your workstation’s clock. Many retailers, such as Walmart and Radio Shack,
now sell atomic clocks.
6. Keep only successful output when running analysis tools; don’t keep previous runs, such as those
missing necessary switch or output settings. Note that you used the tool, but it didn’t generate results
because of these missing settings.
7. When searching for keyword results, rerun searches with well-defined keywords and search
parameters. You might even want to state how they relate to the case, such as being business or
personal names. Narrow the search to reduce false hits, and eliminate search results containing false-
positive hits.
8. When taking notes of your findings, keep them simple and specific to the investigation. You should
avoid any personal comments so that you don’t have to explain them to opposing counsel.
9. When writing your report, list only the evidence that’s relevant to the case; do not include unrelated
findings.
10. Define any procedures you use to conduct your analysis as scientific and conforming to your
profession’s standards. Listing textbooks, technical books, articles by recognized experts, and
procedures from highly respected professional organizations that you relied on or referenced during
your examination is a common way to prove your conformity with scientific and professional
standards.
Recovering Internet Usage Data :
This section covers three major methods to recover browser/internet history
files: use the DNS cache to find deleted browsing history, use data recovery software to recover lost browsing
history files, or recover deleted history by using Google History. All three methods can be
applied to browsing history recovery on all major browsers, such as Chrome, Firefox, IE, and Edge. Let's
see how to recover lost or deleted browser internet history now.
Method 1: Use DNS Cache to find and view deleted browsing history
DNS (Domain Name System) caching can work as a fast way to recover recently visited sites. The DNS
cache, however, is held in memory and is cleared when the computer restarts, and it records only the
lookups made while the machine was connected to the internet. Therefore, if you need to recover
deleted browsing history, do not shut down or restart the computer. You may still have a chance to view
the deleted internet history:
1. Press Windows + R, type cmd and click OK. Or you can also type cmd in the Windows search bar.
2. In the Command Prompt, type ipconfig /displaydns and press Enter. All your recently visited
websites will then be displayed. You can view all your recent browsing history and find those important
websites again.
Method 2: Use data recovery software to recover lost browsing history files
If you don't know where to find your saved computer browsing history, please follow next path to
see whether the history files are deleted or not now:
Google Chrome: C:\Users\(username)\AppData\Local\Google\Chrome\User Data\Default\History
Mozilla Firefox: C:\Users\(username)\AppData\Roaming\Mozilla\Firefox\Profiles\
Internet Explorer: C:\Users\(username)\AppData\Local\Microsoft\Windows\History
Browser history is saved on your computer as ordinary files, so when you delete the
browsing history from the browser, you delete the history files from your computer. You still have a
chance to restore the deleted browsing history files by using professional data recovery software, such as
EaseUS Data Recovery Wizard, which can recover deleted files including the browsing history data saved
on your computer. Only three steps do the recovery job: launch the software > choose the location and
scan > recover the found browser/internet history data.
Method 3. Recover deleted browsing history from Google History
If you have a Google Account and were signed in while browsing, you have a
good chance of finding and recovering browser/internet history. When history is deleted from the
browser, Google History is not deleted. It stores all browsing history, including all pages you've ever
visited and even the devices attached to your Google Account.
Go to Google History and sign in with your Google account.
All of your browser/internet history will then be displayed along with the date and time, so even history
that was carelessly deleted from the browser can be reviewed there.
Extra Tip: Restoring deleted/lost Chrome history on an Android phone
If you happen to lose your website browsing history or delete history on an Android phone, don't worry.
If you have turned Google sync on, it is easy to find the lost website browsing history:
Open a webpage in Chrome;
Open this page: https://www.google.com/settings/...
RECOVERING TEMPORARY / CACHE / SWAP FILES :
RECYCLE BIN
The “trash can” has been a familiar presence on our computer desktops starting with the early
Macintosh systems. It’s a really good idea, especially from the casual user’s perspective. Users may not
understand sectors and bytes, but most everyone “gets” the trash can. Sometimes, though, the trash can
“gets” them. This is especially true when they count on the trash can to erase their evidence. They assume
that their incriminating data have disappeared into a digital “Bermuda Triangle,” never again to see the
light of day. Unlike Amelia Earhart, that’s definitely not the case. Using forensic tools such as Forensic
Toolkit and EnCase, we can quite often bring those files back in mint condition.
THUMBNAIL CACHE
To make it easier to browse the pictures on your computer, Windows creates smaller versions of
your photos called thumbnails. Thumbnails are just miniaturized versions of their larger counterparts.
These miniatures are created automatically by Windows when the user chooses “Thumbnail” view when
using Windows Explorer. Windows creates a couple of different kinds of thumbnail files, depending on
the version being used. Windows XP creates a file called thumbs.db. Microsoft Vista and Windows 7
create a similar file called thumbcache.db.
Most users are completely unaware that these files even exist. The cool thing about these files is
that they remain even after the original images have been deleted. Even if we don’t recover the original
image, thumbnails can serve as the next best evidence. Their mere existence tells us that those pictures
existed at one point on the system.
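During an examination, thumbnail caches can be located by searching a mounted image for their well-known file names. The sketch below uses a scratch directory as a stand-in for a mounted evidence image; the directory layout and user name are invented for illustration, and on a real case you would point find at the actual mount point.

```shell
# Scratch directory standing in for a mounted evidence image
evidence=$(mktemp -d)
mkdir -p "$evidence/Users/bob/Pictures"
touch "$evidence/Users/bob/Pictures/Thumbs.db"   # XP-style thumbnail cache

# Search the whole image for thumbnail caches, case-insensitively;
# thumbcache_*.db covers the Vista/Windows 7 naming.
find "$evidence" -type f \( -iname 'thumbs.db' -o -iname 'thumbcache_*.db' \)
```

Even when the original pictures are gone, hits from a search like this show that images existed on the system at one point.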
Link: https://www.prodiscover.com
Link: https://www.sleuthkit.org
CAINE
CAINE is an Ubuntu-based app that offers a complete forensic environment with a graphical
interface. This tool can be integrated into existing software tools as a module. It automatically extracts a
timeline from RAM.
Features:
1. It supports the digital investigator during the four phases of the digital investigation.
2. It offers a user-friendly interface.
3. You can customize features of CAINE.
4. This software offers numerous user-friendly tools.
Link: https://www.caine-live.net
PALADIN
PALADIN is an Ubuntu-based tool that enables you to simplify a range of forensic tasks. This digital
forensics software provides more than 100 useful tools for investigating any malicious material. This tool
helps you to simplify your forensic task quickly and effectively.
Features:
1. It provides both 64-bit and 32-bit versions.
2. This tool is available on a USB thumb drive.
3. This toolbox has open-source tools that help you to search for the required information effortlessly.
4. This tool has more than 33 categories that assist you in accomplishing a cyber forensic task.
Link: https://sumuri.com/software/paladin/
EnCase
Encase is an application that helps you to recover evidence from hard drives. It allows you to conduct an
in-depth analysis of files to collect proof like documents, pictures, etc.
Features:
1. You can acquire data from numerous devices, including mobile phones, tablets, etc.
2. It is one of the best mobile forensic tools that enables you to produce complete reports for
maintaining evidence integrity.
3. You can quickly search, identify, as well as prioritize evidence.
4. Encase-forensic helps you to unlock encrypted evidence.
5. It is one of the best digital forensics tools that automates the preparation of evidence.
6. You can perform deep and triage (severity and priority of defects) analysis.
Link: https://www.guidancesoftware.com/encase-forensic
SIFT Workstation
SIFT Workstation is a computer forensics distribution based on Ubuntu. It is one of the best computer
forensic tools that provides a digital forensic and incident response examination facility.
Features:
1. It can work on a 64-bit operating system.
2. This tool helps users to utilize memory in a better way.
3. It automatically updates the DFIR (Digital Forensics and Incident Response) package.
4. You can install it via SIFT-CLI (Command-Line Interface) installer.
5. This tool contains numerous latest forensic tools and techniques.
Link: https://www.sans.org/tools/sift-workstation/
FTK Imager
FTK Imager is a forensic toolkit developed by AccessData that can be used to acquire evidence. It can create
copies of data without making changes to the original evidence. This tool allows you to specify criteria,
like file size, pixel size, and data type, to reduce the amount of irrelevant data.
Features:
1. It provides a wizard-driven approach to detect cybercrime.
2. This program offers better visualization of data using a chart.
3. You can recover passwords from more than 100 applications.
4. It has an advanced and automated data analysis facility.
5. FTK Imager helps you to manage reusable profiles for different investigation requirements.
6. It supports pre and post-processing refinement.
Link: https://accessdata.com/products-services/forensic-toolkit-ftk
Magnet RAM capture
Magnet RAM capture records the memory of a suspected computer. It allows investigators to recover and
analyze valuable items which are found in memory.
Features:
1. You can run this app while minimizing overwritten data in memory.
2. It enables you to export captured memory data and upload it into analysis tools like magnet AXIOM
and magnet IEF.
3. This app supports a vast range of Windows operating systems.
4. Magnet RAM capture supports RAM acquisition.
Link: https://www.magnetforensics.com/resources/magnet-ram-capture/
X-Ways Forensics
X-Ways is software that provides a work environment for computer forensic examiners. This program
supports disk cloning and imaging. It enables you to collaborate with other people who have this tool.
Features:
1. It has the ability to read partitioning and file system structures inside .dd image files.
2. You can access disks, RAIDs (Redundant array of independent disk), and more.
3. It automatically identifies lost or deleted partitions.
4. This tool can easily detect NTFS (New Technology File System) and ADS (Alternate Data Streams).
5. X-Ways Forensics supports bookmarks or annotations.
6. It has the ability to analyze remote computers.
7. You can view and edit binary data by using templates.
8. It provides write protection for maintaining data authenticity.
Link: http://www.x-ways.net/forensics/
Wireshark
Wireshark is a tool that analyzes network packets. It can be used for network testing and
troubleshooting. This tool helps you to inspect the different kinds of traffic going through your computer system.
Features:
1. It provides rich VoIP (Voice over Internet Protocol) analysis.
Link: https://www.wireshark.org
Registry Recon
Registry Recon is a computer forensics tool used to extract, recover, and analyze registry data from
Windows OS. This program can be used to efficiently determine external devices that have been
connected to any PC.
Features:
1. It supports Windows XP, Vista, 7, 8, 10, and other operating systems.
2. This tool automatically recovers valuable NTFS data.
3. You can integrate it with the Microsoft Disk Manager utility tool.
4. It quickly mounts all VSCs (Volume Shadow Copies) within a disk.
5. This program rebuilds the active registry database.
Link: https://arsenalrecon.com/products/
Volatility Framework
Volatility Framework is software for memory analysis and forensics. It is one of the best Forensic imaging
tools that helps you to test the runtime state of a system using the data found in RAM. This app allows
you to collaborate with your teammates.
Features:
1. It has an API that allows quick lookups of PTE (Page Table Entry) flags.
2. Volatility Framework supports KASLR (Kernel Address Space Layout Randomization).
3. This tool provides numerous plugins for checking Mac file operation.
4. It automatically runs Failure command when a service fails to start multiple times.
Link: https://www.volatilityfoundation.org
Xplico
Xplico is an open-source forensic analysis app. It supports HTTP (Hypertext Transfer Protocol), IMAP
(Internet Message Access Protocol), and more.
Features:
1. You can get your output data in the SQLite database or MySQL database.
2. This tool gives you real-time collaboration.
3. No size limit on data entry or the number of files.
4. You can easily create any kind of dispatcher to organize the extracted data in a useful way.
5. It is one of the best open source forensic tools that support both IPv4 and IPv6.
6. You can perform reverse DNS lookups from the DNS packets contained in input files.
7. Xplico provides a PIPI (Port Independent Protocol Identification) feature to support digital forensics.
Link: https://www.xplico.org
e-fense
E-fense is a tool that helps you to meet your computer forensics and cybersecurity needs. It allows you to
discover files from any device in one simple to use interface.
Features:
1. It gives protection from malicious behavior, hacking, and policy violations.
2. You can acquire internet history, memory, and screen capture from a system onto a USB thumb drive.
3. This tool has a simple to use interface that enables you to achieve your investigation goal.
4. E-fense supports multithreading, which means you can execute more than one thread simultaneously.
Link: http://www.e-fense.com/products.php
Crowdstrike
Crowdstrike is digital forensic software that provides threat intelligence, endpoint security, etc. It can
quickly detect and recover from cybersecurity incidents. You can use this tool to find and block attackers
in real time.
Features:
1. It is one of the best cyber forensics tools that help you to manage system vulnerabilities.
2. It can automatically analyze malware.
3. You can secure your virtual, physical, and cloud-based data center.
Link: https://www.crowdstrike.com/endpoint-security-products/falcon-endpoint-protection-pro/
Write-Blocker
The first item you should consider for a forensic workstation is a write-blocker. Write-blockers
protect evidence disks by preventing data from being written to them. Software and hardware write-
blockers perform the same function but in a different fashion.
Software write-blockers, such as PDBlock from Digital Intelligence, typically run in a shell mode
(such as a Windows CLI). PDBlock changes interrupt 13 of a workstation’s BIOS to prevent writing to the
specified drive. If you attempt to write data to the blocked drive, an alarm sounds, advising that no writes
have occurred. PDBlock can run only in a true DOS mode, however, not in a Windows CLI.
With hardware write-blockers, you can connect the evidence drive to your workstation and start
the OS as usual. Hardware write-blockers, which act as a bridge between the suspect drive and the
forensic workstation, are ideal for GUI forensics tools. They prevent Windows or Linux from writing data
to the blocked drive. In the Windows environment, when a write-blocker is installed on an attached drive,
the drive appears as any other attached disk. You can navigate to the blocked drive with any Windows
application, such as File Explorer, to view files or use Word to read files. When you copy data to the
blocked drive or write updates to a file with Word, Windows shows that the data copy is successful.
However, the write-blocker actually discards the written data—in other words, data is written to null.
When you restart the workstation and examine the blocked drive, you won’t see the data or files you
copied to it previously.
Many vendors have developed write-blocking devices that connect to a computer through
FireWire, USB 2.0 and 3.0, SATA, PATA, and SCSI controllers. Most of these write-blockers enable you to
remove and reconnect drives without having to shut down your workstation, which saves time in
processing the evidence drive. For more information on write-blocker specifications, visit
www.cftt.nist.gov. The following vendors offer write-blocking devices:
1. www.digitalintelligence.com
2. www.forensicpc.com
3. www.guidancesoftware.com
4. www.voomtech.com
5. www.mykeytech.com
6. www.lc-tech.com
7. www.logicube.com
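A standard way to confirm that a write-blocker is doing its job is to hash the evidence drive before and after the examination; if the hashes match, no data was altered. Below is a minimal sketch of that verification step, using an ordinary file as a stand-in for the blocked drive; in real work you would hash the device node (for example, /dev/sdb) attached through the write-blocker.

```shell
# Stand-in for an evidence drive; in practice this would be a device
# node (e.g., /dev/sdb) sitting behind a hardware write-blocker.
img=$(mktemp)
printf 'evidence data' > "$img"

before=$(md5sum "$img" | awk '{print $1}')
# ... examination happens here; the blocker discards any writes ...
after=$(md5sum "$img" | awk '{print $1}')

# Matching hashes demonstrate the evidence was not changed
[ "$before" = "$after" ] && echo "hashes match: $before"
```

The same before-and-after comparison is routinely recorded in the case documentation to support the chain of custody.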
Introduction to Log Analysis
This section covers how to utilize log analysis for investigative purposes in digital forensic cases. For
log analysis, log files are grouped into two main categories that can be explored by a forensic
investigator:
1. Logs from Network Devices and Security Devices (Routers, Switches, IDS, Firewalls, Proxies, NGFW,
WAF, etc)
2. Logs from the Endpoint side (Server, Desktop, etc)
Log analysis on the endpoint side can involve event logs from the operating system, logs from
applications, logs from databases, and others. Basically, when an investigator is working a security
incident, the most frequently asked question is whether the logs are still available, and if the answer is
yes, which logs can be obtained from the system?
From the log files in general, an investigator will see an overview of the timeline of activities and
events that occurred on the endpoint side during the incident. The method used by a digital forensic
investigator is similar to what a detective does at a crime scene: the investigator looks at activities
before a security incident happens to see which activities involve the threat actor, and then collects the
evidence.
But there are cases where, while conducting an analysis, the digital forensic investigator finds that
threat actors have deleted or wiped the logs to eliminate their tracks (covering tracks). This is the
importance of log management: by aggregating logs from endpoints (and other devices) into a
log-management or SIEM platform, investigators can still analyze the aggregated copies even when the
threat actors have wiped the local logs.
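As a simple illustration of this kind of endpoint log analysis, the sketch below greps a small syslog-style excerpt for failed logons and counts them per source address — the sort of query an investigator would run against logs aggregated in a SIEM even after the local copies were wiped. The host name, IP address, and log entries are all invented for the example.

```shell
# Hypothetical excerpt of aggregated authentication logs (all values invented)
log=$(mktemp)
cat > "$log" <<'EOF'
Mar 10 09:12:01 host1 sshd[402]: Failed password for root from 203.0.113.7 port 51112 ssh2
Mar 10 09:12:05 host1 sshd[402]: Failed password for root from 203.0.113.7 port 51114 ssh2
Mar 10 09:12:11 host1 sshd[410]: Accepted password for root from 203.0.113.7 port 51120 ssh2
EOF

# Count failed logons per source IP to spot brute-force activity;
# the IP is the fourth field from the end of each sshd line.
grep 'Failed password' "$log" | awk '{print $(NF-3)}' | sort | uniq -c
```

A failed-logon burst followed by a successful logon from the same address, as in this excerpt, is a classic indicator worth following up in the timeline.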
Critical Log Review in DFIR Process
There are several log elements that are quite critical and usually become a concern for the DFIR
team analyzing the logs of a security incident. A critical log review highlights the important logs in
order to find sources of information about the security incident that occurred.
Windows Operating System
1. Application logs from event viewer.
2. Security logs from event viewer.
3. System logs from event viewer.
1. Service Created, New Service Installed, Service Start, Service Stop: usually related to a persistence
mechanism.
2. User Account Added, User Account Modified, Add User to Group: also related to a persistence
mechanism used by the attacker.
3. Disable Firewall, Stop Security Services (such as AV, HIPS, or other endpoint protection): related to
attacker activity in preparation for further movement.
4. USB Log: incident cases such as data theft or fraud may need this kind of USB log to identify USB
storage access on the system.
The author includes the following references and sources, which can be very useful,
especially when investigating security incidents:
1.https://www.malwarearchaeology.com/cheat-sheets/
The URL above provides a variety of detailed information about the intricacies of logs on the Windows
platform. You can get very valuable information there. The website's author also includes a mapping
between Windows logs and the MITRE ATT&CK Framework. ATT&CK is a framework that studies the
TTPs (Tactics, Techniques, and Procedures) of threat actors, so this mapping makes it easier for
investigators to understand the patterns commonly used by threat actors and which source logs or log
locations can be used as references when analyzing those TTPs.
2.https://www.ultimatewindowssecurity.com/
The above website is one of the author's reference sources for Windows Event IDs. As we all know,
there are a great many Windows Event IDs and types for each of them, so if you have difficulty
memorizing them, or often forget Event IDs that do not appear in the common logs in the Windows
Event Viewer, you can use that website as a reference. It can be used to learn about Windows Event IDs
in more detail, and it also provides a cheat sheet highlighting the Event IDs that often correlate with the
activities of threat actors in security incidents.
3.https://www.jpcert.or.jp/english/pub/sr/ir_research.html
The URL above is the research from the JP CERT Team (Japan Computer Emergency Response Team)
regarding the detection of lateral movement by threat actors using event logs. The research published
by JP CERT is very interesting, especially its focus on the tools and TTPs used by threat actors when
conducting lateral movement.
4.https://www.eurovps.com/blog/important-linux-log-files-you-must-be-monitoring/
The website above provides a reference on noteworthy Linux logs, especially for system
administrators and infosec officers. It is enough to help you learn, in more detail, about the activities
those logs record during a security incident.
/dev - Device files that act as stand-ins for the devices they represent, as described in Chapter 3; for
example, /dev/sda is the first non-IDE disk drive on the system, usually the main hard drive.
/var - Subdirectories such as log (often useful for investigations), mail (storing e-mail accounts), and
spool (where print jobs are spooled).
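On a live Linux system, the forensic value of /var can be confirmed quickly. The listing below assumes a conventional distribution layout; exact subdirectory names and contents vary between distributions.

```shell
# Inspect the /var subdirectories most often useful in investigations
ls -ld /var/log /var/spool 2>/dev/null

# Peek at the first few entries under /var/log on this system
ls /var/log 2>/dev/null | head -n 5
```

In an examination you would normally work from a mounted image of the suspect drive rather than the live /var tree, but the same paths apply.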
In this section, you use standard commands to find information about your Linux system. Most of
the commands used in this activity work the same in all UNIX-like OSs, including Mac OSs. Remember
that UNIX and Linux commands are case sensitive. The wrong capitalization can mean your commands
are rejected as incorrect or interpreted as something different. If you don’t have Ubuntu 16.04 installed,
follow these steps to create a virtual machine for running it.
1. Start VirtualBox, and click the New icon at the upper left to start the Create Virtual Machine Wizard.
2. In the Name and operating system window, type Ubuntu 16.04 for the virtual machine name. Accept
the default settings, and click Next.
3. In the Memory size window, increase the setting to 1024, and then click Next.
4. In the Hard drive window, click Create a virtual hard drive now, and then click Create. In the Hard
drive file type window, click Virtual Machine Disk (VMDK), and then click Next. In the “Storage on
physical hard drive” window, click the Dynamically allocated option button, and then click Next.
5. In the File location and size window, increase the setting to 20 GB, and then click Create. Leave
VirtualBox open.
6. Start a Web browser, go to www.ubuntu.com/download/desktop, and download the ISO image for
Ubuntu 16.04.
7. In the Oracle VM VirtualBox Manager, click the Ubuntu 16.04 virtual machine, and then click the
Settings icon.
8. Click Storage in the left pane. In the Storage Tree section, click Empty under Controller: IDE. In the
Attributes section on the right, click the CD icon (see Figure 7-1). Click Choose Virtual Optical Disk
File. Navigate to the folder where the ISO file is stored, double-click the ISO file, and then click OK.
9. In the Oracle VM VirtualBox Manager, click the Ubuntu 16.04 virtual machine, and then click the
Start icon. The VM should follow a standard OS installation. Accept the default settings. Leave the
virtual machine running for the next activity.
Before moving on to working with Linux forensics tools, the following activity gives you a chance
to review some commands. For example, being able to find a machine’s name is always useful; the uname
command is used for this task. Displaying file listings and permissions is also useful for investigators. To
help with these tasks, you can use the > character to redirect the output of the command preceding it to a
file you specify. If the file exists, it’s overwritten with a new one; if the file doesn’t exist, it’s created. The
double >> adds output at the end of a specified file, if it already exists. For all the commands in the
following activity, you can see their output in the terminal window or add the output to your log file by
entering >> ~/my.log at the end of each command. (The ~ character specifies the current user’s home
directory.) Use the echo command to add notes or headings in the log, and add blank lines to make the
contents easier to read. Just don’t forget that a single > character replaces the entire file instead of
appending to it. You aren’t prompted that you’re overwriting the file.
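The difference between > and >> can be seen in a few lines; the file used here is a throwaway created just for the demonstration:

```shell
log=$(mktemp)
echo "first run"  >  "$log"   # > creates or overwrites the file
echo "second run" >> "$log"   # >> appends to the existing file
echo "third run"  >  "$log"   # > again: earlier contents are silently lost
cat "$log"                    # only "third run" remains
```

Note that the overwrite in the last echo happens without any prompt, which is exactly why a stray > in place of >> can destroy a log you have been building.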
As you’ve learned, Linux commands use options to create variations of a command. There’s no
difference between grouping letter arguments (such as l and a) or entering them separately. Therefore, ls
-la functions the same as ls -l -a. Arguments consisting of multiple letters must be preceded by two
hyphen characters instead of one and can't be grouped together, as in ls --all.
The pipe (|) character also redirects the output of the command preceding it. Unlike the >
character, however, it redirects output as input for the following command. As you see in this activity, the
output of the cat command (which would have displayed the entire file in the terminal window) is sent to
the grep command to search for occurrences of your username. The grep command then displays only
lines matching search criteria.
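Here is a self-contained version of the cat-into-grep pipeline described above, using a made-up passwd-style file so it can be run anywhere without touching the real /etc/passwd:

```shell
# Two invented passwd-style entries; grep keeps only the line that
# matches the username, just as cat /etc/passwd | grep user does.
pw=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\nalice:x:1000:1000::/home/alice:/bin/bash\n' > "$pw"
cat "$pw" | grep alice
```

Only the alice line is printed; the root entry is filtered out by grep.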
1. Start Ubuntu 16.04, if necessary. On the left side of the desktop are icons for different categories of
applications. You can use these desktop icons to select an application, or click the Ubuntu icon and
start typing the name of the application you want to have the system make a suggestion. Type term
(in this case, to suggest opening the terminal window), and click the Terminal icon.
2. To find the name of your computer and the Linux kernel revision number, type uname -a and press
Enter. To record the results in a file, type uname -a > ~/my.log and press Enter. Nothing is displayed
in the terminal window, but a file called my.log is created in your user profile folder, and the output of
the uname -a command is redirected to it.
3. To identify your current path, type pwd (which stands for “print working directory”) and press Enter.
In a new terminal window, it’s likely the user’s home directory.
4. To see a list of the directory’s contents, type ls and press Enter. For comparison, try typing ls -l and
pressing Enter, and then typing ls -la and pressing Enter. (Note: In listings, all files beginning with the
. character are usually omitted, unless you add the a option, which stands for “all.”)
5. To record the full listing in the same log file you created earlier, type echo "" >> ~/my.log and press
Enter, and then type echo "Full listing:" >> ~/my.log and press Enter. Finally, type ls -la >> ~/my.log
and press Enter. These commands add a blank line, followed by the heading Full listing:, and finally
the listing of the directory’s contents in your log file.
6. To see the updated contents of your log file, type cat ~/my.log and press Enter.
7. Type ifconfig and press Enter to see your network interfaces: wired, wireless, FireWire, lo (the
loopback device), and so forth. They’re displayed with their MAC addresses (in the “HWaddr”
column) and currently assigned IP addresses (in the “inet addr” column). Try the same command
with -a, and observe the difference in the output. Append the output of this command to your log file.
8. Navigate to the root directory by typing cd / and pressing Enter. Confirm that you’re at the top of the
directory tree by typing pwd and pressing Enter.
9. To identify the username you’re currently using, type whoami and press Enter.
10. To see a listing of all user accounts configured on the system, type sudo cat /etc/passwd and press
Enter, and then type the password and press Enter. The output displays the contents of the user
account configuration file, passwd. It contains the superuser account “root,” the regular user account
you’re currently using, and a long list of system accounts for system services, such as lp, sys, daemon,
and sync. For each account, you see the username, numeric user and group IDs, possibly a formatted
display name, the home directory (which is /root for the superuser), and the standard command
shell, which is usually /bin/bash for regular and root users.
11. To see just the information for your user account, type cat /etc/passwd | grep user (replacing user
with your own username) and press Enter.
12. Append the /etc/passwd file to your log file by typing cat /etc/passwd >> ~/my.log and pressing
Enter. The /etc/passwd file doesn’t contain user passwords, although it used to store hashed
passwords. Because everyone can read this file, storing even hashed passwords was considered a
security risk, so they were moved to the /etc/shadow file, which can be accessed only by the root user.
13. To get a detailed listing of the /etc/shadow file, type ls -l /etc/shadow and press Enter. If permission
is denied, repeat this command preceded by sudo.
14. Type sudo cat /etc/shadow and press Enter, and then type the password and press Enter. The file’s
contents are shown, but only regular user accounts contain a password hash. You should see this
information for your user account.
15. To append just the entry for your user account to your log file, type sudo cat /etc/shadow | grep user
>> ~/my.log (replacing user with your username) and press Enter. This command redirects the
output of cat as input to grep, which leaves only the line containing your username, and then appends
it to your log file. You can have multiple | pipes in a single command but only one redirection to a file
(using > or >>) because the file is like a dead end: there can be no output after it’s redirected to a
file.
16. Close the terminal window by typing exit and pressing Enter, and leave the virtual machine running.
Next, you examine deconstructing password hash values in the etc/shadow file. The entries in
/etc/shadow are separated by colons. The first field is the username, and the second is the password
hash, if available. (For more details, see www.cyberciti.biz/faq/understanding-etcshadow-file/.) The
remaining fields are numeric settings, including the maximum time before a password must be changed.
Take a look at a typical password hash field:
$digit$ShortHashString$LongHashString
It begins with a $ symbol, followed by a digit representing the hashing algorithm (which is 6 for
SHA-512). Next is another $ symbol followed by a short hash string, which is the password salt, used to
make password hashes different even if two users have the same password. Finally, there’s another $
symbol followed by a long hash string, which is the salted password hash. Even though passwords aren’t
stored in plaintext, two users having the same password normally results in identical hashes, which could
make cracking passwords easier. In addition, without password salting, it’s possible for others to create
rainbow tables to look up passwords.
The salt and hash are stored in an encoded format with letters, numbers, dots, and slashes that’s
similar to base-64 encoding. Assuming the password hash field starts with $6$, meaning SHA-512 is being
used, you can use the following command to find a salted password hash, replacing ShortHashString and
password with the information from your own entry in the /etc/shadow file:
mkpasswd --method=sha-512 --salt=ShortHashString password
This command returns the salted password hash and is used internally by the OS to check
whether the correct password was entered. However, knowing how password hash values are created is
helpful in case you need to attempt cracking passwords.
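mkpasswd comes from the whois package and isn't installed everywhere; openssl can compute the same SHA-512 crypt hash, which is handy for checking your understanding of the $digit$salt$hash layout. The salt and password below are illustrative values, not taken from any real /etc/shadow entry.

```shell
# Same salt + same password always yields the same salted hash,
# which is how the OS verifies a login attempt.
openssl passwd -6 -salt saltsalt password123
```

The output begins with $6$saltsalt$, confirming the algorithm digit (6 for SHA-512) and salt fields described above, followed by the long salted hash string.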
The only pieces of metadata not in an inode are the filename and path. Inodes contain
modification, access, and creation (MAC) times, not filenames. An assigned inode has 13 pointers that link
to data blocks and other pointers where files are stored. Pointers 1 through 10 link directly to data storage
blocks in the disk’s data block and contain block addresses indicating where data is stored on the disk.
These pointers are direct pointers because each one is associated with one block of data storage.
As a file grows, the OS provides up to three layers of additional inode pointers. The pointers in the
first additional layer are called indirect pointers, the pointers in the second layer are called
double-indirect pointers, and the pointers in the last or third layer are called triple-indirect pointers.
To expand storage allocation, the OS initiates the original inode’s 11th pointer, which links to 128
pointer inodes. Each pointer links directly to 128 blocks located in the drive’s data block. If all 10 pointers
in the original inode are consumed with file data, the 11th pointer links to another 128 pointers. The first
pointer in this indirect group of inodes points to the 11th block. The last block of these 128 inodes is block
138.
If more storage is needed, the 12th pointer of the original inode is used to link to another 128 inode
pointers. From each of these pointers, another 128 pointers are created. This second layer of inode
pointers is then linked directly to blocks in the drive’s data block. The first block these double-indirect
pointers point to is block 139. If more storage is needed, the 13th pointer links to 128 pointer inodes, each
pointing to another 128 pointers, and each pointer in this second layer points to a third layer of 128
pointers. File data is stored in these data blocks.
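Using the figures above (10 direct pointers, then three indirect layers of 128 pointers each), the maximum number of addressable data blocks can be checked with shell arithmetic:

```shell
direct=10                    # pointers 1-10 in the inode
single=128                   # 11th pointer -> 128 blocks
double=$((128 * 128))        # 12th pointer -> 16,384 blocks
triple=$((128 * 128 * 128))  # 13th pointer -> 2,097,152 blocks
echo $((direct + single + double + triple))
```

The total, 2,113,674 blocks, shows how the three indirect layers dominate the file-size limit; multiplying by the block size gives the maximum file size for this layout.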
All disks have more storage capacity than the manufacturer states. For example, a 240 GB disk
might actually have 240.5 GB free space because disks always have bad sectors. Windows doesn’t keep
track of bad sectors, but Linux does in an inode called the bad block inode. The root inode is inode 2, and
the bad block inode is inode 1. Some forensics tools ignore inode 1 and fail to recover valuable data for
cases. Someone trying to mislead an investigator can access the bad block inode, list good sectors in it, and
then hide information in these supposedly “bad” sectors. To find bad blocks on your Linux computer, you
can use the badblocks command, although you must log in as root to do so. Linux includes two other
commands that supply bad block information: mke2fs and e2fsck. The badblocks command can destroy
valuable data, but the mke2fs and e2fsck commands include safeguards that prevent them from
overwriting important information.
byte 510 is the logical EOF. The physical EOF is the number of bytes allotted on the volume for a file, so for
file B, it’s byte 1023.
In macOS, file fragmentation is reduced by using clumps, which are groups of contiguous
allocation blocks. As a file increases in size, it occupies more of the clump. Volume fragmentation is kept
to a minimum by adding more clumps to larger files.
For older HFS-formatted drives, the first two logical blocks, 0 and 1, on the volume (or disk) are
the boot blocks containing system startup instructions. Optional executable code for system files can also
be placed in boot blocks. Older Mac OSs use the Master Directory Block (MDB) for HFS, which is the
Volume Information Block (VIB) in HFS+. All information about a volume is stored in the MDB and
written to it when the volume is initialized. A copy of the MDB is also written to the next-to-last block on
the volume to support disk utility functions. When the OS mounts a volume, some information from the
MDB is written to a Volume Control Block (VCB), stored in system memory. When a user no longer needs
the volume and unmounts it, the VCB is removed.
The copy of the MDB is updated when the extents overflow file or catalog increases in size. The
extents overflow file is used to store any file information not in the MDB or a VCB. The catalog is the
listing of all files and directories on the volume and is used to maintain relationships between files and
directories on a volume. Volume Bitmap, a system application, tracks each block on a volume to
determine which blocks are in use and which ones are available to receive data. It has information about
the blocks’ use but not about their content. Volume Bitmap’s size depends on the number of allocated
blocks for the volume. File-mapping information is stored in two locations: the extents overflow file and
the file’s catalog entry. In earlier Mac versions, the B*-tree file system is also used to organize the
directory hierarchy and file block mapping. In this file system, files are nodes (records or objects)
containing file data. Each node is 512 bytes. The nodes containing actual file data are called leaf nodes;
they’re the bottom level of the B*-tree. The B*-tree also has the following nodes that handle file
information:
1. The header node stores information about the B*-tree file.
2. The index node stores link information to previous and next nodes.
3. The map node stores a node descriptor and map record.
Forensic Procedures in macOS:
Although understanding Linux file structures can help you learn about macOS file structures,
there are some differences between the Linux and macOS file systems. For example, Linux has the
/home/username and /root directories. In macOS, the corresponding folders are /users/username
and /private/var/root.
The /home directory exists in macOS, but it’s empty. In addition, macOS users have limited access
to other user accounts’ files, and the guest account is disabled by default. If it’s enabled, it has no
password, and guest files are deleted at logout. For forensics procedures in macOS, you must know where
file system components are located and how both files and file components are stored. Application
settings are in three formats: plaintext, plist files (which include plain XML plists and binary plists, which
are condensed XML), and the SQLite database. Plaintext files, of course, can be viewed in any text editor.
Plist files are preference files for installed applications on a system, usually stored in
/Library/Preferences. To view them, you use special editors, such as the one available at the Apple
the hard drive easier. BlackBag Technologies sells acquisition tools for OS 9 and OS X and offers a forensic
boot CD called MacQuisition for making a drive image (see
www.blackbagtech.com/software-products/macquisition-2/macquisition.html). It also offers some free
tools for forensics examiners (www.blackbagtech.com/resources/freetools.html).
After making an acquisition, the next step is examining the image of the file system with a
forensics tool. The tool you use depends on the image file’s format. For example, if you used EnCase, FTK,
or X-Ways Forensics to create an Expert Witness (.e01) image, you must use one of these tools to analyze
the image. If you made a raw format image, you can use any of the following tools:
1. BlackBag Technologies Macintosh Forensic Software (OS X only)
2. SubRosaSoft MacForensicsLab (OS X only)
3. Guidance Software EnCase
4. Recon Mac OS X Forensics with Paladin (https://sumuri.com/software/recon/)
5. X-Ways Forensics
6. AccessData FTK
BlackBag Technologies Macintosh Forensic Software and SubRosaSoft MacForensicsLab have a function for disabling and enabling Disk Arbitration, the macOS feature that automatically mounts a drive when it's connected via a USB or FireWire device (see www.appleexaminer.com). Being able to turn off the mount function in macOS allows you to connect a suspect drive to a Mac without a write-blocking device.
Understanding Metafile Graphics
Metafile graphics combine raster and vector graphics and can have the characteristics of both file
types. For example, if you scan a photograph (a bitmap image) and then add text or arrows (vector
drawings), you create a metafile graphic. Although metafile graphics have the features of both bitmap
and vector files, they share the limitations of both. For example, if you enlarge a metafile graphic, the area
created with a bitmap loses some resolution, but the vector-formatted area remains sharp and clear.
Understanding Graphics File Formats
Graphics files are created and saved in a graphics editor, such as Microsoft Paint, Adobe Freehand
MX, Adobe Photoshop, or Gnome GIMP. Some graphics editors, such as Freehand MX, work only with
vector graphics, and some programs, such as Photoshop, work with both.
Recovering Graphics Files
Most graphics editors enable you to create and save files in one or more of the standard graphics
file formats. Standard bitmap file formats include Portable Network Graphic (.png), Graphics Interchange
Format (.gif), Joint Photographic Experts Group (.jpg or .jpeg), Tagged Image File Format (.tif or .tiff), and
Windows Bitmap (.bmp). Standard vector file formats include Hewlett-Packard Graphics Language
(.hpgl) and AutoCAD (.dxf). Nonstandard graphics file formats include less common formats, such as
Targa (.tga) and Raster Transfer Language (.rtl); proprietary formats, such as Photoshop (.psd),
Illustrator (.ai), and Freehand (.fh11); newer formats, such as Scalable Vector Graphics (.svg); and old or
obsolete formats, such as Paintbrush (.pcx). Because you can open standard graphics files in most or all
graphics programs, they are easier to work with in a digital forensics investigation.
If you encounter files in nonstandard formats, you might need to rely on your investigative skills
to identify the file as a graphics file, and then find the right tools for viewing it. To determine whether a
file is a graphics file and to find a program for viewing a nonstandard graphics file, you can search the
Web or consult a dictionary Web site. For example, suppose you find a file with a .tga extension during an
investigation. None of the programs on your forensic workstation can open the file, and you suspect it
could provide crucial evidence. To learn more about this file format, see
www.garykessler.net/library/file_sigs.html.
Most digital devices store graphics files as Exif JPEG files. In addition, if the device has GPS capability, the
latitude and longitude location data might be recorded in the Exif section of the file.
Because the Exif format collects metadata, investigators can learn more about the type of digital
device and the environment in which photos were taken. Viewing an Exif JPEG file’s metadata requires
special programs, such as Exif Reader (www.takenet.or.jp/~ryuuji/minisoft/exifread/english/),
IrfanView (www.irfanview.com), or Magnet Forensics AXIOM (www.magnetforensics.com), which has a
built-in Exif viewer. Originally, JPEG and TIF formats were designed to store only digital photo data. Exif
is an enhancement of these formats that modifies the beginning of a JPEG or TIF file so that metadata can
be inserted. In the similar photos in Figure 8-1, the one on the left is an Exif JPEG file, and the one on the
right is a standard JPEG file.
All JPEG files, including Exif, start from offset 0 (the first byte of a file) with hexadecimal FFD8.
The current standard header for regular JPEG files is JPEG File Interchange Format (JFIF), which has the
hexadecimal value FFE0 starting at offset 2. For Exif JPEG files, the hexadecimal value starting at offset 2
is FFE1. In addition, the hexadecimal values at offset 6 specify the label name (refer to Figure 8-2). For all
JPEG files, the ending hexadecimal marker, also known as the end of image (EOI), is FFD9.
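These signature values can be checked programmatically. The following sketch (using synthetic header bytes rather than real image files) classifies a byte stream by the markers described above:

```python
def classify_jpeg(data: bytes) -> str:
    """Classify a JPEG by its signature bytes.

    All JPEGs start with the SOI marker FF D8 at offset 0 and end with
    the EOI marker FF D9. The application marker at offset 2 is FF E0
    for JFIF (standard) files and FF E1 for Exif files; the label name
    begins at offset 6.
    """
    if data[:2] != b"\xff\xd8":
        return "not a JPEG"
    marker = data[2:4]
    if marker == b"\xff\xe1" and data[6:10] == b"Exif":
        return "Exif JPEG"
    if marker == b"\xff\xe0" and data[6:10] == b"JFIF":
        return "JFIF (standard) JPEG"
    return "JPEG (other/unknown APP marker)"

# Minimal synthetic headers for illustration only:
exif_hdr = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00"
jfif_hdr = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01"
print(classify_jpeg(exif_hdr))  # Exif JPEG
print(classify_jpeg(jfif_hdr))  # JFIF (standard) JPEG
```

This is the same signature check a forensic tool performs when it ignores file extensions and identifies files by content.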
With tools such as Autopsy and Exif Reader, you can extract metadata as evidence for your case.
As you can see in Figure 8-4, Autopsy shows that the picture was taken on July 10, 2017, at 5:50 p.m. PDT.
As in any digital forensics investigation, determining date and time for a file is important. Getting this
information might not be possible, however, for a variety of reasons, such as suspects losing cameras after
transferring photo files to their computers. You should list this type of evidence as subjective in your
report because intentional and unintentional acts make date and time difficult to confirm. For example,
suspects could alter a camera’s clock intentionally to record an incorrect date and time when a picture is
taken. An unintentional act could be the battery or camera’s electronics failing, for example, which causes
an incorrect date and time to be recorded. When you’re dealing with date and time values in Exif
metadata, always look for corroborating information, such as where the picture was taken or whether the
device is set to Coordinated Universal Time (abbreviated as UTC), to help support what you find in
metadata.
For example, the photograph in Figure 8-1 was taken in Santa Fe, New Mexico, on September 10, 2013. If the camera's date and time had been set to UTC, you would need to adjust for local time. In September, Santa Fe's local time is Mountain Daylight Time (MDT), which is 6 hours behind UTC. So a recorded time of 1:09 a.m. UTC on September 11 would correspond to an actual local time of 7:09 p.m. MDT on September 10. Because 7:09 p.m. is early evening, you should determine
when sunset occurred on that date by using online tools, such as Time and Date
(www.timeanddate.com/worldclock/sunrise.html) or SunriseSunset
(www.sunrisesunset.com/sun.html). The Time and Date Web site shows that sunset for this location and
time happened at 7:18 p.m. If the camera is set to 7:09 p.m. local time, you might assume sunlight would
cast long shadows. Because the shadows look short, the date and time might not be accurate. In addition,
if latitude and longitude values are available in the Exif file, you could approximate the time of day based
on the length and angle of shadows to the sun. Of course, this calculation applies only to photos taken
outside on sunny days.
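The UTC-to-local adjustment can be done with Python's datetime module. The 1:09 a.m. UTC timestamp below is an assumed recorded value, chosen only so the arithmetic matches the 7:09 p.m. MDT result discussed in the text:

```python
from datetime import datetime, timedelta, timezone

# Mountain Daylight Time is 6 hours behind UTC
MDT = timezone(timedelta(hours=-6), name="MDT")

# Suppose the camera clock was set to UTC and recorded this timestamp:
recorded_utc = datetime(2013, 9, 11, 1, 9, tzinfo=timezone.utc)

# Convert to the local time at the scene
local = recorded_utc.astimezone(MDT)
print(local.strftime("%Y-%m-%d %I:%M %p %Z"))  # 2013-09-10 07:09 PM MDT
```

Note that the same instant falls on two different calendar dates depending on the zone, which is exactly why corroborating information matters when building a timeline from Exif metadata.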
Searching for and Carving Data from Unallocated Space:
At this time, you have to think about what to look for in the e-mails and on the mail servers. You
need to ask some basic questions and make some assumptions based on available information to proceed
in your search for information. The message from t_sadler@zoho.com is addressed to b_aspen@aol.com,
which matches the contractor’s name, Bob Aspen. Next, look at the time and date stamps in this message.
The date and time are July 10, 2017 3:32 PM, and farther down is a header from Terry Sadler with a time
and date stamp of July 10, 2017, 3:28 PM. Therefore, it seems Jim Shu sent the original message to the
t_sadler@zoho.com account. Now examine the second e-mail, which contains the following pieces of
information:
INTRODUCTION
Many say that the eyes are the window to the soul, but for the forensic examiner, Windows can be
the “soul” of the computer. The odds are high that examiners will encounter the Windows operating
system more times than not when conducting an investigation. The good news for us is that we can use
Windows itself as a tool to recover data and track the footprints left behind by the user. Because of this, it
is imperative that examiners have an extensive understanding of the Windows operating system and all
of its functions.
It’s a Windows world. With about 90% (Brodkin, 2011) of the desktop market share, a forensic examiner
will face a Windows machine the majority of the time. Getting cozy with Windows is an absolute necessity
in this line of work. In the course of using Windows and its multitude of compatible applications, users
will leave artifacts or footprints scattered throughout the machine. As you can imagine, this is pretty
handy from an investigative perspective. These artifacts are often located in unfamiliar or “hard to reach”
places. Even a savvy individual, bent on covering their tracks, can miss some of these buried forensic
treasures.
DELETED DATA
For the average user, hitting the delete key provides a satisfying sense of security. With the click of
a mouse, we think our data are forever obliterated, never again to see the light of day. Think again. We
know from Chapter 2 that, contrary to what many folks believe, hitting the delete key doesn’t do anything
to the data itself. The file hasn’t gone anywhere. “Deleting” a file only tells the computer that the space
occupied by that file is available if the computer needs it. The deleted data will remain until another file is
written over it. This can take quite some time, if it’s done at all.
File Carving:
The unallocated space on a hard drive can contain valuable evidence. Extracting this data is no
simple task. The process is known as file carving and can be done manually or with the help of a tool. As
you might imagine, tools can greatly speed up the process. Files are identified in the unallocated space by
certain unique characteristics. File headers and footers are common examples of these characteristics or
signatures. Headers and footers can be used to identify the file as well as to mark its beginning and end. Allocated space refers to the data that the computer is using and keeping tabs on. These are all
the files that we can see and open in Windows. The computer’s file system monitors these files and
records a variety of information about them. For example, the file system tracks and records the date and
time a particular file was last modified, accessed, and created. We’ll revisit this kind of information when
we talk about metadata later in this chapter.
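The header/footer idea behind file carving can be sketched as follows. This is a deliberately simplified illustration run against synthetic data; real carving tools also validate internal structure and handle fragmented files:

```python
JPEG_HEADER = b"\xff\xd8\xff"  # SOI marker plus start of APP marker
JPEG_FOOTER = b"\xff\xd9"      # EOI marker

def carve_jpegs(blob: bytes) -> list:
    """Carve JPEG candidates out of a blob of raw (unallocated) data.

    Scans for the JPEG header signature, then takes everything up to
    and including the next footer signature.
    """
    carved = []
    pos = 0
    while True:
        start = blob.find(JPEG_HEADER, pos)
        if start == -1:
            break
        end = blob.find(JPEG_FOOTER, start + len(JPEG_HEADER))
        if end == -1:
            break  # header with no footer: incomplete, skip
        carved.append(blob[start:end + len(JPEG_FOOTER)])
        pos = end + len(JPEG_FOOTER)
    return carved

# Synthetic "unallocated space": junk, one fake JPEG, more junk
blob = b"\x00junk" + b"\xff\xd8\xff\xe0fakeimage\xff\xd9" + b"padding"
print(len(carve_jpegs(blob)))  # 1
```

Tools such as Foremost and Scalpel automate this signature-driven scan across an entire image, which is why they dramatically speed up the process.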
Registry Structure
The registry is set up in a tree structure similar to the directories, folders, and files you’re used to
working with in Windows. The registry is broken into four tiers or levels. Inspecting the registry is
something that is done in nearly every forensic examination. Looking at the registry requires a tool that
can translate this information into something we can understand. Two of the major multipurpose
forensic tools, EnCase and FTK, do just that. As a key repository of critical system information, the
registry could contain quite a bit of evidence. As an added bonus, the Registry can also hold the
information we need to break any encrypted files we find.
FROM THE CASE FILES: THE WINDOWS REGISTRY
The Windows Registry helped law enforcement officials in Houston, Texas crack a credit card
case. In this case, the suspect’s stolen credit card numbers were used to purchase items from the Internet.
The two suspects in this case, a married couple, were arrested after a controlled drop of merchandise
ordered from the Internet. Examination of the computer's NTUSER.DAT, Registry, and Protected Storage System Provider information found a listing of multiple other names, addresses, and credit card numbers that were being used online to purchase items. After further investigation, investigators discovered that these too were being used illegally without the owners' consent.
The information recovered from the registry was enough to obtain additional search warrants.
These extra searches netted the arrest of 22 individuals and led to the recovery of over $100,000 of illegally purchased merchandise. Ultimately, all of the suspects pleaded guilty to organized crime charges
and were sentenced to jail time.
FROM THE CASE FILES: THE WINDOWS REGISTRY AND USBSTOR
In a small town outside of Austin, Texas, guests at a local hotel called police after observing an
individual at the hotel who was roaming mostly naked and appearing somewhat intoxicated. When the
police arrived, they found the individual and determined he was staying at the hotel. They accompanied
him back to his room and were surprised by what they found. When the door opened, they discovered
another individual in the room and a picture of child pornography being projected on the wall. The
projector was attached to a laptop. Two external hard drives were found lying next to the laptop. The
unexpected occupant said that the laptop was his but that the two external drives belonged to the other gentleman and had never been connected to his laptop. All of the equipment was seized and
sent for examination. Forensic clones were made of the laptop and both external drives. The initial
examination of the external drives found both still images and movies of child pornography.
Next, examiners wanted to determine if either of those drives had ever been connected to the
laptop. The system registry file of the laptop was searched for entries in the USBStor key. Listings for
external hard drives were discovered along with the hardware serial numbers from both external hard
drives. Next, examiners sought to validate their results. Using a lab computer system with a clean
installation of Windows, they connected the defendant's external drives to the lab system. A write blocker
was connected between the drives and the system to prevent any changes or modifications to the clones
of the external drives.
Attribution
The lab computer’s system registry file was then examined and the USBStor keys showed the same
external hard drive listings as the suspect’s with matching hardware serial numbers. These results proved
that the suspect’s external hard drives had in fact been hooked to the laptop at one time. The suspect was
eventually convicted of possession of child pornography. Digital forensics can be used to answer many
questions, such as, what terms were searched using Google? We can find that. Did Bob type those terms?
Houston, we’ve got a problem. Unfortunately, we can rarely put someone’s sticky fingers on the keyboard
when a particular artifact is created. We may need to uncover other evidence in order to connect those
dots. Tracking something back to a specific user account or identifying the registered owner of the system
is a much easier task. A single PC can have multiple user accounts set up on the machine. In a technical
sense, user accounts establish what that specific user can and can’t do on the computer (Microsoft
Corporation). A PC will set up two accounts by default, the administrator and a guest account. Other
accounts may be created, but they are not required. The administrator has all rights and privileges on the
machine. They can do anything. A guest account (which doesn’t require any login) generally has less
authority. For example, a family PC could have separate accounts for mom, dad, and each of the kids. Each
of these accounts could be password-protected. Each account on the machine is assigned a unique
number called a security identifier or SID. Many actions on the computer are associated with, and tracked
by, a specific SID. It’s through the SID that we can tie an account to some particular action or event.
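A small sketch of how a SID string breaks down. The SID value used here is a made-up example; the RID meanings (500 for the built-in Administrator, 501 for the built-in Guest) are standard Windows conventions:

```python
import re

def parse_sid(sid: str):
    """Split a local-account Windows SID string into its RID.

    Local account SIDs look like S-1-5-21-<machine-specific>-<RID>;
    the final component (the relative identifier, or RID) identifies
    the individual account on that machine.
    """
    m = re.fullmatch(r"S-1-5-21-\d+-\d+-\d+-(\d+)", sid)
    if not m:
        return None
    rid = int(m.group(1))
    well_known = {500: "built-in Administrator", 501: "built-in Guest"}
    return rid, well_known.get(rid, "user-created account")

print(parse_sid("S-1-5-21-1004336348-1177238915-682003330-500"))
# (500, 'built-in Administrator')
```

Because registry keys and artifacts are filed under these SIDs, resolving a SID back to an account name is often the first step in tying an action to a user account.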
External Drives
Information has value, sometimes substantial value. They don’t keep the formula for Coke under
lock and key for grins. Theft of intellectual property is a huge concern. One way that would-be thieves
could easily smuggle data out of an organization is by way of one of these external storage devices, such as
a thumb drive. As a result, examiners are often asked to determine whether any such device has been
attached to a computer. These devices can take a variety forms such as thumb drives or external hard
drives. In addition to stealing information, these devices can also be used to inject a virus or store child
pornography. Whether or not such a device was attached can be determined by data contained in the
registry. The registry records this kind of information with a significant amount of detail. It tells us both
the vendor and the serial number of the device.
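The vendor and serial information recorded under the USBStor key can be read straight out of the key names. The registry path below is a hypothetical example of the naming pattern, not data from a real system:

```python
def parse_usbstor_key(key_path: str) -> dict:
    """Extract vendor, product, and serial from a USBStor registry path.

    Device keys under SYSTEM\\...\\Enum\\USBSTOR are named like
    Disk&Ven_<vendor>&Prod_<product>&Rev_<rev>, with an instance subkey
    that usually holds the device serial number.
    """
    parts = key_path.split("\\")
    device, serial = parts[-2], parts[-1]
    fields = dict(
        piece.split("_", 1) for piece in device.split("&") if "_" in piece
    )
    return {
        "vendor": fields.get("Ven"),
        "product": fields.get("Prod"),
        "serial": serial.split("&")[0],  # strip the trailing &<n> suffix
    }

# Hypothetical key path illustrating the naming pattern:
key = r"SYSTEM\CurrentControlSet\Enum\USBSTOR\Disk&Ven_SanDisk&Prod_Cruzer&Rev_1.26\0001234567891234&0"
print(parse_usbstor_key(key))
```

Matching the serial number recorded here against the hardware serial of a seized drive is exactly the validation step described in the USBStor case above.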
PRINT SPOOLING
In some investigations, a suspect’s printing activities may be relevant. As you might expect,
printing can also leave some tracks for us to follow. You’ve probably noticed that there’s a bit of a delay
after you click Print. This delay is an indication of a process called spooling. Essentially, spooling
temporarily stores the print job until it can be printed at a time that is more convenient for the printer
(TechTarget). During this spooling procedure, Windows creates a pair of complementary files. One is the
Enhanced Metafile (EMF), which is an image of the document to be printed. The other is the spool file, which contains information about the print job itself.
There is one of each for every print job. What kind of information can we recover from the spool
file? The spool file (.spl) tells us things like the printer name, computer name as well as the user account
that sent the job to the printer. Either or both of these files may have evidentiary value. The problem is
they don’t stick around long. In fact, they are normally deleted automatically after the print job is
finished. However, there are a few exceptions. The first exception occurs if there is some kind of problem
and the document didn’t print. The second is that the computer that is initiating the print job may be set
up to retain a copy. Some companies may find this setup appealing if they have some reason to hang onto
a copy. Spool and EMF files can be used to directly connect targets to their crimes. Copies of extortion
letters, forged contracts, stolen client lists, and maps to body dump sites are but a few pieces of
evidentiary gold potentially mined from their computers.
RECYCLE BIN
The “trash can” has been a familiar presence on our computer desktops starting with the early
Macintosh systems. It’s a really good idea, especially from the casual user’s perspective. Users may not
understand sectors and bytes, but most everyone “gets” the trash can. Sometimes, though, the trash can
“gets” them. This is especially true when they count on the trash can to erase their evidence. They assume that their incriminating data have disappeared into a digital “Bermuda Triangle,” never again
to see the light of day. Unlike Amelia Earhart, that’s definitely not the case. Using forensic tools such as
Forensic Toolkit and EnCase, we can quite often bring those files back in mint condition.
Recycle Bin Function: Here’s a quick question. Where is a file moved when it’s deleted? I bet some of you
said the recycle bin. That would make the most sense. I mean, that’s where we put the unwanted files,
right? But it would also be wrong. When you delete a file, it’s moved to … wait for it … nowhere. The file
itself stays exactly where it was. It’s a common notion that when deleted, the file is actually picked up and
moved to the recycle bin. That’s not the case.
Unwanted files can be moved to the recycle bin a few different ways. They can be moved from a
menu item or by dragging and dropping the file to the recycle bin. Finally, you can right-click on an item
and choose Delete. The benefit of putting files into the recycle bin is that we can dig through it and pull
our files back out. I’ve worked in places where digging through office trash can be a pretty hazardous
undertaking. Fortunately, things aren’t nearly as dicey on our computers. As long as our files are still “in
the can,” we can get them back. However, emptying the recycle bin (i.e., “taking out the trash”) makes
recovery pretty much impossible for the average user.
Not everything that’s deleted passes through the recycle bin. A user can actually bypass the bin
altogether. Bypassing can be done a couple of ways. First, if you press Shift+Delete, the file will go straight
to unallocated space without ever going through the recycle bin. You can also configure your machine to
bypass the recycle bin altogether. Your deleted files won’t even brush the sides of the recycle bin. The
recycle bin is obviously one of the first places that examiners look for potential evidence. The first instinct
suspects have is to get rid of any and every incriminating file on their computer. Not fully understanding
how their computer works, they put all their faith in the recycle bin. Now you know that’s a bad move.
Lucky for us, many folks still don’t recognize how misplaced their faith is. As a result, the recycle bin is a
great place to look for all kinds of potentially incriminating files.
Recycle Bin Bypass
If an examiner suspects that the system has been set to bypass the recycle bin, the first thing they
would check would be the registry. The “NukeOnDelete” value would be set to “1” indicating that this
function had been switched on.
METADATA
Metadata is most often defined as data about data. Odds are you’ve come across metadata at some
point. You may not have known that’s what you were looking at. There are two flavors of metadata if you
will: application and file system. Remember, the file system keeps track of our files and folders as well as
some information about them. File system metadata include the date and time a file or folder was created,
accessed, or modified. If you right-click on a file and choose “Properties,” you can see these date/time
stamps.
Although this information can prove quite valuable to an investigation, we must keep in mind that
all these date/time stamps may not be what they seem. One problem is that the system’s clock can be
changed by the user. Time zone differences can also cause some issues. Let’s take a little closer look at the
created, accessed, and modified date/time stamps.
Metadata information as seen after right-clicking on the file and choosing “Properties.” Note the
created, modified, and accessed dates and times.
Created—The created date/time stamp frequently indicates when a file or folder was created on a
particular piece of media, such as a hard drive (Casey, 2009). How the file got there makes a difference. By
and large, a file can be saved, copied, cut and pasted, or dragged and dropped.
Modified—The modified date and time are set when a file is altered in any way and then saved (Casey,
2009).
Accessed—This date/time stamp is updated whenever a file is accessed by the file system. Accessed does
not mean the same thing as opened. You may be asking how a file can be accessed without being opened,
and that’s a good question. You see, the computer itself can interact with the files. Antivirus scans and
other preset events are just two examples of this automated interaction.
Although metadata used to be one of our best-kept secrets, it’s not any more. The criminals aren’t
the only ones taking notice. Corporations, law firms, and private citizens are just some of the folks
concerned about metadata and the information contained therein. These legitimate concerns are being
addressed by actually removing the metadata prior to sharing those files with other folks. Many tools
exist for just that purpose. For example, law firms routinely scrub the metadata from all of their
outbound documents, like those transmitted via e-mail. For the privacy-minded individual, the newer
versions of Microsoft Word have the ability to detect and remove metadata. Recovered metadata can be
used to refute claims by a suspect that they had no knowledge of a file’s existence. It’s tough to claim you
didn’t know it was there when you not only opened the file but you changed or deleted the file as well.
These dates and times can also be used to construct timelines in a case.
FROM THE CASE FILES: METADATA
Metadata can help investigators identify all the suspects in a case and recover more evidence.
Take this case from Houston, Texas regarding the production of counterfeit credit cards. The suspects in
this case used “skimmed” card information in their card production process. Credit card “skimming” is
when thieves grab the data from the magnetic strip on the back of credit and debit cards. This often
occurs during a legitimate transaction, such as when you use your card to pay for dinner.
After identifying their prime suspect, police arrested him and searched his computer. In the end,
the search of the computer was disappointing. The search only found one Microsoft Word document that
contained “skimmed” information. Furthermore, the search of the residence found no skimmer hardware
and there was no skimming software located on the computer. Not exactly the treasure trove they had
hoped to find. The exam didn’t stop there. Further examination of the Word document hit pay dirt. A
review of the metadata revealed the author of the document, a female. Further investigation found that
she was the suspect's girlfriend and that she worked as a waitress in a neighboring town. This
information gave investigators the probable cause needed to obtain a second search warrant for her
apartment. During the second search, the skimmer (the piece of hardware used to extract the data from
the magnetic strip) was recovered. The examination of the computer found not only the skimming
software, but additional lists of debit cards and related information. Fortunately, this information was
seized before it could be used. Both suspects were eventually found guilty.
THUMBNAIL CACHE
To make it easier to browse the pictures on your computer, Windows creates smaller versions of
your photos called thumbnails. Thumbnails are just miniaturized versions of their larger counterparts.
These miniatures are created automatically by Windows when the user chooses “Thumbnail” view in Windows Explorer. Windows creates a couple of different kinds of thumbnail files, depending on the version being used. Windows XP creates a file called thumbs.db. Windows Vista and Windows 7 create a similar file called thumbcache.db.
Most users are completely unaware that these files even exist. The cool thing about these files is
that they remain even after the original images have been deleted. Even if we don’t recover the original
image, thumbnails can serve as the next best evidence. Their mere existence tells us that those pictures
existed at one point on the system.
MOST RECENTLY USED (MRU)
Windows tries to make our lives, at least on our computers, as pleasant as possible. They may not always succeed, but their hearts are in the right place. The Most Recently Used (MRU) list is one such example of Microsoft thinking of us. MRU lists contain links that serve as shortcuts to applications or files that have recently been used. You can see them in action by clicking the Windows Start button or through the File menu in many applications.
RESTORE POINTS AND SHADOW COPY
Do you ever wish you could go back in time? We’re not there yet, but lucky for us, Windows is.
There may come a time when it’s just easier (or necessary) for our computers to revert back to an earlier
point in time when everything was working just fine. In Windows, these are called restore points (RP),
and they serve as time travel machines for our computers.
Restore Points
Restore points are snapshots of key system settings and configuration at a specific moment in
time (Microsoft Corporation). These snapshots can be used to return the system to working order.
Restore points are created in different ways. They can be created by the system automatically before
major system events, like installing software. They can be scheduled at regular intervals, such as weekly.
Finally, they can be created manually by a user. The restore point feature is on by default, and one
snapshot is automatically produced every day. Before you start looking around for your restore points,
you should know that Microsoft has taken steps to keep them from your prying eyes. They are normally
hidden from the user. These RPs have metadata (data about the data) associated with them. This
information could be valuable in determining the point in time when this snapshot was taken. If the RP
contains evidence, this can tell us exactly when that data existed on the system in question. Digging
through the restore points may reveal evidentiary gems that don’t exist anywhere else. For the average
person trying to conceal information from investigators, restore points are likely not the first place they
would start destroying evidence. Obviously, that works in our favor.
FROM THE CASE FILES: INTERNET HISTORY & RESTORE POINTS
A defendant accused of possessing child pornography claimed that he had visited the site in question on only one occasion, and that was only by accident. To refute this claim, examiners turned to
the restore points for the previous two months. Examination of each of the registry files found in the
various restore points told a significantly different story. The evidence showed that not only had multiple
child pornography sites been visited, but the URLs had been typed directly into the address bar of the
browser, destroying his claim that the site was visited by accident. Confronted with this new evidence, the
defendant quickly accepted a plea deal.
Shadow Copies
Shadow copies provide the source data for restore points. Like the restore point, shadow files are
another artifact that could very well be worth a look. We can use them to demonstrate how a particular
file has been changed over time. They can likewise hold copies of files that have been deleted (Larson,
2010).
FROM THE CASE FILES: RESTORE POINTS, SHADOW COPIES, AND ANTI-FORENSICS
Officers from the Texas OAG (Office of the Attorney General) Cyber Unit, responding to a tip,
served a search warrant at the suspect’s residence. The OAG Cyber Unit obtained the search warrant after
they were alerted that the suspect was uploading child pornography to the Internet. When the officers
served the search warrant, they found the house unoccupied. Officers called the suspect, letting him know
they were in his home and asking him to return immediately to meet with them. When the
suspect arrived, officers interviewed him and searched his vehicle. Inside the car was a laptop
computer. All items seized were taken to the OAG offices for forensic examination. During the exam of the
suspect’s laptop, an alarming discovery was made. It appeared the suspect, on the drive home to meet the
officers, used a wiping tool to get rid of not only incriminating images but the Internet history from his
laptop. While the initial exam found no child pornography on the laptop, other compelling evidence was
recovered.
For example, the examiner was able to recover logs from the wiping program itself showing that it
had indeed been run. That wasn’t all. Since the operating system was Windows Vista, the examiner
decided to check the shadow copies found on the machine. Remember, these shadow copies (the data behind
System Restore points) are essentially snapshots of data at a given point in time. Next, the forensic image (clone)
of the suspect's laptop was loaded into a virtual environment. This enabled the examiner to see the
computer system as the suspect saw it. The examiner exported the restore points from the suspect's
laptop, then imported those same files into his forensic tool. This process allowed the examiner to use his
tools to extract images and other information from the suspect’s system restore points. This procedure hit
pay dirt. More than 3000 images of child pornography were recovered. In addition, log files were found
showing searches and downloads of those same files. When all was said and done, the suspect pleaded
guilty and is currently serving 10 years in a Texas state prison.
PREFETCH
Prefetch files can show that an application was indeed installed and run on the system at one
time. Take, for example, a wiping application such as “Evidence Eliminator.” Programs like this are
designed to completely destroy selected data on a hard drive. Although we may not be able to recover the
original evidence, the mere presence of “Evidence Eliminator” can prove to be almost as damning as the
original files themselves. Stay tuned for more discussion on “Evidence Eliminator.”
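Prefetch files live in C:\Windows\Prefetch and are named after the executable plus a hexadecimal hash that Windows derives from the executable's path. A minimal sketch of reading that naming convention (the sample filename below is hypothetical):

```python
# Sketch: interpret a Windows Prefetch filename of the form NAME.EXE-XXXXXXXX.pf.
# The hex suffix is a hash Windows computes from the executable's full path, so
# the same program run from two locations produces two different .pf files.

def parse_prefetch_name(filename: str):
    """Split a .pf filename into (executable name, path-hash string)."""
    if not filename.lower().endswith(".pf"):
        raise ValueError("not a prefetch file: %s" % filename)
    stem = filename[:-3]                    # drop the ".pf" extension
    exe, _, path_hash = stem.rpartition("-")
    return exe, path_hash

# Hypothetical example filename:
exe, path_hash = parse_prefetch_name("EVIDENCEELIMINATOR.EXE-1A2B3C4D.pf")
print(exe, path_hash)  # EVIDENCEELIMINATOR.EXE 1A2B3C4D
```

Parsing the binary contents of a .pf file (run counts, last-run timestamps) requires a dedicated parser and is beyond this sketch.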
LINK FILES
We all love shortcuts. They help us avoid road construction and steer clear of traffic jams. They
save us time and make our travels easier, at least in theory. Microsoft Windows also likes shortcuts. It likes
them a lot. Link files are simply shortcuts. They point to other files. Link files can be created by us, or
more often by the computer. You may have created a shortcut on your desktop to your favorite program
or folder. The computer itself creates them in several different places. You’ve likely seen and used these
link files before. Take Microsoft Word, for example. If you look under the File menu, you’ll see an option
called “recent.” The items in that list are link files, or shortcuts, created by the computer. Link files have
their own date and time stamps showing when they were created and last used. The existence of a link
file can be important. It can be used to show that someone actually opened the file in question. It can also
be used to refute the assertion that a file or folder never existed. Link files can also contain the full file
path, even if the storage device is no longer connected, like a thumb drive.
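A quick sketch of triaging link files with their filesystem timestamps. Fully decoding a .lnk (to recover the stored target path) requires parsing Microsoft's Shell Link binary format, which is omitted here; this only enumerates the shortcuts and their modified times:

```python
# Sketch: enumerate Windows shortcut (.lnk) files in a folder, such as the
# user's Recent folder, and report each one's last-modified timestamp.
import os
from datetime import datetime, timezone

def list_link_files(directory):
    """Return (name, modified-time) pairs for every .lnk file in directory."""
    results = []
    for name in sorted(os.listdir(directory)):
        if name.lower().endswith(".lnk"):
            st = os.stat(os.path.join(directory, name))
            mtime = datetime.fromtimestamp(st.st_mtime, tz=timezone.utc)
            results.append((name, mtime))
    return results
```

A modified time on a shortcut in the Recent folder is a useful proxy for when the underlying file was last opened.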
Installed Programs
Software that is or has been installed on the questioned computer could also be of interest. This is
especially true if the same application has now been removed after some relevant point in time (i.e., when
the suspect became aware of a potential investigation). There are multiple locations on the drive to look
for these artifacts. The program folder is a great place to start. Link and prefetch files are two other
locations that could also bear fruit.
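As a first pass at the program folder, a sketch like the following lists candidate installed-program directories; the folder path is a parameter so the same check can be pointed at a mounted forensic image (the "C:\Program Files" path in the comment is only the usual live-system location):

```python
# Sketch: enumerate subdirectories of a Program Files-style folder as a quick
# inventory of software that is, or was, installed.
import os

def installed_program_dirs(program_folder):
    """Return sorted subdirectory names under the given program folder."""
    return sorted(
        name for name in os.listdir(program_folder)
        if os.path.isdir(os.path.join(program_folder, name))
    )

# Usage against a live system might look like:
#   installed_program_dirs(r"C:\Program Files")
```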
5. VFAT—Developed to handle filenames longer than the eight-character name and three-character
extension (8.3) limit; introduced with Windows 95. VFAT is an extension of other FAT file systems.
Cluster sizes vary according to the hard disk size and file system. Table 5-2 lists the number of
sectors and bytes assigned to a cluster on FAT16 disk according to hard disk size. For FAT32 file systems,
cluster sizes are determined by the OS. Clusters can range from 1 sector consisting of 512 bytes to 128
sectors of 64 KB.
Microsoft OSs allocate disk space for files by clusters. This practice results in drive slack,
composed of the unused space in a cluster between the end of an active file’s content and the end of the
cluster. Drive slack includes RAM slack (found mainly in older Microsoft OSs) and file slack. In newer
Windows OSs, when data is written to disk, the remaining RAM slack is zeroed out and contains no RAM
data. For example, suppose you create a text document containing 5000 characters— that is, 5000 bytes
of data. If you save this file on a FAT16 1.6 GB disk, a Microsoft OS reserves one cluster for it automatically.
For a 1.6 GB disk, the OS allocates one 32 KB cluster (64 sectors of 512 bytes each, or 32,768 bytes) for your file. The
unused space, about 27,768 bytes, is the drive slack. Within it, RAM slack is the unused portion of the last sector
that holds file data, and the remaining unused sectors are referred to as "file slack". File fragments, deleted e-
mails, and passwords are often found in RAM and file slack.
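The slack arithmetic above can be sketched directly. Using the FAT16 geometry from the example (64 sectors of 512 bytes per cluster), a 5,000-byte file is allocated 32,768 bytes, leaving 27,768 bytes of slack:

```python
# Sketch of the cluster-allocation and slack-space arithmetic described above.
SECTOR = 512

def allocated_bytes(file_size, sectors_per_cluster):
    """Bytes the OS actually reserves: whole clusters, rounded up."""
    cluster = sectors_per_cluster * SECTOR
    clusters_needed = max(1, -(-file_size // cluster))  # ceiling division
    return clusters_needed * cluster

def file_slack(file_size, sectors_per_cluster):
    """Unused bytes between end of file content and end of the last cluster."""
    return allocated_bytes(file_size, sectors_per_cluster) - file_size

print(allocated_bytes(5000, 64))  # 32768
print(file_slack(5000, 64))       # 27768
```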
When an allocated cluster runs out of room, the OS allocates another cluster for your file. As
files grow and require more disk space, assigned clusters are chained together. Typically, chained clusters
are contiguous on the disk. However, as some files are created and deleted and other files are expanded,
the chain can be broken or fragmented. With a tool such as WinHex, you can view the cluster-chaining
sequence and see how FAT addresses linking clusters to one another.
When the OS stores data in a FAT file system, it assigns a starting cluster position to a file. Data for
the file is written to the first sector of the first assigned cluster. When this first assigned cluster is filled
and runs out of room, FAT assigns the next available cluster to the file. If the next available cluster isn’t
contiguous to the current cluster, the file becomes fragmented. In the FAT for each cluster on the volume
(the partitioned disk), the OS writes the address of the next assigned cluster. Think of clusters as buckets
that can hold a specific number of bytes. When a cluster (or bucket) fills up, the OS allocates another
cluster to collect the extra data. On rare occasions, such as a system failure or sabotage, these cluster
chains can break. If they do, data can be lost because it’s no longer associated with the previous chained
cluster. FAT looks forward for the next cluster assignment but doesn’t provide pointers to the previous
cluster. Rebuilding these broken chains can be difficult.
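The forward-only chaining described above can be sketched with the FAT modeled as a simple lookup table. The table below is a made-up example of a fragmented file occupying clusters 2, 3, and 7; 0xFFFF stands in for the FAT16 end-of-chain marker:

```python
# Sketch: walk a FAT cluster chain. Each FAT entry holds the address of the
# next cluster in the file; the chain ends at the end-of-chain marker. Note
# there are no backward pointers, which is why broken chains are hard to rebuild.
EOC = 0xFFFF

def follow_chain(fat, start):
    """Return the ordered list of clusters belonging to one file."""
    chain = []
    cluster = start
    while cluster != EOC:
        if cluster in chain:  # defensive check: a looping (corrupt) chain
            raise ValueError("cluster chain loops at %d" % cluster)
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

fat = {2: 3, 3: 7, 7: EOC}      # hypothetical FAT fragment
print(follow_chain(fat, 2))     # [2, 3, 7]
```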
Deleting FAT Files
When a file is deleted in Windows Explorer or with the MS-DOS delete command, the OS inserts a
HEX E5 (0xE5) in the filename’s first letter position in the associated directory entry. This value tells the
OS that the file is no longer available and a new file can be written to the same cluster location. In the FAT
file system, when a file is deleted, the only modifications made are that the directory entry is marked as a
deleted file, with the HEX E5 character replacing the first letter of the filename, and the FAT chain for
that file is set to 0. The data in the file remains on the disk drive. The area of the disk where the deleted
file resides becomes unallocated disk space (also called “free disk space”). The unallocated disk space is
now available to receive new data from newly created files or other files needing more space as they grow.
Most forensics tools can recover data still residing in this area.
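The 0xE5 deletion marker can be illustrated with a simplified model of a FAT directory entry (a real entry is 32 bytes, with the 8.3 name in the first 11 bytes; the entry below is a made-up example):

```python
# Sketch: how deleting a file changes its FAT directory entry. The OS simply
# overwrites byte 0 of the entry with 0xE5; the file's data clusters are untouched,
# which is why forensic tools can often recover the content.

def mark_deleted(entry: bytes) -> bytes:
    """Return the directory entry with the deletion marker in byte 0."""
    return b"\xe5" + entry[1:]

def is_deleted(entry: bytes) -> bool:
    return entry[0] == 0xE5

entry = b"REPORT  TXT" + bytes(21)   # 32-byte entry, 8.3 name "REPORT.TXT"
deleted = mark_deleted(entry)
print(is_deleted(entry), is_deleted(deleted))  # False True
```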
NT File System (NTFS) was introduced when Microsoft created Windows NT and is still the main
file system in Windows 10. Each generation of Windows since NT has included minor changes in NTFS
configuration and features. The NTFS design was partially based on, and incorporated many features
from, Microsoft's joint OS/2 project with IBM; in that OS, the file system was the High
Performance File System (HPFS). When Microsoft created Windows NT, it provided backward-
compatibility so that NT could read OS/2 HPFS disk drives. Since the release of Windows 2000, this
backward-compatibility is no longer available. For a detailed explanation of NTFS structures, see
www.ntfs.com/ntfs.html.
NTFS offers substantial improvements over FAT file systems. It provides more information about a
file, including security features, file ownership, and other file attributes. With NTFS, you also have more
control over files and folders (directories) than with FAT file systems.
NTFS was Microsoft’s move toward a journaling file system. The system keeps track of
transactions such as file deleting or saving. This journaling feature is helpful because it records a
transaction before the system carries it out. That way, in a power failure or other interruption, the system
can complete the transaction or go back to the last good setting.
In NTFS, everything written to the disk is considered a file. On an NTFS disk, the first data set is
the Partition Boot Sector, which starts at sector [0] of the disk and can expand to 16 sectors. Immediately
after the Partition Boot Sector is the Master File Table (MFT). The MFT, similar to FAT in earlier
Microsoft OSs, is the first file on the disk. An MFT file is created at the same time a disk partition is
formatted as an NTFS volume and usually consumes about 12.5% of the disk when it’s created. As data is
added, the MFT can expand to take up 50% of the disk. (The MFT is covered in more detail in “NTFS
System Files.”)
An important advantage of NTFS over FAT is that it results in much less file slack space. Compare
the cluster sizes in Table 5-3 with Table 5-2, which showed FAT cluster sizes. Clusters are smaller for
smaller disk drives. This feature saves more space on all disks using NTFS. NTFS (and VFAT for long
filenames) also uses Unicode, an international data format. Unlike the American Standard Code for
Information Interchange (ASCII) 8-bit configuration, Unicode uses an 8-bit, a 16-bit, or a 32-bit
configuration. These configurations are known as UTF-8 (Unicode Transformation Format), UTF-16, and
UTF-32. For Western-language alphabetic characters, UTF-8 is identical to ASCII (see
www.unicode.org/versions for more details). Knowing this feature of Unicode comes in handy when you
perform keyword searches for evidence on a disk drive. (This feature is discussed in more detail in
Chapter 9.) Because NTFS offers many more features than FAT, more utilities are used to manage it.
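The practical consequence for keyword searching is that the same string produces different byte patterns under each encoding, so a raw search of a disk image must try them all. A minimal sketch (NTFS artifacts commonly store strings as UTF-16 little-endian):

```python
# Sketch: search raw bytes for a keyword under the three Unicode encoding
# forms. An ASCII-only search would miss UTF-16/UTF-32 hits entirely.

def find_keyword(data: bytes, keyword: str):
    """Return the encodings under which the keyword occurs in the raw data."""
    hits = []
    for enc in ("utf-8", "utf-16-le", "utf-32-le"):
        if keyword.encode(enc) in data:
            hits.append(enc)
    return hits

# Hypothetical evidence buffer containing a UTF-16LE string:
evidence = b"\x00\x01" + "password".encode("utf-16-le") + b"\xff"
print(find_keyword(evidence, "password"))  # ['utf-16-le']
```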
This information helps you determine when a suspect’s computer was last accessed, which is
particularly important with computers that might have been used after an incident was reported.
Startup in Windows 7, Windows 8, and Windows 10
Since Windows Vista, Microsoft has changed
its approach to OS boot processes. In addition, Windows 8 and 10 are multiplatform OSs that can run on
desktops, laptops, tablets, and smartphones. This discussion covers desktop and laptop computers
running Windows 10, although Windows Vista, 7, and 8 are very similar.
All Windows 8 and 10 boot processes are designed to run on multiple devices, ranging from
desktop or laptop systems to tablets and smartphones. In Windows Vista and later, the boot process uses
a boot configuration data (BCD) store. For desktops and laptops (BIOS-designed systems), a BCD Registry
file in the \Boot\Bcd folder is maintained to control the boot process. To access this file, you use the BCD
Editor; Regedit and Regedt32 aren’t associated with this file.
In Windows 8 and 10, the BCD contains the boot loader that initiates the system’s bootstrap
process when Windows starts. To access the Advanced Boot Options menu during the bootstrap process,
press F8 or F12 when the system is starting. This menu enables you to choose among Safe Mode (or
Enable Safe Mode, in Windows 8 and 10), Enable Boot Logging, and Disable Driver Signature Enforcement.
To access the computer’s firmware to modify the boot priority order, press F2 or Delete. Follow the
onscreen instructions to save the updates and reboot the computer. For additional information on
Windows boot processes, refer to “Insight of Operating System booting process – Windows 10” (Vinit
Pandey, https://vinitpandey.wordpress.com/2016/10/21/insight-of-operating-system-booting-process-
windows-10/). For information on IBM-compatible laptop and desktop computers, see “The BIOS/MBR
Boot Process” (https://neosmart.net/wiki/mbr-boot-process/). To learn more about changing the boot
order of a Windows OS, see “Computer Boot Order: How to change computer boot order for booting from a
CD/DVD, USB disk or floppy” (Margus Saluste, www.winhelp.us/computer-boot-order.html).
Linux inherited the majority of Unix design ideals, primarily because it began as a functional
reimplementation of the standard Unix that had been developed by AT&T and later reimplemented by
a small group at the University of California at Berkeley as the Berkeley Software Distribution (BSD). This
meant that anyone familiar with how Unix or even BSD worked could start using Linux and be
immediately productive. Over the decades since Torvalds first released Linux, many projects have started
up to increase the functionality and user-friendliness of Linux. This includes several desktop
environments, all of which sit on top of the X Window System, which was first developed at MIT
(which, again, was involved in the development of Multics).
The development of Linux itself, meaning the kernel, has changed the way developers work. As an
example, Torvalds was dissatisfied with the capabilities of software repository systems that allowed
concurrent developers to work on the same files at the same time. As a result, Torvalds led the
development of git, a version-control system that has largely supplanted other version-control systems for
open source development. If you want to grab the current version of source code from most open source
projects these days, you will likely be offered access via git. Additionally, there are now public repositories
for projects to store their code that support the use of git, a source code manager, to access the code.
GNOME Desktop
The default environment provided in Kali Linux is based on the GNOME desktop. This desktop
environment was part of the GNU (GNU’s Not Unix, which is referred to as a recursive acronym) Project.
Currently, Red Hat is the primary contributor and uses the GNOME desktop as its primary interface, as
do Ubuntu and others.
File and Directory Management
To start, let’s talk about getting the shell to tell you the directory you are currently in. This is
called the working directory. To get the working directory, the one we are currently situated in from the
perspective of the shell, we use the command pwd, which is shorthand for print working directory. In the example below,
you can see the prompt, which ends in #, indicating that the effective user who is currently logged in is a
superuser. The # ends the prompt, which is followed by the command that is being entered and run. This
is followed on the next line by the results, or output, of the command. Printing your working directory
root@rosebud:~# pwd
/root
Once we know where in the filesystem we are, which always starts at the root directory (/) and
looks a bit like a tree, we can get a listing of the files and directories. You will find that with Unix/Linux
commands, the minimum number of characters is often used. In the case of getting file listings, the
command is ls. While ls is useful, it only lists the file and directory names. You may want additional
details about the files, including times and dates as well as permissions. You can see those results by using
the command ls -la. The l (ell) specifies long listing, including details. The a specifies that ls should show
all the files, including files that are otherwise hidden.
root@rosebud:~# ls -la
drwxr-xr-x 2 root root 4096 Oct 29 21:10 Desktop
drwxr-xr-x 2 root root 4096 Oct 29 21:10 Documents
drwxr-xr-x 2 root root 4096 Oct 29 21:10 Downloads
directories store user-specific settings and logs. Because they are managed by the applications that create
them, as a general rule, they are hidden from regular directory listings. The program touch can be used to
update the modified date and time to the moment that touch is run. If the file doesn’t exist, touch will
create an empty file that has the modified and created timestamp set to the moment touch was executed.
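The behavior of touch can be sketched in a few lines of Python, which makes the two cases explicit:

```python
# Sketch of what touch does: create the file if it doesn't exist, otherwise
# update its access/modification times to "now".
import os
from pathlib import Path

def touch(path):
    p = Path(path)
    if p.exists():
        os.utime(p, None)   # update atime/mtime to the current time
    else:
        p.touch()           # create a new, empty file
    return p.stat().st_mtime
```

Note that Python's own Path.touch() already handles both cases; the if/else is spelled out here only to mirror the description above.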
Shells
The common default shell used is the Bourne Again Shell (bash). However, other shells can be
used. If you are feeling adventurous, you could look at other shells like zsh, fish, csh, or ksh. A shell like
zsh offers the possibility of a lot of customization using features including plug-ins. If you want to
permanently change your shell, you can either edit /etc/passwd or just use chsh and have your shell
changed for you.
Administrative Privileges for Services
Services are system-level. Managing them requires administrative privileges. Either you need to be
root or you need to use sudo to gain temporary root privileges in order to perform the service
management functions. For a long time, many Linux distributions used the AT&T init startup process.
This meant that services were run with a set of scripts that took standard parameters. The init startup
system used runlevels to determine which services started. Single-user mode would start up a different
set of services than multiuser mode. Even more services would be started up when a display manager is
being used, to provide GUIs to users. The scripts were stored in /etc/init.d/ and could be managed by
providing parameters such as start, stop, restart, and status. As an example, if you wanted to start the SSH
service, you might use the command /etc/init.d/ssh start. The problem with the init system, though, was
that it was generally serial in nature. This caused performance issues on system startup because every
service would be started in sequence rather than multiple services starting at the same time. The other
problem with the init system was that it didn't support dependencies well: often, one service relies on
other services that must be started first.
Ethics Warning
You need to ensure that the systems you are working on—especially when there could be damage
or disruption, and just about everything we will be talking about has that potential—are either yours or
systems you have permission to be testing. It’s unethical at a minimum and likely even illegal to be
testing any system you don’t own or have permission to be testing. Testing, no matter how simple it may
seem to be, always has the potential to cause damage. Get your permission in writing, always!