
Asim Sharif Satti

1
Introduction
• Hometown: Islamabad

• Current Job:
– Designation: Visiting Lecturer (International Islamic University, Islamabad)
– Joined: September 2009

• Education with grades:


– [2017 - 2019] MS Network Security, SZABIST, Islamabad
• GPA: 3.46/4.00 (Highest GPA in the course)
• Research Work: An investigative study into cloud computing forensics
– [1999-2001] Masters in Computer Science, Hamdard University, Karachi
• GPA: 3.27/4.00
• FYP: IP Telephony for text chat, voice chat and whiteboard testing
– [1996 - 1998] Bachelor of Science, Punjab University
• Marks: 377/800
– [1994-1996] HSSC, Federal Government Postgraduate College for Men H-8 Islamabad
• Marks: 631/1100
– [1992-1994] SSC, Federal Government Postgraduate College for Men H-8 Islamabad
• Marks: 609/850

2
Summary
• Publication: NO
• Awards:
– Achieved the highest GPA in MS Network Security
• Teaching experience:
– Position: Visiting Lecturer (2009 to date)
Organization: International Islamic University, Islamabad
• Subjects taught:
– Operating Systems, Theory of Computation, Computer Networks, Introduction to
Computer Science, Design & Analysis of Algorithms, Calculus, Multivariable calculus,
Discrete structures, Probability & Statistics, Differential Equations
• Final year project supervised: NO
• Administrative experience:
– Position: Assistant Project Manager (2006 to 2015)
Organization: DevDesk Technologies (Pvt.) Ltd, Islamabad
– Position: Assistant Director PISCES (2004 to 2006)
Organization: Federal Investigation Agency (FIA), Pakistan
– Position: Network Administrator (2001 to 2004)
Organization: Renaissance Soft (Pvt) Ltd., Rawalpindi

3
An investigative study into cloud
computing forensics

4
Introduction
• Digital Forensics is the process of identifying, preserving, analyzing and presenting
digital evidence in a way that is legally acceptable.

• Acquiring digital evidence from a cloud computing platform is much more complex
due to the cloud's distributed nature, elasticity, data-ownership issues and remote
storage locations controlled by the service providers.

• Three common platforms that represent the cloud are IaaS (infrastructure as a
service), PaaS (platform as a service) and SaaS (software as a service).

• A cloud environment is a virtual environment, so all types of evidentiary data
maintained by the operating system, such as records of executed application
programs, temporary internet files, log entries and registry entries, are lost when
the user exits or closes the session.

• Another issue is that most of the evidentiary data resides on the CSP side.
Obtaining data from the CSP depends on the jurisdiction of that country and on
the SLA provisions.

5
Objectives
• To investigate and explore the existing digital forensic analysis
techniques in the domain of cloud computing.

• To critically evaluate the strengths and weaknesses of existing
forensic analysis techniques in the area of cloud computing.

• To propose a conceptual model that addresses the identified
issues.

6
Critical analysis/evaluation

7
Key Challenges
• Acquisition of forensic data

• Integrity / authenticity of forensic data

• CSP Dependence

• Multi-tenancy nature of cloud

• Decentralization

10
Proposed Conceptual Model

11
Conclusions
• Critically evaluated different digital forensic analysis
approaches that facilitate speedy and authentic analysis of
incriminating activities in the cloud environment.

• The critical analysis discusses the merits and demerits of the
existing cloud forensics approaches.

• The review of forensic approaches helped to identify existing
gaps, and based on these gaps a conceptual framework is
proposed.

12
Future work
• The proposed conceptual model for Cloud Forensic
Investigation needs to be implemented and validated as
future work.

13
References
• M. E. Alex and R. Kishore, "Forensics framework for cloud computing," Computers & Electrical
Engineering 60: 193-205, 2017.
• N. A. Mutawa, J. Bryce, V. N. L. Franqueira and A. Marrington, "Forensic investigation of cyberstalking cases
using Behavioural Evidence Analysis," Digital Investigation 16: 96-103, 2016.
• H. Chung, J. Park, S. Lee and C. Kang, "Digital forensic investigation of cloud storage services," Digital
Investigation 9(2): 81-95, 2012.
• V. Roussev and S. McCulley, "Forensic analysis of cloud-native artifacts," Digital Investigation 16:
104-113, 2016.
• B. Martini and K.-K. R. Choo, "An integrated conceptual digital forensic framework for cloud
computing," Digital Investigation 9(2): 71-80, 2012.
• J. Dykstra and A. T. Sherman, "Acquiring forensic evidence from infrastructure-as-a-service cloud
computing: Exploring and evaluating tools, trust, and techniques," Digital Investigation 9: 90-98,
2012.
• T. Sang, "A log based approach to make digital forensics easier on cloud computing," In: Third
International Conference on Intelligent System Design and Engineering Applications (ISDEA), 16-18 Jan.
2013, Hong Kong, China, pp. 91-94, 2013.
• V. Roussev, I. Ahmed, A. Barreto, S. McCulley and V. Shanmughan, "Cloud forensics – Tool development
studies & future outlook," Digital Investigation 18: 79-95, 2016.

14
References
• Z. Qi, C. Xiang, R. Ma, J. Li, H. Guan and D. S. L. Wei, "ForenVisor: A tool for acquiring and preserving reliable
data in cloud live forensics," IEEE Transactions on Cloud Computing 5(3): 443-456, 2017.
• J. Boucher and N. A. L. Khac, "Forensic framework to identify local vs synced artefacts," Digital
Investigation 24: 68-75, 2018.
• E. E. D. Hemdan and D. H. Manjaiah, "A cloud forensic strategy for investigation of cybercrime," In: IEEE
International Conference on Emerging Technological Trends (ICETT), 21-22 October 2016, Kollam,
India, pp. 1-5, 2016.
• J. Dykstra and A. T. Sherman, "Design and implementation of FROST: Digital forensic tools for the
OpenStack cloud computing platform," Digital Investigation 10: 87-95, 2013.
• D. Quick and K.-K. R. Choo, "Forensic collection of cloud storage data: Does the act of collection result in
changes to the data or its metadata?," Digital Investigation 10(3): 266-277, 2013.
• J. S. Hale, "Amazon Cloud Drive forensic analysis," Digital Investigation 10(3): 259-265, 2013.
• M. E. Alex and R. Kishore, "Forensic model for cloud computing: An overview," In: IEEE International
Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), 23-25 March
2016, Chennai, India, 2016.
• K. Kent, S. Chevalier, T. Grance and H. Dang, "Guide to integrating forensic techniques into incident
response," NIST Special Publication 800-86, 2006.
• K. Ruan, J. Carthy, T. Kechadi and M. Crosbie, "Cloud forensics," In: IFIP International Conference on Digital
Forensics, Springer, Berlin, Heidelberg, 2011.
• P. M. Mell and T. Grance, "The NIST definition of cloud computing: Recommendations of the National
Institute of Standards and Technology," NIST Special Publication 800-145, 2011.
15
Advanced Operating System
(Topic: Process Scheduling)

16
Introduction
• When a computer is multi-programmed, it frequently has
multiple processes or threads competing for the CPU at the
same time

• If only one CPU is available, a choice has to be made as to
which process to run next

• The part of the operating system that makes the choice is
called the scheduler, and the algorithm it uses is called the
scheduling algorithm

17
Objectives of good scheduling policy
• Fairness
• Efficiency
• Low response time (important for interactive jobs)
• Low turnaround time (important for batch jobs)
• High throughput
• Repeatability (consistent behaviour across runs, even at the cost
of “wasted cycles” or limiting logins)

18
When to do Scheduling? (1/2)
• New process created

• Process exits

• Process blocks on I/O

• I/O interrupt occurs

• Clock interrupt

19
When to do Scheduling? (2/2)
• Non-pre-emptive
Non-pre-emptive algorithms are designed so that once a
process enters the running state, it is not removed from the
processor until it has completed its service time.
• Pre-emptive
If a process is currently using the processor and a new
process with a higher priority enters the ready list, the
process on the processor is removed and returned
to the ready list until it is once again the highest-priority
process in the system.

20
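The pre-emptive case can be sketched in a few lines of Python. This is a minimal illustration only; the process names and priority values are made up for the example, and the ready list is kept as a min-heap so that the highest-priority process (smallest number) is always at the front:

```python
import heapq

def dispatch(running, ready):
    """Decide which process holds the CPU after a new arrival.

    running: (priority, name) of the current process; smaller number
    means higher priority. ready: a min-heap of (priority, name) tuples.
    """
    if ready and ready[0][0] < running[0]:
        heapq.heappush(ready, running)   # pre-empt: current process rejoins the ready list
        return heapq.heappop(ready)      # highest-priority process takes the CPU
    return running

# Illustrative scenario: a priority-2 process arrives while priority 5 is running.
ready = []
running = (5, "editor")
heapq.heappush(ready, (2, "interrupt-handler"))
running = dispatch(running, ready)       # the newcomer pre-empts the editor
```

The pre-empted process stays on the ready list and regains the CPU once it is again the highest-priority process, as described above.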
Scheduling in Batch Systems
• Shortest Job First (SJF)  Non pre-emptive

• First-Come First-Served (FCFS)  Non pre-emptive

• Shortest Remaining Time Next  Pre-emptive

21
Scheduling in Batch Systems
(Shortest Job First SJF)
• If we assume the run times of the jobs to be known in
advance, the non-preemptive batch SJF algorithm picks the
shortest job first.

• Note that this algorithm is optimal only when all the jobs are
available simultaneously.

• The difficulty with this algorithm is predicting how much time
a job will use.

22
Scheduling in Batch Systems
(Shortest Job First SJF)

For example, consider four jobs with run times of 8, 4, 4 and 4,
all available at the same time:

(a) FIFO order: turnaround times are 8, 12, 16 and 20, so the
average turnaround is (8 + 12 + 16 + 20)/4 = 14

(b) SJF order: turnaround times are 4, 8, 12 and 20, so the
average turnaround is (4 + 8 + 12 + 20)/4 = 11

• Turnaround time is the time from submission to completion of a process
23
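The FIFO-versus-SJF comparison above can be checked with a few lines of Python. This is a sketch under the slide's assumptions: all four jobs arrive at time 0 with run times 8, 4, 4 and 4, and each job runs to completion in the given order:

```python
def avg_turnaround(run_times):
    """Average turnaround time when every job is available at time 0
    and jobs run to completion in the given order."""
    clock, total = 0, 0
    for run in run_times:
        clock += run      # this job finishes at the current clock value
        total += clock    # turnaround = completion time minus arrival time (0)
    return total / len(run_times)

fifo = avg_turnaround([8, 4, 4, 4])           # jobs in submission order
sjf = avg_turnaround(sorted([8, 4, 4, 4]))    # shortest job first
```

Running the jobs shortest-first lets the three short jobs finish early, which is exactly why SJF minimizes average turnaround when all jobs are available simultaneously.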
Scheduling in Interactive Systems
• Round-robin scheduling
• Priority scheduling
• Multiple queues
• Shortest process next
• Guaranteed scheduling
• Lottery scheduling
• Fair-share scheduling
Scheduling in Interactive Systems
(Round Robin Scheduling)
• Each process is assigned a time interval, called its quantum

• If the process is still running at the end of its quantum, the
CPU is pre-empted and given to another process

• If the process blocks or finishes before its quantum has
elapsed, the CPU switches to another process at that
moment
Scheduling in Interactive Systems
(Round Robin Scheduling)

(Figure: the list of runnable processes, and the same list after B uses up its quantum.)
Scheduling in Interactive Systems
(Round Robin Scheduling)
EXAMPLE DATA:

Process   Arrival Time   Service Time
   1           0              8
   2           1              4
   3           2              9
   4           3              5

Round Robin, quantum = 4, no priority-based preemption:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26

Average turnaround = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25


27
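The schedule above can be reproduced with a small simulator. This is a sketch, not a production scheduler; the tie-breaking rule (jobs that arrive during a time slice enter the queue ahead of the pre-empted job) is an assumption chosen to match the chart:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin scheduling.

    jobs: list of (name, arrival, service) tuples.
    Returns a dict mapping each name to its completion time.
    """
    jobs = sorted(jobs, key=lambda j: j[1])   # order by arrival time
    remaining = {name: service for name, _, service in jobs}
    completion = {}
    ready = deque()
    t = 0
    i = 0                                     # index of next job to arrive
    while len(completion) < len(jobs):
        while i < len(jobs) and jobs[i][1] <= t:
            ready.append(jobs[i][0])          # admit jobs that have arrived
            i += 1
        if not ready:
            t = jobs[i][1]                    # CPU idle until next arrival
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])   # run one slice (or less, if finishing)
        t += run
        remaining[name] -= run
        # Jobs arriving during this slice join the queue before the pre-empted job.
        while i < len(jobs) and jobs[i][1] <= t:
            ready.append(jobs[i][0])
            i += 1
        if remaining[name] == 0:
            completion[name] = t
        else:
            ready.append(name)                # pre-empted: back to the tail
    return completion

completion = round_robin([(1, 0, 8), (2, 1, 4), (3, 2, 9), (4, 3, 5)], quantum=4)
```

With the slide's data this yields completion times 8, 20, 25 and 26 for processes 2, 1, 4 and 3 respectively, matching the chart.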
Criteria For Performance Evaluation

UTILIZATION: The fraction of time a device is in use (ratio of in-use
time to total observation time)

THROUGHPUT: The number of job completions in a period of time (jobs/second)

SERVICE TIME: The time required by a device to handle a request (seconds)

QUEUEING TIME: Time spent on a queue waiting for service from the device (seconds)

RESIDENCE TIME: The time spent by a request at a device (seconds)
RESIDENCE TIME = SERVICE TIME + QUEUEING TIME

RESPONSE TIME: Time used by a system to respond to a user job (seconds)

THINK TIME: The time spent by the user of an interactive system to figure out the
next request (seconds)

The goal is to optimize both the average and the amount of variation of these metrics
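A tiny worked example of these definitions (every number below is hypothetical, chosen only to illustrate the formulas):

```python
# Hypothetical measurements for one device over a 100-second observation window.
observation = 100.0     # total observation time (seconds)
busy_time = 62.0        # seconds the device was in use
completions = 31        # jobs finished during the window

utilization = busy_time / observation    # fraction of time in use
throughput = completions / observation   # jobs per second

# Per-request times (seconds): residence = service + queueing
service_time = 0.8
queueing_time = 1.4
residence_time = service_time + queueing_time
```

Here the device is 62% utilized, completes 0.31 jobs/second, and a request that waits 1.4 s in the queue before an 0.8 s service spends 2.2 s total at the device.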


Thank you
Questions?

29
