CONTENTS

Ex. No   Date   Name of the Experiment   Page No   Marks   Faculty Sign

AVERAGE:
LIST OF EXPERIMENTS :                                                30 PERIODS
1. Install Wireshark and explore the various protocols; analyze HTTP vs HTTPS.
2. Identify the vulnerabilities using the OWASP ZAP tool.
3. Create a simple REST API using Python for the following operations:
   a. GET   b. PUT   c. POST   d. DELETE
4. Install Burp Suite to detect the following vulnerabilities:
   a. SQL injection   b. XSS
5. Attack the website using a social engineering method.
AIM :
To install Wireshark and explore the various protocols, to analyze the difference between HTTP
and HTTPS, and to analyze the security mechanisms embedded in different protocols.
PROCEDURE :
Step 1 : Open Wireshark and start capturing traffic on the desired network interface.
Step 5 : Record the result.
PROGRAM :
from scapy.all import sniff, IP, TCP, UDP

def packet_callback(packet):
    if IP in packet:
        ip_src = packet[IP].src
        ip_dst = packet[IP].dst
        if TCP in packet:
            protocol = "TCP"
            src_port = packet[TCP].sport
            dst_port = packet[TCP].dport
        elif UDP in packet:
            protocol = "UDP"
            src_port = packet[UDP].sport
            dst_port = packet[UDP].dport
        else:
            return
        # Print a one-line summary of each captured packet
        print(f"{protocol} {ip_src}:{src_port} -> {ip_dst}:{dst_port}")

# Start sniffing packets on the specified network interface
# (replace 'eth0' with your actual interface)
sniff(iface='eth0', prn=packet_callback, store=0)
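To run the sniffer, save the code as (for example) sniffer.py, install Scapy if needed (pip install scapy), and run it with root privileges, since raw packet capture requires them:

sudo python3 sniffer.py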
INSTALLING WIRESHARK :
Installation Components :
On the Choose Components page of the installer you can select from the following:
Wireshark - The network protocol analyzer that we all know and mostly love.
TShark - A command-line network protocol analyzer. If you haven’t tried it you should.
External Capture (extcap) - External Capture Interfaces
Install Location:
By default Wireshark installs into %ProgramFiles%\Wireshark on 32-bit Windows and
%ProgramFiles64%\Wireshark on 64-bit Windows. This expands to C:\Program Files\Wireshark
on most systems.
Installing Npcap :
The Wireshark installer contains the latest Npcap installer.
If you don’t have Npcap installed you won’t be able to capture live network traffic but you will
still be able to open saved capture files. By default the latest version of Npcap will be installed. If
you don’t wish to do this or if you wish to reinstall Npcap you can check the Install Npcap box as
needed.
/S runs the installer or uninstaller silently with default values. The silent installer will not
install Npcap.
/desktopicon installation of the desktop icon, =yes - force installation, =no - don’t install,
otherwise use default settings. This option can be useful for a silent installer.
/quicklaunchicon installation of the quick launch icon, =yes - force installation, =no - don’t
install, otherwise use default settings.
/D sets the default installation directory ($INSTDIR), overriding InstallDir and
InstallDirRegKey. It must be the last parameter used in the command line and must not
contain any quotes even if the path contains spaces.
/NCRC disables the CRC check. We recommend against using this flag.
/EXTRACOMPONENTS comma separated list of optional components to install. The
following extcap binaries are supported.
Example:
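A representative silent install might look like this (the installer filename is illustrative; use the exact name of the file you downloaded, and note that /D must come last and unquoted):

Wireshark-win64-latest.exe /S /desktopicon=yes /quicklaunchicon=no /D=C:\Program Files\Wireshark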
Running the installer without any parameters shows the normal interactive installer.
Update Npcap :
Wireshark updates may also include a new version of Npcap. Instructions for manually updating
Npcap can be found on the Npcap web site at [Link] You may have to reboot your machine
after installing a new Npcap version.
In line number 17 you can see the response we get back, with full DNS resolution.
Now if you look at packet number 4, it is a GET request. HTTP primarily uses two commands:
1: GET: To request information from the server.
2: POST: To send information (for example, when we submit a form, the data we fill in is sent via POST).
Here I am trying to get [Link] via HTTP protocol 1.1 (a newer version of the protocol, 2.0, is now
available).
Then at line number 5 we see the acknowledgment, and at line number 6 the server was able to
find that page and sent back HTTP status code 200.
You will see some more info for packet 6, such as the server type (Apache), the content type
(HTML), and the content length.
Then you will see a bunch of continuation packets; this is due to the TCP window, where you do
not get an acknowledgement for each and every packet.
Then, if we click on any application data, that data is unreadable to us; it is all gibberish. But with
Wireshark we can decrypt that data; the only thing we need is the private key of the server.
To decrypt the traffic, supply the following details:
IP address: [Link]
Port: 443
Protocol: http
Key file: [Link].tgz
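These values are entered in Wireshark's TLS preferences (Edit -> Preferences -> Protocols -> TLS -> RSA keys list; older releases call the protocol SSL). The exact dialog layout varies by Wireshark version.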
Wireshark comes with several capture and display filters. But a user can create display filters using
protocol header values as well. Use this technique to analyze traffic efficiently.
proto[offset:size(optional)]=value
Following the above syntax, it is easy to create a dynamic capture filter, where proto is the
protocol, offset is the byte offset within that protocol's header, size (optional) is the number of
bytes to compare, and value is the value to match.
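For example, tcp[2:2] = 80 matches packets whose TCP destination-port field (the two bytes at offset 2 of the TCP header) equals 80, and tcp[13] & 2 != 0 matches packets with the SYN flag set.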
Analyzing endpoints :
This feature comes in handy to determine the endpoint generating the highest volume or abnormal
traffic in the network. To analyze the endpoints between two communication devices, do the
following:
Capture traffic and select the packet whose endpoint you wish to check. -> Click Statistics menu ->
Select Endpoints.
MARKS ALLOCATION

Details                         Marks Allotted   Marks Awarded
Pre Lab Questions                     10
Algorithm & Logic Explanation         30
Coding                                30
Execution & Output                    20
Post Lab Questions (Viva)             10
Total                                100
RESULT :
Wireshark analysis revealed distinctions between HTTP and HTTPS, showcasing HTTPS's
encryption for secure data transmission. Security mechanisms like SSL/TLS were observed in
various protocols, ensuring confidentiality and integrity.
[Link]
Identify the vulnerabilities using the OWASP ZAP tool
Date:
AIM :
To identify the vulnerabilities in a web application using the OWASP ZAP tool.
PROCEDURE :
Download and install OWASP ZAP from the official website ([Link]).
Navigate through the web application while OWASP ZAP records and analyzes traffic.
PROGRAM :
Starting ZAP
Once set up, you can start ZAP by clicking the ZAP icon on your Windows desktop or from the
Start menu.
When the app launches, it asks you whether you want to save the session. If you want to use the
current run configuration or test results later, you should persist the session. For now, select
“No, I do not want to persist this session at this moment in time”.
Once you click the “Start” button, the ZAP UI will be launched.
Spidering a web application means crawling all the links and getting the structure of the application.
ZAP provides two spiders for crawling web applications:
Traditional spider:
The traditional ZAP spider discovers links by examining the HTML in responses from the web
application. This spider is fast, but it is not always effective when exploring an AJAX web
application.
AJAX spider:
This is more likely to be effective for AJAX applications. This spider explores the web application
by invoking browsers which then follow the links that have been generated. The AJAX spider is
slower than the traditional spider.
Automated scan :
This option allows you to launch an automated scan against an application just by entering its URL.
If you are new to ZAP, it is best to start with Automated Scan mode.
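An automated scan of this kind can also be run headlessly; for example, ZAP's Docker packaging includes a baseline scan script (the image name reflects current ZAP documentation, and the target URL is illustrative):

docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://example.com

The baseline scan spiders the target and reports the issues found by passive scanning.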
Spiders are a great way to explore the basic site, but they should be combined with manual
exploration to be more effective. This functionality is very useful when your web application needs
a login or contains things like registration forms, etc.
You can launch browsers that are pre-configured to proxy through ZAP via the Quick Start tab.
Browsers launched in this way will also ignore any certificate validation warnings that would
otherwise be reported.
To Manually Explore the web application:
Start ZAP and click on the large ‘Manual Explore’ button in the Quick Start tab.
Enter the full URL of the web application to be explored in the ‘URL to explore’ text box.
Select the browser you would like to use and click the ‘Launch Browser’ button.
This will launch the selected browser with a new profile. Now explore all of the targeted web
applications through this browser. ZAP passively scans all the requests and responses made during
your exploration for vulnerabilities, continues to build the site tree, and records alerts for potential
vulnerabilities found during the exploration.
Once the scan is completed, ZAP generates a list of issues that are found during the scan. These
issues can be seen on the Alerts tab that is located in the bottom pane. All the issues are marked with
colour coded flags. You can also generate an HTML scan report through the ‘Report’ menu option
on the top of the screen.
OUTPUT :
PRELAB VIVA QUESTIONS :
1. Why is proactively identifying and addressing web application vulnerabilities important for
overall cybersecurity?
2. What are the specific system requirements and steps involved in installing OWASP ZAP?
3. How does OWASP ZAP differentiate between passive and active scanning methods for web
applications?
4. Can you name a few examples of common vulnerabilities listed in the OWASP Top Ten?
5. In what manner does OWASP ZAP categorize and prioritize vulnerabilities in its scan reports?
POSTLAB VIVA QUESTIONS :
1. How does OWASP ZAP differentiate between false positives and actual vulnerabilities?
2. Explain the significance of addressing vulnerabilities listed in the OWASP Top Ten.
3. What actions can be taken to remediate SQL injection vulnerabilities identified by OWASP
ZAP?
4. How does OWASP ZAP contribute to a DevSecOps approach in application development?
5. In what ways can continuous monitoring with OWASP ZAP enhance overall web application
security?
MARKS ALLOCATION

Details                         Marks Allotted   Marks Awarded
Pre Lab Questions                     10
Algorithm & Logic Explanation         30
Coding                                30
Execution & Output                    20
Post Lab Questions (Viva)             10
Total                                100
RESULT :
OWASP ZAP generates detailed reports highlighting vulnerabilities such as SQL injection,
XSS, and more, aiding in web application security assessment.
[Link]: 3
Create a simple REST API using Python for the following operations:
a. GET   b. PUT   c. POST   d. DELETE
Date:
AIM :
To create a simple REST API using Python and to perform GET, POST, PUT/PATCH, and
DELETE operations on it.
PROCEDURE :
Step 1 : Install the requests library using pip.
Step 2 : Start the Python REPL and send GET, POST, DELETE, and PATCH requests to the
JSONPlaceholder test API.
Step 3 : Observe the status code and JSON body returned for each operation.
PROGRAM :
To write code that interacts with REST APIs, most Python developers turn to requests to send
HTTP requests. This library abstracts away the complexities of making HTTP requests. It’s one of
the few projects worth treating as if it’s part of the standard library.
To start using requests, you need to install it first. You can use pip to install it:
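$ python -m pip install requests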
Now that you’ve got requests installed, you can start sending HTTP requests.
1. GET :
GET is one of the most common HTTP methods you’ll use when working with REST APIs. This
method allows you to retrieve resources from a given API. GET is a read-only operation, so you
shouldn’t use it to modify an existing resource.
To test out GET and the other methods in this section, you’ll use a service called JSONPlaceholder.
This free service provides fake API endpoints that send back responses that requests can process.
To try this out, start up the Python REPL and run the following commands to send a GET request to
a JSONPlaceholder endpoint:
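A minimal sketch of those commands (the variable names are illustrative; the JSON shown is the todo JSONPlaceholder returns for ID 1):

>>> import requests
>>> api_url = "https://jsonplaceholder.typicode.com/todos/1"
>>> response = requests.get(api_url)
>>> response.json()
{'userId': 1, 'id': 1, 'title': 'delectus aut autem', 'completed': False}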
This code calls requests.get() to send a GET request to /todos/1, which responds with the todo item
with the ID 1. Then you can call .json() on the response object to view the data that came back from
the API.
The response data is formatted as JSON, a key-value store similar to a Python dictionary. It’s a very
popular data format and the de facto interchange format for most REST APIs.
Beyond viewing the JSON data from the API, you can also view other things about the response:
>>> response.status_code
200
>>> response.headers["Content-Type"]
'application/json; charset=utf-8'
Here, you access response.status_code to see the HTTP status code. You can also view the
response’s HTTP headers with response.headers. This dictionary contains metadata about the
response, such as the Content-Type of the response.
2. POST :
Now, take a look at how you use requests to POST data to a REST API to create a new resource.
You’ll use JSONPlaceholder again, but this time you’ll include JSON data in the request. Here’s the
data that you’ll send:
{
"userId": 1,
"title": "Buy milk",
"completed": false
}
This JSON contains information for a new todo item. Back in the Python REPL, run the following
code to create the new todo:
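A minimal sketch of that code, reusing the JSON above (JSONPlaceholder echoes the new todo back with a generated id):

>>> import requests
>>> api_url = "https://jsonplaceholder.typicode.com/todos"
>>> todo = {"userId": 1, "title": "Buy milk", "completed": False}
>>> response = requests.post(api_url, json=todo)
>>> response.json()
{'userId': 1, 'title': 'Buy milk', 'completed': False, 'id': 201}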
>>> response.status_code
201
First, you create a dictionary containing the data for your todo. Then you pass this dictionary to the
json keyword argument of requests.post(). When you do this, requests.post() automatically sets the
request’s HTTP header Content-Type to application/json. It also serializes todo into a JSON string,
which it appends to the body of the request.
If you don’t use the json keyword argument to supply the JSON data, then you need to set Content-
Type accordingly and serialize the JSON manually. Here’s an equivalent version to the previous
code:
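A sketch of that equivalent version (headers set and JSON serialized by hand):

>>> import requests, json
>>> api_url = "https://jsonplaceholder.typicode.com/todos"
>>> todo = {"userId": 1, "title": "Buy milk", "completed": False}
>>> headers = {"Content-Type": "application/json"}
>>> response = requests.post(api_url, data=json.dumps(todo), headers=headers)
>>> response.json()
{'userId': 1, 'title': 'Buy milk', 'completed': False, 'id': 201}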
>>> response.status_code
201
In this code, you add a headers dictionary that contains a single header Content-Type set to
application/json. This tells the REST API that you’re sending JSON data with the request.
You then call requests.post(), but instead of passing todo to the json argument, you first call
json.dumps(todo) to serialize it. After it’s serialized, you pass it to the data keyword argument. The
data argument tells requests what data to include in the request. You also pass the headers
dictionary to requests.post() to set the HTTP headers manually.
When you call requests.post() like this, it has the same effect as the previous code but gives you
more control over the request.
Note: json.dumps() comes from the json package in the standard library. This package provides
useful methods for working with JSON in Python.
Once the API responds, you call response.json() to view the JSON. The JSON includes a generated
id for the new todo. The 201 status code tells you that a new resource was created.
3. DELETE :
Last but not least, if you want to completely remove a resource, then you use DELETE. Here’s the
code to remove a todo:
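A minimal sketch of that code (the todo ID is illustrative):

>>> import requests
>>> api_url = "https://jsonplaceholder.typicode.com/todos/1"
>>> response = requests.delete(api_url)
>>> response.json()
{}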
>>> response.status_code
200
You call requests.delete() with an API URL that contains the ID for the todo you would like to
remove. This sends a DELETE request to the REST API, which then removes the matching
resource. After deleting the resource, the API sends back an empty JSON object indicating that the
resource has been deleted.
The requests library is an awesome tool for working with REST APIs and an indispensable part of
your Python tool belt. In the next section, you’ll use it to modify an existing resource with PATCH.
4. PATCH :
Next up, you’ll use requests.patch() to modify the value of a specific field on an existing todo.
PATCH differs from PUT in that it doesn’t completely replace the existing resource. It only
modifies the values set in the JSON sent with the request.
You’ll use the same todo from the last example to try out requests.patch(). Here are the current
values:
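A minimal sketch of the session (the todo ID and new title are illustrative):

>>> import requests
>>> api_url = "https://jsonplaceholder.typicode.com/todos/10"
>>> todo = {"title": "Mow lawn"}
>>> response = requests.patch(api_url, json=todo)
>>> response.json()
{'userId': 1, 'id': 10, 'title': 'Mow lawn', 'completed': True}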
>>> response.status_code
200
When you call response.json(), you can see that title was updated to Mow lawn.
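The experiment title also asks you to create a REST API, not just consume one. Below is a minimal sketch of such a service using Flask (the framework is suggested by the viva questions; the route names, in-memory store, and sample data are illustrative assumptions):

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory "database" of todos, keyed by id (illustrative data)
todos = {1: {"userId": 1, "title": "Buy milk", "completed": False}}
next_id = 2

@app.route("/todos/<int:todo_id>", methods=["GET"])
def get_todo(todo_id):
    # Read a single resource; 404 if it does not exist
    todo = todos.get(todo_id)
    if todo is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(todo), 200

@app.route("/todos", methods=["POST"])
def create_todo():
    # Create a new resource from the JSON request body
    global next_id
    todos[next_id] = request.get_json()
    body = dict(todos[next_id], id=next_id)
    next_id += 1
    return jsonify(body), 201

@app.route("/todos/<int:todo_id>", methods=["PUT", "PATCH"])
def update_todo(todo_id):
    # Update (merge) fields of an existing resource
    if todo_id not in todos:
        return jsonify({"error": "not found"}), 404
    todos[todo_id].update(request.get_json())
    return jsonify(todos[todo_id]), 200

@app.route("/todos/<int:todo_id>", methods=["DELETE"])
def delete_todo(todo_id):
    # Remove the resource; respond with an empty JSON object
    todos.pop(todo_id, None)
    return jsonify({}), 200

if __name__ == "__main__":
    app.run(debug=True)

You can then exercise it with the same requests calls shown above, pointed at http://127.0.0.1:5000/todos.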
OUTPUT :
PRELAB VIVA QUESTIONS :
1. What is the significance of understanding basic REST API operations for web development?
2. How do HTTP methods (GET, POST, PUT, DELETE) map to CRUD (Create, Read, Update,
Delete) operations in the context of REST APIs?
3. What are the key considerations when designing a simple REST API using Python?
4. How can you ensure proper error handling and status code responses in a REST API?
5. In what scenarios would you choose one HTTP method over another in a RESTful service
design?
POSTLAB VIVA QUESTIONS :
1. How does the chosen framework (Flask or Django) influence the simplicity and flexibility of the
REST API design?
2. Explain the role of HTTP status codes in conveying the success or failure of REST API
operations.
3. What measures can be taken to ensure the security of a simple REST API, considering common
vulnerabilities?
4. How would you handle authentication and authorization in the context of your RESTful service?
5. In what ways can the REST API design be improved to enhance scalability and maintainability?
MARKS ALLOCATION

Details                         Marks Allotted   Marks Awarded
Pre Lab Questions                     10
Algorithm & Logic Explanation         30
Coding                                30
Execution & Output                    20
Post Lab Questions (Viva)             10
Total                                100
RESULT :
After running the program, you can use tools like curl or Postman to interact with the API, receiving
appropriate responses for each operation.
[Link]
Install Burp Suite to detect the following vulnerabilities:
a. SQL injection   b. XSS
Date:
AIM :
Install and use Burp Suite to detect SQL injection and XSS vulnerabilities.
PROCEDURE :
Step 1 : Installation :
Download and install Burp Suite from the official website ([Link]).
Configure your browser to use Burp as a proxy.
Launch Burp Suite and intercept traffic.
Navigate through the application to identify and analyze requests.
PROGRAM :
Download the installer for Burp Suite Enterprise Edition from the download page for the latest
stable release.
Run the installer and click Next to display the Select Destination Directory page.
Windows:
Extract the installer burpsuite_enterprise_windows-x64_vYYYY_MM.exe from the installer
zip file.
Right-click the installer file and select Run as administrator.
Linux:
Extract the installer burpsuite_enterprise_linux_vYYYY_MM.sh from the installer zip file.
From the command line, run sudo sh <installer-sh-file> -c.
The destination directory is the directory in which the Enterprise server itself will be installed.
Enter or select a directory and then click Next to display the Installation options screen.
If you want to run the Enterprise server, web server, and scanning machines all on the same
computer, make sure that both the Running the Enterprise server and web server and
Running scans boxes are selected.
If you want to run scans on a separate machine to the Enterprise server and web server,
uncheck the Running scans box.
The logs directory is the folder that Burp Suite Enterprise Edition saves all generated logs to.
Enter or select a directory and then click Next to display the Data Directory page.
Step 6: Specify a data directory
The data directory is the folder that Burp Suite Enterprise Edition saves application data to.
Enter the Username of the system user (that is, the user on your machine as opposed to a Burp
Suite Enterprise Edition user) that you want to run Burp Suite Enterprise Edition processes under.
If this user does not already exist on your system then the installer creates a user at the end of the
process with the default name burpsuite.
The web server port is the port through which you can access the Burp Suite Enterprise Edition
application in your browser. By default, this is set to port 8080 if you are using the embedded
database or 8443 if you are using an external database. However, you can specify a different port
number if this port is not available on your machine.
The port must be available for use on the machine that you want to install the
Enterprise and web servers on.
The operating system user must be allowed to bind to that port. On Linux and
MacOS, low-privileged users are unable to bind to low port numbers (such as 80 or
443). If you want to use a low port number, you should configure port redirection at
the OS level.
Click Next.
Step 10: Specify a database backups directory
The database backups directory is the folder that Burp Suite Enterprise Edition backs up the
embedded database to.
After installation :
Now that you have installed Burp Suite Enterprise Edition, you need to complete the final part of
the configuration in the app itself. Access the app in your browser. By default, this should be
[Link] or [Link]
SQL injection vulnerabilities occur when an attacker can interfere with the queries that an
application makes to its database. You can use Burp to test for these vulnerabilities:
In Burp Suite Professional, use Burp Scanner to automatically flag potential SQL injection
vulnerabilities.
Use Burp Intruder to insert a list of SQL fuzz strings into a request. This may enable you to
change the way SQL commands are executed.
Steps
You can follow this process using a lab with a SQL injection vulnerability, for example the lab
"SQL injection vulnerability in WHERE clause allowing retrieval of hidden data".
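In that lab, a product category parameter is vulnerable: submitting a value such as ' OR 1=1-- turns the WHERE clause into a condition that is always true, so the application also returns products that were meant to be hidden. Fuzz strings for Intruder therefore typically include variations on the single quote, ' OR 1=1--, and comment sequences such as --.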
If you're using Burp Suite Professional, you can use Burp Scanner to test for SQL injection
vulnerabilities:
Identify a request that you want to investigate.
In Proxy > HTTP history, right-click the request and select Do active scan. Burp Scanner
audits the application.
Review the Issues list on the Dashboard to identify any SQL injection issues that Burp
Scanner flags.
You can alternatively use Burp Intruder to test for SQL injection vulnerabilities. This process also
enables you to closely investigate any issues that Burp Scanner has identified:
First, ensure that Burp is correctly configured with your browser. With intercept turned off in the
Proxy "Intercept" tab, visit the web application you are testing in your browser.
Visit the page of the website you wish to test for XSS vulnerabilities.
Return to Burp. In the Proxy "Intercept" tab, ensure "Intercept is on".
Enter some appropriate input in to the web application and submit the request.
The request will be captured by Burp. You can view the HTTP request in the Proxy "Intercept"
tab. You can also locate the relevant request in various Burp tabs without having to use the
intercept function; for example, requests are logged and detailed in the "HTTP history" tab within the
"Proxy" tab. Right-click anywhere on the request to bring up the context menu and click "Send to
Repeater".
Go to the "Repeater" tab. Here we can input various XSS payloads into the input of the web
application. We can test various inputs by editing the "Value" of the appropriate parameter in the
"Raw" or "Params" tabs. A simple payload such as <s> can often be used to check for vulnerabilities.
In this example we have used a payload that attempts to perform a proof-of-concept pop-up in our
browser. Click "Go".
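A classic payload of this kind is <script>alert(1)</script>; if the application reflects it unmodified, the browser executes it and shows the pop-up.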
We can assess whether the attack payload appears unmodified in the response. If so, the application
is almost certainly vulnerable to XSS. You can find the response quickly using the search bar at the
bottom of the response panel. The highlighted text is the result of our search.
Right-click on the response to bring up the context menu and click "Show response in browser" to
copy the URL. You can also use "Copy URL" or "Request in browser".
PRELAB VIVA QUESTIONS :
1. Why is it important to actively identify and address SQL injection and XSS vulnerabilities in
web applications?
2. What are the key features of Burp Suite that make it a popular tool for web application security
testing?
3. How does SQL injection differ from XSS in terms of attack vectors and potential impact on web
applications?
4. What are the common indicators of SQL injection and XSS vulnerabilities in web application
code?
5. How can web developers prevent SQL injection and XSS vulnerabilities during the software
development life cycle?
POSTLAB VIVA QUESTIONS :
1. How can the information obtained from Burp Suite be used to exploit SQL injection
vulnerabilities?
2. Explain the importance of false positive and false negative results when using Burp Scanner.
3. What steps can be taken to remediate SQL injection vulnerabilities identified by Burp Suite?
4. Describe the impact of a successful XSS attack on a web application and its users.
5. How can Burp Suite be integrated into the development process to ensure ongoing security
testing of web applications?
MARKS ALLOCATION

Details                         Marks Allotted   Marks Awarded
Pre Lab Questions                     10
Algorithm & Logic Explanation         30
Coding                                30
Execution & Output                    20
Post Lab Questions (Viva)             10
Total                                100
RESULT :
Burp Suite provides detailed reports highlighting potential SQL injection and XSS vulnerabilities in
the tested web application.
[Link]: 5
Attack the website using a social engineering method
Date:
AIM :
To attack a website using a social engineering (phishing) method.
PROCEDURE :
Step 1 : Gather target email addresses for the organization using CrossLinked.
Step 2 : Link GoPhish with an SMTP server and configure the spoofed sender.
Step 3 : Build the phishing email template and the spoofed landing page.
Step 4 : Launch the phishing campaign and observe the captured credentials.
PROGRAM :
Now, knowing how email addresses are structured, we can use CrossLinked (a tool available on
GitHub). The program will look up every person associated with the organization via LinkedIn and
then generate an entire list of email addresses to send a phishing email to.
Armed with the list of targets, now we can go phishing. We can use GoPhish, which is essentially a
one-stop-shop for conducting a phishing campaign.
1: Linking GoPhish with an SMTP Server
SendinBlue is an email marketing platform for sending and automating email marketing campaigns.
Unlike other email marketing platforms, which require you to authenticate your organization’s
domain, anyone can use SendinBlue with zero authentication requirements. Within SendinBlue, we
generated the SMTP server name and port.
2: Spoofing the Sender :
Now GoPhish has the ability to send emails using SendinBlue’s SMTP server. Here is where we
configure who the email is “supposed” to be coming from. In this instance we are sending it
“from” Igal. If you want to make it look like it’s coming from the CEO of the company, all you
need is their email address to put in the “From” field.
You can even send a test email within GoPhish to check your configurations.
And it works! We wanted to make it look like we were sending an email from Igal Iytzki, and the
only notable difference is that it says it’s via [Link].
Now we can upload the entire list of email addresses that GitHub CrossLinked generated.
5: Perfecting the Spoofed Brand
Let’s say we want to harvest credentials of the targets’ Facebook accounts. So we want to send
them an email that looks like it’s coming from Facebook. By importing a legitimate email from
Facebook requesting its users to reset their password, we can create a spoofing email that looks
almost indistinguishable from the real thing.
Notice the “Change Links to Landing Page” option: GoPhish will automatically change all the links
within the email to point to the “fake” reset password page (otherwise known as a landing page).
After we imported the Facebook page, this is how it looks in the email template editor.
Step 4: Creating the Phishing Site
Now we need to create the actual spoofed Facebook reset-password page. There are a few
ways to do this. More advanced attackers will buy a domain that is almost the same as the
legitimate site, e.g., [Link] as opposed to [Link]. Another way is to use a tool
called ZPhisher. Just select Option 1, and it will generate the spoofed reset password page and
will also allow you to choose where you want to host it, either Ngrok or CloudFlare.
In GoPhish, you can configure the spoofed reset-password landing page to capture any
submitted data and passwords that the target enters, and also to redirect targets to a legitimate page
in the next step of the reset-password flow so they do not suspect that their data has been stolen.
You are now ready to send out your phishing campaign. You can schedule exactly when you want
the email to go out, as the timing of the campaign is another crucial factor in its success. Should the
email be sent out when members of the organization are just getting into the office, during their
lunch break, or perhaps right before they sign off for the day?
PRELAB VIVA QUESTIONS :
1. Reflect on the ethical dilemmas and challenges encountered during the social engineering
simulation. How might these insights inform future security strategies?
2. Evaluate the effectiveness of the educational measures implemented to raise awareness about
social engineering among employees.
3. Discuss any legal or policy changes that could be recommended based on the outcomes and
findings of the social engineering lab exercise.
4. How can organizations continuously adapt and improve their defenses against evolving social
engineering tactics and techniques?
5. Reflect on the broader implications of social engineering attacks in the context of user trust,
privacy, and data protection. What measures can be taken to rebuild trust after such incidents?
MARKS ALLOCATION

Details                         Marks Allotted   Marks Awarded
Pre Lab Questions                     10
Algorithm & Logic Explanation         30
Coding                                30
Execution & Output                    20
Post Lab Questions (Viva)             10
Total                                100
RESULT :
The social engineering lab exercise highlighted the importance of awareness, education, and ethical
considerations in addressing potential vulnerabilities within an organization's human factor,
contributing to a more resilient cybersecurity posture.