0% found this document useful (0 votes)
94 views61 pages

Final Was Record

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
94 views61 pages

Final Was Record

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd

CCS374 - WEB APPLICATION SECURITY

CONTENTS

PAGE FACULTY
[Link] DATE NAME OF THE EXPERIMENT MARKS
NO SIGN

Install wireshark and explore the various protocols


a. Analyze the difference between HTTP vs
HTTPS
1
b. Analyze the various security mechanisms
embedded with different protocols.

Identify the vulnerabilities using OWASP ZAP


2
tool

Create simple REST API using python for


following operation .
a. GET
3
b. PUSH
c. POST
d. DELETE
Install Burp Suite to do following
4 vulnerabilities:

Attack the website using Social Engineering


5 method
a. SQL injection
b. cross-site scripting (XSS)

AVERAGE:
LIST OF EXPERIMENTS :

30 PERIODS

1. Install wireshark and explore the various protocols

a. Analyze the difference between HTTP vs HTTPS

b. Analyze the various security mechanisms embedded with different protocols.

2. Identify the vulnerabilities using OWASP ZAP tool

3. Create simple REST API using python for following operation

a. GET

b. PUSH

c. POST

d. DELETE

4. Install Burp Suite to do following vulnerabilities:

a. SQL injection

b. cross-site scripting (XSS)

[Link] the website using Social Engineering method


[Link] Install wireshark and explore the various protocols
Date: a. Analyze the difference between HTTP vs HTTPS
b. Analyze the various security mechanisms embedded
with different protocols.

AIM :

To Install wireshark and explore the various protocols , to Analyze the difference between HTTP
vs HTTPS and to Analyze the various security mechanisms embedded with different protocols.

PROCEDURE :

Step 1 : Install Wireshark:

 Download and install Wireshark from the official website ([Link]

Step 2 : Capture Traffic:

 Open Wireshark and start capturing traffic on the desired network interface.

Step 3 : HTTP vs HTTPS Analysis:

 Filter captured packets for HTTP and HTTPS protocols.


 Analyze differences in packet structure, content, and encryption methods.

Step 4 : Security Mechanism Analysis:

 Capture packets for various protocols (e.g., TCP, UDP).


 Investigate security features like SSL/TLS, IPsec, and examine their implementations.

Step 5 : Result:

 Identify differences in packet structures, encryption levels, and security mechanisms


between HTTP and HTTPS.
 Analyze the presence of security features in different protocols.

PROGRAM :

from [Link] import sniff, IP, TCP, UDP

def packet_callback(packet):
if IP in packet:
ip_src = packet[IP].src
ip_dst = packet[IP].dst

if TCP in packet:
protocol = "TCP"
src_port = packet[TCP].sport
dst_port = packet[TCP].dport
elif UDP in packet:
protocol = "UDP"
src_port = packet[UDP].sport
dst_port = packet[UDP].dport
else:
return

print(f"{protocol} Packet: {ip_src}:{src_port} -> {ip_dst}:{dst_port}")

# Start sniffing packets on the specified network interface (replace 'eth0' with your actual interface)
sniff(iface='eth0', prn=packet_callback, store=0)

INSTALLING WIRESHARK :

Installing Wireshark under Windows :


The official Windows packages can be downloaded from the Wireshark main page or the
download page. Installer names contain the version and platform. For example, Wireshark-4.3.0-
[Link] installs Wireshark 4.3.0 for Windows on 64-bit Intel processors. The Wireshark installer
includes Npcap which is required for packet capture. Windows packages automatically update. See
Section 2.8, “Updating Wireshark” for details.

Simply download the Wireshark installer from [Link] and


execute it. Official packages are signed by Wireshark Foundation. You can choose to install
several optional components and select the location of the installed package. The default settings
are recommended for most users.

Installation Components :
On the Choose Components page of the installer you can select from the following:

 Wireshark - The network protocol analyzer that we all know and mostly love.
 TShark - A command-line network protocol analyzer. If you haven’t tried it you should.
 External Capture (extcap) - External Capture Interfaces

 Androiddump - Provide capture interfaces from Android devices.


 Etwdump - Provide an interface to read Event Tracing for Windows (ETW) event trace
(ETL).
 Randpktdump - Provide an interface to the random packet generator. (see also randpkt)
 Sshdump, Ciscodump, and Wifidump - Provide remote capture through SSH. (tcpdump,
Cisco EPC, wifi)
 UDPdump - Provide capture interface to receive UDP packets streamed from network
devices.
Additional Tasks :

 Wireshark Start Menu Item - Add a shortcut to the start menu.


 Wireshark Desktop Icon - Add a Wireshark icon to the desktop.
 Associate trace file extensions with Wireshark - Associate standard network trace files to
Wireshark.

Install Location:
By default Wireshark installs into %ProgramFiles%\Wireshark on 32-bit Windows and
%ProgramFiles64%\Wireshark on 64-bit Windows. This expands to C:\Program Files\Wireshark
on most systems.

Installing Npcap :
The Wireshark installer contains the latest Npcap installer.

If you don’t have Npcap installed you won’t be able to capture live network traffic but you will
still be able to open saved capture files. By default the latest version of Npcap will be installed. If
you don’t wish to do this or if you wish to reinstall Npcap you can check the Install Npcap box as
needed.

For more information about Npcap see [Link] and


[Link]

Windows installer command line options :


For special cases, there are some command line parameters available:

 /S runs the installer or uninstaller silently with default values. The silent installer will not
install Npcap.
 /desktopicon installation of the desktop icon, =yes - force installation, =no - don’t install,
otherwise use default settings. This option can be useful for a silent installer.
 /quicklaunchicon installation of the quick launch icon, =yes - force installation, =no - don’t
install, otherwise use default settings.
 /D sets the default installation directory ($INSTDIR), overriding InstallDir and
InstallDirRegKey. It must be the last parameter used in the command line and must not
contain any quotes even if the path contains spaces.
 /NCRC disables the CRC check. We recommend against using this flag.
 /EXTRACOMPONENTS comma separated list of optional components to install. The
following extcap binaries are supported.

 androiddump - Provide interfaces to capture from Android devices


 ciscodump - Provide interfaces to capture from a remote Cisco router through SSH
 randpktdump - Provide an interface to generate random captures using randpkt
 sshdump - Provide interfaces to capture from a remote host through SSH using a remote
capture binary
 udpdump - Provide a UDP receiver that gets packets from network devices

Example:

> [Link] /NCRC /S /desktopicon=yes /quicklaunchicon=no /D=C:\Program


Files\Foo

> [Link] /S /EXTRACOMPONENTS=sshdump,udpdump

Running the installer without any parameters shows the normal interactive installer.

Manual Npcap Installation :


As mentioned above, the Wireshark installer also installs Npcap. If you prefer to install Npcap
manually or want to use a different version than the one included in the Wireshark installer, you
can download Npcap from the main Npcap site at [Link]

Update Npcap :
Wireshark updates may also include a new version of Npcap. Manual Npcap updates instructions
can be found on the Npcap web site at [Link] You may have to reboot your machine
after installing a new Npcap version.

a. Analyze the difference between HTTP vs HTTPS :


Before start analyzing any packet, please turn off “Allow subdissector to reassemble
TCP streams”(Preference → Protocol → TCP)(This will prevent TCP packet to split
into multiple PDU unit)
As you can see I am using HTTP so that the encryption will not be hidden behind TLS.

As you can see at line number 13 standard DNS resolution is happening.

In line number 17 you see the response we are getting back with full DNS resolution

Now if you look at Packet number 4 i.e is get request,HTTP primarily used two command

1: GET: To retrieve information

2: POST: To send information(For eg: when we submit some form we fill some data i.e is POST)
Here I am trying to get [Link] via HTTP protocol 1.1(The new version of protocol is now
available i.e 2.0)

Then at line number 5 we see the acknowledgment as well as line number 6 server was able to
found that page and send HTTP status code 200.

If you want more info about HTTP status code

You will see some more info like for packet 6, like Server type is Apache, content type is HTML,
how long is the content length is,

Then you will see bunch of continuation that is due to TCP window where you don’t get
acknowledgement for each and every packet

and at that top some usual TCP handshake

Now lets try to dissect HTTPS capture

as you can see

 3 way handshake is happening,


 hello from SSL client and then acknowledgement from Server
 Server Hello and then ACK
 Exchanging some key and cipher information
 Finally it actually start exchanging data.

Then if we click on any application data that data is unreadable to us it’s all gibberish but with
wireshark we can decrypt that data only thing we need is the Private Key of the server.

Once again go to Preference → Protocol → SSL

Add these value

IP address: [Link]

Port: 443

Protocol: http

Key File:
[Link]
.tgz

as you can see data is now decrypted


Analyze the various security mechanisms embedded with different protocols.

Capture filters with protocol header values

Wireshark comes with several capture and display filters. But a user can create display filters using
protocol header values as well. Use this technique to analyze traffic efficiently.

proto[offset:size(optional)]=value

Following the above syntax, it is easy to create a dynamic capture filter, where:

 proto = desired protocol


 offset = header value
 size = data length
 value = data you want to find
Some instances are in the following table:

Analyzing endpoints :

This feature comes in handy to determine the endpoint generating the highest volume or abnormal
traffic in the network. To analyze the endpoints between two communication devices, do the
following:

Capture traffic and select the packet whose endpoint you wish to check. -> Click Statistics menu ->
Select Endpoints.

The most traffic-intensive endpoint, as seen in the picture below, is [Link].


OUTPUT :
PRELAB VIVA QUESTIONS :

1. Why is it important to understand network protocols for cybersecurity?


2. What are the potential security risks associated with using HTTP over HTTPS?
3. How do various protocols ensure data confidentiality and integrity?
4. What are the system requirements for installing Wireshark?
5. What are the steps involved in starting a Wireshark capture session?

POSTLAB VIVA QUESTIONS :

1. What encryption algorithm is commonly used in HTTPS to secure data transmission?


2. How does Wireshark help in identifying potential security threats in network traffic?
3. What is the significance of the SSL/TLS handshake in secure communication?
4. Explain how Wireshark can be used to detect and analyze a denial-of-service (DoS) attack.
5. How can network administrators use the insights gained from Wireshark analysis to enhance
network security measures?

MARKS ALLOCATION

Marks Marks
Details Allotted Awarded
Pre Lab Questions 10
Algorithm & Logic Explanation 30
Coding 30
Execution & Output 20
Post Lab Questions (Viva) 10

Total 100

RESULT :

Wireshark analysis revealed distinctions between HTTP and HTTPS, showcasing HTTPS's
encryption for secure data transmission. Security mechanisms like SSL/TLS were observed in
various protocols, ensuring confidentiality and integrity.
[Link]
Date: Identify the vulnerabilities using OWASP ZAP tool

AIM :

To Utilize OWASP ZAP to identify and assess vulnerabilities in a web application.

PROCEDURE :

Step 1 : Install OWASP ZAP:

 Download and install OWASP ZAP from the official website ([Link]

Step 2 : Configure Proxy Settings:

 Set up the browser to use OWASP ZAP as a proxy.

Step 3 : Perform Active Scan:

 Navigate through the web application while OWASP ZAP records and analyzes traffic.

Step 4 : Analyze Results:

 Review the generated reports for identified vulnerabilities.

PROGRAM :
Starting ZAP

Once setup you can start ZAP by clicking the ZAP icon on your Windows desktop or from the start
menu.

When the app launches, it asks you whether you want to save the session or not. If you want to use
the current run configuration or test results later, you should save the session for later. For now let’s
select “No, I do not want to persist this session at this moment in time”.
Once you click the “Start” button, the ZAP UI will be launched.

Spidering the web application :

Spidering a web application means crawling all the links and getting the structure of the application.
ZAP provides two spiders for crawling web applications;

Traditional ZAP spider:

The traditional ZAP spider discovers links by examining the HTML in responses from the web
application. This spider is fast, but it is not always effective when exploring an AJAX web
application.

AJAX spider:

This is more likely to be effective for AJAX applications. This spider explores the web application
by invoking browsers which then follow the links that have been generated. The AJAX spider is
slower than the traditional spider.

Automated scan :

This option allows you to launch an automated scan against an application just by entering the URL.
If you are new to ZAP, it is best to start with Automated Scan mode

To run a Quick Start Automated Scan:


1. Start Zap and click the large ‘Automated Scan’ button in the ‘Quick Start’ tab.
2. Enter the full URL of the web application you want to attack in the ‘URL to attack’ text box.
3. Click the ‘Attack’ button.
Once you click the ‘Attack’ button, ZAP will start crawling the web application with its spider and
passively scan each page it finds. Then ZAP will use the active scanner to attack all of the
discovered pages, functionality and parameters.

Exploring the web application manually

Spiders are a great way to explore the basic site, but they should be combined with manual
exploration to be more effective. This functionality is very useful when your web application needs
a login or contains things like registration forms, etc.

You can launch browsers that are pre-configured to proxy through ZAP via the Quick Start tab.
Browsers launched in this way will also ignore any certificate validation warnings that would
otherwise be reported.
To Manually Explore the web application:

 Start ZAP and click on the large ‘Manual Explore’ button in the Quick Start tab.
 Enter the full URL of the web application to be explored in the ‘URL to explore’ text box.
 Select the browser you would like to use and click the ‘Launch Browser’ button.

This will launch the selected browser with a new profile. Now explore all of the targeted web
applications through this browser. ZAP passively scans all the requests and responses made during
your exploration for vulnerabilities, continues to build the site tree, and records alerts for potential
vulnerabilities found during the exploration.

Inspecting the test results

Once the scan is completed, ZAP generates a list of issues that are found during the scan. These
issues can be seen on the Alerts tab that is located in the bottom pane. All the issues are marked with
colour coded flags. You can also generate an HTML scan report through the ‘Report’ menu option
on the top of the screen.
OUTPUT :
PRELAB VIVA QUESTIONS :

1. Why is proactively identifying and addressing web application vulnerabilities important for
overall cybersecurity?
2. What are the specific system requirements and steps involved in installing OWASP ZAP?
3. How does OWASP ZAP differentiate between passive and active scanning methods for web
applications?
4. Can you name a few examples of common vulnerabilities listed in the OWASP Top Ten?
5. In what manner does OWASP ZAP categorize and prioritize vulnerabilities in its scan reports?

POSTLAB VIVA QUESTIONS :

1. How does OWASP ZAP differentiate between false positives and actual vulnerabilities?
2. Explain the significance of addressing vulnerabilities listed in the OWASP Top Ten.
3. What actions can be taken to remediate SQL injection vulnerabilities identified by OWASP
ZAP?
4. How does OWASP ZAP contribute to a DevSecOps approach in application development?
5. In what ways can continuous monitoring with OWASP ZAP enhance overall web application
security?

MARKS ALLOCATION

Marks Marks
Details Allotted Awarded
Pre Lab Questions 10
Algorithm & Logic
Explanation 30
Coding 30
Execution & Output 20
Post Lab Questions (Viva) 10

Total 100

RESULT :

OWASP ZAP generates detailed reports highlighting vulnerabilities such as SQL injection,
XSS, and more, aiding in web application security assessment.
[Link]: 3 Create simple REST API using python for following operation
Date: a. GET [Link] [Link] d. DELETE

AIM :

To Create a simple REST API using Python for CRUD operations.

PROCEDURE :

Step 1 : Install Flask:

 Run: pip install Flask

Step 2 : Create API Script:

 Create a Python script (e.g., [Link]) with specified routes.

Step 3 : Run Flask App:

 Execute: python [Link]

Step 4 : Test API:

 Use curl or Postman to test each operation.

Step 5 : Interpret Results:

 Observe responses for each operation:


 GET: "Data Retrieved"
 POST: "Data Created"
 PUT: "Data Updated"
 DELETE: "Data Deleted"

PROGRAM :

REST and Python: Consuming APIs

To write code that interacts with REST APIs, most Python developers turn to requests to send
HTTP requests. This library abstracts away the complexities of making HTTP requests. It’s one of
the few projects worth treating as if it’s part of the standard library.

To start using requests, you need to install it first. You can use pip to install it:

$ python -m pip install requests

Now that you’ve got requests installed, you can start sending HTTP requests.
1. GET :

GET is one of the most common HTTP methods you’ll use when working with REST APIs. This
method allows you to retrieve resources from a given API. GET is a read-only operation, so you
shouldn’t use it to modify an existing resource.

To test out GET and the other methods in this section, you’ll use a service called JSONPlaceholder.
This free service provides fake API endpoints that send back responses that requests can process.

To try this out, start up the Python REPL and run the following commands to send a GET request to
a JSONPlaceholder endpoint:

>>> import requests


>>> api_url = "[Link]
>>> response = [Link](api_url)
>>> [Link]()

{'userId': 1, 'id': 1, 'title': 'delectus aut autem', 'completed': False}

This code calls [Link]() to send a GET request to /todos/1, which responds with the todo item
with the ID 1. Then you can call .json() on the response object to view the data that came back from
the API.

The response data is formatted as JSON, a key-value store similar to a Python dictionary. It’s a very
popular data format and the de facto interchange format for most REST APIs.

Beyond viewing the JSON data from the API, you can also view other things about the response:

>>> response.status_code
200

>>> [Link]["Content-Type"]
'application/json; charset=utf-8'

Here, you access response.status_code to see the HTTP status code. You can also view the
response’s HTTP headers with [Link]. This dictionary contains metadata about the
response, such as the Content-Type of the response.

2. POST :

Now, take a look at how you use requests to POST data to a REST API to create a new resource.
You’ll use JSONPlaceholder again, but this time you’ll include JSON data in the request. Here’s the
data that you’ll send:
{
"userId": 1,
"title": "Buy milk",
"completed": false
}

This JSON contains information for a new todo item. Back in the Python REPL, run the following
code to create the new todo:

>>> import requests


>>> api_url = "[Link]
>>> todo = {"userId": 1, "title": "Buy milk", "completed": False}
>>> response = [Link](api_url, json=todo)
>>> [Link]()
{'userId': 1, 'title': 'Buy milk', 'completed': False, 'id': 201}

>>> response.status_code
201

Here, you call [Link]() to create a new todo in the system.

First, you create a dictionary containing the data for your todo. Then you pass this dictionary to the
json keyword argument of [Link](). When you do this, [Link]() automatically sets the
request’s HTTP header Content-Type to application/json. It also serializes todo into a JSON string,
which it appends to the body of the request.

If you don’t use the json keyword argument to supply the JSON data, then you need to set Content-
Type accordingly and serialize the JSON manually. Here’s an equivalent version to the previous
code:

>>> import requests


>>> import json
>>> api_url = "[Link]
>>> todo = {"userId": 1, "title": "Buy milk", "completed": False}
>>> headers = {"Content-Type":"application/json"}
>>> response = [Link](api_url, data=[Link](todo), headers=headers)
>>> [Link]()
{'userId': 1, 'title': 'Buy milk', 'completed': False, 'id': 201}

>>> response.status_code
201
In this code, you add a headers dictionary that contains a single header Content-Type set to
application/json. This tells the REST API that you’re sending JSON data with the request.

You then call [Link](), but instead of passing todo to the json argument, you first call
[Link](todo) to serialize it. After it’s serialized, you pass it to the data keyword argument. The
data argument tells requests what data to include in the request. You also pass the headers
dictionary to [Link]() to set the HTTP headers manually.

When you call [Link]() like this, it has the same effect as the previous code but gives you
more control over the request.

Note: [Link]() comes from the json package in the standard library. This package provides
useful methods for working with JSON in Python.

Once the API responds, you call [Link]() to view the JSON. The JSON includes a generated
id for the new todo. The 201 status code tells you that a new resource was created.

3. DELETE :

Last but not least, if you want to completely remove a resource, then you use DELETE. Here’s the
code to remove a todo:

>>> import requests


>>> api_url = "[Link]
>>> response = [Link](api_url)
>>> [Link]()
{}

>>> response.status_code
200

You call [Link]() with an API URL that contains the ID for the todo you would like to
remove. This sends a DELETE request to the REST API, which then removes the matching
resource. After deleting the resource, the API sends back an empty JSON object indicating that the
resource has been deleted.

The requests library is an awesome tool for working with REST APIs and an indispensable part of
your Python tool belt. In the next section, you’ll change gears and consider what it takes to build a
REST API.

4. PATCH :

Next up, you’ll use [Link]() to modify the value of a specific field on an existing todo.
PATCH differs from PUT in that it doesn’t completely replace the existing resource. It only
modifies the values set in the JSON sent with the request.
You’ll use the same todo from the last example to try out [Link](). Here are the current
values:

{'userId': 1, 'title': 'Wash car', 'completed': True, 'id': 10}


Now you can update the title with a new value:

>>> import requests


>>> api_url = "[Link]
>>> todo = {"title": "Mow lawn"}
>>> response = [Link](api_url, json=todo)
>>> [Link]()
{'userId': 1, 'id': 10, 'title': 'Mow lawn', 'completed': True}

>>> response.status_code
200

When you call [Link](), you can see that title was updated to Mow lawn.
OUTPUT :
PRELAB VIVA QUESTIONS :

1. What is the significance of understanding basic REST API operations for web development?
2. How do HTTP methods (GET, POST, PUT, DELETE) map to CRUD (Create, Read, Update,
Delete) operations in the context of REST APIs?
3. What are the key considerations when designing a simple REST API using Python?
4. How can you ensure proper error handling and status code responses in a REST API?
5. In what scenarios would you choose one HTTP method over another in a RESTful service
design?

POSTLAB VIVA QUESTIONS :

1. How does the chosen framework (Flask or Django) influence the simplicity and flexibility of the
REST API design?
2. Explain the role of HTTP status codes in conveying the success or failure of REST API
operations.
3. What measures can be taken to ensure the security of a simple REST API, considering common
vulnerabilities?
4. How would you handle authentication and authorization in the context of your RESTful service?
5. In what ways can the REST API design be improved to enhance scalability and maintainability?

MARKS ALLOCATION

Marks Marks
Details Allotted Awarded
Pre Lab Questions 10
Algorithm & Logic
Explanation 30
Coding 30
Execution & Output 20
Post Lab Questions (Viva) 10

Total 100

RESULT :

After running the program, you can use tools like curl or Postman to interact with the API, receiving
appropriate responses for each operation.
[Link] Install Burp Suite to do following vulnerabilities:
Date: a. SQL injection

b. cross-site scripting (XSS)

AIM :

Install and use Burp Suite to detect SQL injection and XSS vulnerabilities.

PROCEDURE :

Step 1 : Installation :
 Download and install Burp Suite from the official website ([Link]
 Configure your browser to use Burp as a proxy.
 Launch Burp Suite and intercept traffic.
 Navigate through the application to identify and analyze requests.

Step 2 : Intercept Requests:


 Intercept HTTP requests using Burp Proxy.
 Analyze requests for potential SQL injection and XSS vulnerabilities.

Step 3 :Use Scanner:


 Utilize Burp Scanner to automatically identify vulnerabilities.
 Focus on SQL injection and XSS scanning.

PROGRAM :

INSTALLATION OF BURP SUITE :


Step 1: Download the installer

 Download the installer for Burp Suite Enterprise Edition. The link below opens the
download page for the latest stable release in a new tab.
 Run the installer and click Next to display the Select Destination Directory page.

Step 2: Extract and run the installer

This step is specific to your operating system.

Windows:
 Extract the installer burpsuite_enterprise_windows-x64_vYYYY_MM.exe from the installer
zip file.
 Right-click the installer file and select Run as administrator.
Linux:
Extract the installer burpsuite_enterprise_linux_vYYYY_MM.sh from the installer zip file.
From the command line, run sudo sh <installer-sh-file> -c.

Step 3: Choose an install location

The destination directory is the directory in which the Enterprise server itself will be installed.

Enter or select a directory and then click Next to display the Installation options screen.

Step 4: Select the components to install


The Installation options screen enables you to choose which components of Burp Suite Enterprise
Edition you want to install on your machine.

Your choice depends on the scanning configuration you want to run:

 If you want to run the Enterprise server, web server, and scanning machines all on the same
computer, make sure that both the Running the Enterprise server and web server and
Running scans boxes are selected.
 If you want to run scans on a separate machine to the Enterprise server and web server,
uncheck the Running scans box.

Click Next to display the Logs Directory screen.

Step 5: Specify a logs directory

The logs directory is the folder that Burp Suite Enterprise Edition saves all generated logs to.

Enter or select a directory and then click Next to display the Data Directory page.
Step 6: Specify a data directory

The data directory is the folder that Burp Suite Enterprise Edition saves application data to.

Enter or select a directory and then click Next.

Step 7: Select a user to run processes

Enter the Username of the system user (that is, the user on your machine as opposed to a Burp
Suite Enterprise Edition user) that you want to run Burp Suite Enterprise Edition processes under.
If this user does not already exist on your system then the installer creates a user at the end of the
process with the default name burpsuite.

Click Next to display the Database screen.

Step 8: Select database options


Select whether you want to use the Embedded database or your own external database. Only use
the embedded database to evaluate Burp Suite Enterprise Edition. It is not intended for production
use.

Click Next to display the Web Server Port screen.

Step 9: Specify a web server port

The web server port is the port through which you can access the Burp Suite Enterprise Edition
application in your browser. By default, this is set to port 8080 if you are using the embedded
database or 8443 if you are using an external database. However, you can specify a different port
number if this port is not available on your machine.

Any port you specify must meet the following requirements:

 The port must be available for use on the machine that you want to install the
Enterprise and web servers on.
 The operating system user must be allowed to bind to that port. On Linux and
MacOS, low-privileged users are unable to bind to low port numbers (such as 80 or
433). If you want to use a low port number, you should configure port redirection at
the OS level.

Click Next.
Step 10: Specify a database backups directory

The database backups directory is the folder that Burp Suite Enterprise Edition backs up the
embedded database to.

Enter or select a directory and then click Next.

After installation :

Now that you have installed Burp Suite Enterprise Edition, you need to complete the final part of
the configuration in the app itself. Access the app in your browser. By default, this should be
[Link] or [Link]

Testing for SQL injection vulnerabilities with Burp Suite

SQL injection vulnerabilities occur when an attacker can interfere with the queries that an
application makes to its database. You can use Burp to test for these vulnerabilities:
Professional Use Burp Scanner to automatically flag potential SQL injection vulnerabilities.

 Use Burp Intruder to insert a list of SQL fuzz strings into a request. This may enable you to
change the way SQL commands are executed.

Steps

You can follow this process using a lab with a SQL injection vulnerability. For example, SQL
injection vulnerability in WHERE clause allowing retrieval of hidden data.

Scanning for SQL injection vulnerabilities

If you're using Burp Suite Professional, you can use Burp Scanner to test for SQL injection
vulnerabilities:
 Identify a request that you want to investigate.
 In Proxy > HTTP history, right-click the request and select Do active scan. Burp Scanner
audits the application.
 Review the Issues list on the Dashboard to identify any SQL injection issues that Burp
Scanner flags.

Manually fuzzing for SQL injection vulnerabilities

You can alternatively use Burp Intruder to test for SQL injection vulnerabilities. This process also
enables you to closely investigate any issues that Burp Scanner has identified:

1. Identify a request that you want to investigate.


2. In the request, highlight the parameter that you want to test and select Send to Intruder.
3. Go to the Intruder > Positions tab. Notice that the parameter has been automatically marked as
a payload position.
4. Go to the Payloads tab. Under Payload settings [Simple list] add a list of SQL fuzz strings.
 If you're using Burp Suite Professional, open the Add from list drop-down menu and
select the built-in Fuzzing - SQL wordlist.
 If you're using Burp Suite Community Edition, manually add a list.
5. Under Payload processing, click Add. Configure payload processing rules to replace any list
placeholders with an appropriate value. You need to do this if you're using the built-in wordlist:
 To replace the {base} placeholder, select Replace placeholder with base value.
 To replace other placeholders, select Match/Replace, then specify the placeholder and
replacement. For example, replace {domain} with the domain name of the site you're
testing.
6. Click Start attack. The attack starts running in a new dialog. Intruder sends a request for each
SQL fuzz string on the list.
7. When the attack is finished, study the responses to look for any noteworthy behavior. For
example, look for:
 Responses that include additional data as a result of the query.
 Responses that include other differences due to the query, such as a "welcome back"
message or error message.
 Responses that had a time delay due to the query.
If you're using the lab, look for responses with a longer length. These may include additional
products.
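The Intruder workflow above can be approximated with a short script that replays one request per fuzz string and records status, length, and timing. This is only a sketch: the target URL template, parameter, and payload list are hypothetical placeholders, not part of the lab.

```python
# Hypothetical target URL template and parameter; replace with your lab URL.
import time
import urllib.parse
import urllib.request

TARGET = "https://example.com/filter?category={payload}"
FUZZ_STRINGS = ["'", "''", "' OR 1=1 --", "'; SELECT SLEEP(5) --"]

def fuzz(target_template, payloads):
    """Send one request per payload; record status, length, and timing."""
    results = []
    for p in payloads:
        url = target_template.format(payload=urllib.parse.quote(p))
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                status, length = resp.status, len(resp.read())
        except Exception as exc:
            status, length = getattr(exc, "code", None), 0
        results.append((p, status, length, round(time.monotonic() - start, 2)))
    return results

# Outliers in length or time are the "noteworthy behavior" step 7 describes:
# for row in sorted(fuzz(TARGET, FUZZ_STRINGS), key=lambda r: r[2], reverse=True):
#     print(row)
```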

Using Burp to Manually Test for Reflected XSS :

First, ensure that Burp is correctly configured with your browser. With intercept turned off in the
Proxy "Intercept" tab, visit the web application you are testing in your browser.

Visit the page of the website you wish to test for XSS vulnerabilities
Return to Burp. In the Proxy "Intercept" tab, ensure "Intercept is on".

Enter some appropriate input in to the web application and submit the request.
The request will be captured by Burp. You can view the HTTP request in the Proxy "Intercept"
tab. You can also locate the relevant request in various Burp tabs without having to use the
intercept function; for example, requests are logged and detailed in the "HTTP history" tab within
the "Proxy" tab. Right-click anywhere on the request to bring up the context menu and select
"Send to Repeater".

Go to the "Repeater" tab. Here we can input various XSS payloads into the input field. You can
test various inputs by editing the "Value" of the appropriate parameter in the "Raw" or "Params"
tabs. A simple payload such as <s> can often be used to check for XSS. In this example we have
used a payload that attempts to perform a proof-of-concept pop-up in our browser. Click "Go".

We can assess whether the attack payload appears unmodified in the response. If so, the application
is almost certainly vulnerable to XSS. You can find the response quickly using the search bar at the
bottom of the response panel; the highlighted text is the result of our search.
Right-click on the response to bring up the context menu and select "Show response in browser" to
copy the URL. You can also use "Copy URL" or "Request in browser".

In the pop up window, click "Copy".


Copy the URL into your browser's address bar.
In this example we were able to produce a proof of concept for the vulnerability.
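The manual Repeater check above boils down to one question: does the raw payload come back unencoded? A rough scripted version of that check, with a hypothetical target URL and parameter name:

```python
import urllib.parse
import urllib.request

MARKER = "<s>xss_probe</s>"   # unique, easy-to-search marker payload

def reflected_unencoded(body, marker=MARKER):
    # If the raw marker appears unmodified, the application did not
    # HTML-encode our input: a strong sign of reflected XSS.
    return marker in body

def probe(base_url, param, marker=MARKER):
    url = f"{base_url}?{urllib.parse.urlencode({param: marker})}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return reflected_unencoded(resp.read().decode(errors="replace"), marker)

# Example against a hypothetical lab target:
# print(probe("https://vulnerable-lab.example/search", "q"))
```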
PRELAB VIVA QUESTIONS :

1. Why is it important to actively identify and address SQL injection and XSS vulnerabilities in
web applications?
2. What are the key features of Burp Suite that make it a popular tool for web application security
testing?
3. How does SQL injection differ from XSS in terms of attack vectors and potential impact on web
applications?
4. What are the common indicators of SQL injection and XSS vulnerabilities in web application
code?
5. How can web developers prevent SQL injection and XSS vulnerabilities during the software
development life cycle?

POSTLAB VIVA QUESTIONS :

1. How can the information obtained from Burp Suite be used to exploit SQL injection
vulnerabilities?
2. Explain the importance of false positive and false negative results when using Burp Scanner.
3. What steps can be taken to remediate SQL injection vulnerabilities identified by Burp Suite?
4. Describe the impact of a successful XSS attack on a web application and its users.
5. How can Burp Suite be integrated into the development process to ensure ongoing security
testing of web applications?

MARKS ALLOCATION

Details                          Marks Allotted    Marks Awarded
Pre Lab Questions                10
Algorithm & Logic Explanation    30
Coding                           30
Execution & Output               20
Post Lab Questions (Viva)        10

Total                            100

RESULT :

Burp Suite provides detailed reports highlighting potential SQL injection and XSS vulnerabilities in
the tested web application.
Ex. No: 5 Attack the website using Social Engineering method
Date:

AIM :

To Attack the website using Social Engineering method.

PROCEDURE :

Step 1: Understanding Domain and Email Conventions


Step 2: Generating Email Addresses
Step 3: Time to Go Phishing with GoPhish
 Linking GoPhish with an SMTP Server
 Spoofing the Sender
 Send a Test Email
 Upload the victims' email list
 Perfecting the Spoofed Brand
Step 4: Creating the Phishing Site
Step 5: Timing the Campaign

PROGRAM :

Step 1: Understanding Domain and Email Conventions


Using tools such as [Link] and [Link], you can determine the domain and email
conventions of the organization you are targeting. For example, we entered
[Link] and quickly determined that the email convention at the
company is {firstname}.{lastname}@[Link].
Step 2: Generating Email Addresses

Now knowing how email addresses are structured, we can use Github Crosslinked. The program
will look up every person associated with the organization via LinkedIn and then generate an entire
list of email addresses to send a phishing email to.
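What CrossLinked automates can be sketched in a few lines: map scraped names onto the discovered convention. The employee names and domain below are made-up examples, not the real target.

```python
def build_emails(names, domain, pattern="{first}.{last}"):
    """Map full names onto the {first}.{last}@domain convention."""
    emails = []
    for full_name in names:
        parts = full_name.lower().split()
        first, last = parts[0], parts[-1]
        emails.append(pattern.format(first=first, last=last) + "@" + domain)
    return emails

# Made-up employees and domain, standing in for the scraped LinkedIn data:
targets = build_emails(["Alice Smith", "Bob Jones"], "example.com")
# -> ['alice.smith@example.com', 'bob.jones@example.com']
```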

Step 3: Time to Go Phishing with GoPhish

Armed with the list of targets, now we can go phishing. We can use GoPhish, which is essentially a
one-stop-shop for conducting a phishing campaign.
1: Linking GoPhish with an SMTP Server

SendinBlue is an email marketing platform for sending and automating email marketing campaigns.
Unlike other email marketing platforms, which require you to authenticate your organization’s
domain, anyone can use SendinBlue with zero authentication requirements. Within SendinBlue, we
generated the SMTP server name and port.
2: Spoofing the Sender :

Now GoPhish has the ability to send emails using SendInBlue’s SMTP server. Here is where we
configure who the email is “supposed” to be coming from. In this instance we are sending it
“from” Igal. If you want to make it look like it’s coming from the CEO of the company, all you
need is their email address, which you put in the “From” field.
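The reason this works is that the "From" field GoPhish exposes is just an RFC 5322 header, which a sender can set to any display name and address; only receiver-side checks such as SPF, DKIM, and DMARC push back. A small illustration with Python's standard email library (all addresses are hypothetical):

```python
from email.message import EmailMessage

# The "From" value is just a message header: any display name and address
# can be written into it by the sending client. All addresses below are
# hypothetical placeholders, not real accounts.
msg = EmailMessage()
msg["From"] = "Igal Iytzki <igal@target-company.example>"
msg["To"] = "employee@target-company.example"
msg["Subject"] = "Password reset required"
msg.set_content("Please reset your password using the link below.")

# Filling in GoPhish's "From" field sets this same header on every
# campaign email it sends through the SMTP relay.
```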

3: Send a Test Email

You can even send a test email within GoPhish to check your configurations.

And it works! We wanted to make it look like we were sending an email from Igal Iytzki; the only
notable difference is that it says it’s via [Link].

4: Upload the victims' email list

Now we can upload the entire list of email addresses that GitHub CrossLinked generated.
5: Perfecting the Spoofed Brand

Let’s say we want to harvest credentials of the targets’ Facebook accounts. So we want to send
them an email that looks like it’s coming from Facebook. By importing a legitimate email from
Facebook requesting its users to reset their password, we can create a spoofing email that looks
almost indistinguishable from the real thing.

Notice the “Change Links to Landing Page” option: GoPhish will automatically change all the links
within the email to point to the “fake” reset password page (otherwise known as a landing page).

After we imported the Facebook page, this is how it looks in the email template editor.
Step 4: Creating the Phishing Site

Now we need to create the actual spoofed Facebook reset password website page. There are a few
ways to do this. More advanced attackers will buy a domain that is almost the same as the
legitimate site, e.g., [Link] as opposed to [Link]. Another way is to use a tool
called ZPhisher. Just select Option 1, and it will generate the spoofed reset password page and
allow you to choose where you want to host it, either Ngrok or Cloudflare.
In GoPhish, you can configure the spoofed reset password landing page to capture any data and
passwords that the target submits, and also to redirect the target to a legitimate page in the next
step of the reset password flow so they do not suspect that their data has been stolen.

Step 5: Timing the Campaign

You are now ready to send out your phishing campaign. You can schedule when exactly you want
the email to go out, as timing of the campaign is another crucial factor in its success. Should the
email be sent out when members of the organization are just getting into the office, or during their
lunch break, or perhaps right before they sign off for the day?
PRELAB VIVA QUESTIONS :

1. Why is understanding social engineering crucial for cybersecurity professionals?


2. What ethical considerations should be emphasized when simulating social engineering attacks in
a controlled environment?
3. How can employees be educated and trained to recognize and resist social engineering attempts?
4. What legal implications should be considered before conducting any social engineering
exercises?
5. In what ways can organizations enhance their resilience against social engineering attacks
through technological solutions?

POSTLAB VIVA QUESTIONS :

1. Reflect on the ethical dilemmas and challenges encountered during the social engineering
simulation. How might these insights inform future security strategies?
2. Evaluate the effectiveness of the educational measures implemented to raise awareness about
social engineering among employees.
3. Discuss any legal or policy changes that could be recommended based on the outcomes and
findings of the social engineering lab exercise.
4. How can organizations continuously adapt and improve their defenses against evolving social
engineering tactics and techniques?
5. Reflect on the broader implications of social engineering attacks in the context of user trust,
privacy, and data protection. What measures can be taken to rebuild trust after such incidents?

MARKS ALLOCATION

Details                          Marks Allotted    Marks Awarded
Pre Lab Questions                10
Algorithm & Logic Explanation    30
Coding                           30
Execution & Output               20
Post Lab Questions (Viva)        10

Total                            100

RESULT :

The social engineering lab exercise highlighted the importance of awareness, education, and ethical
considerations in addressing potential vulnerabilities within an organization's human factor,
contributing to a more resilient cybersecurity posture.
