Bachelor of Technology
in
Computer Science & Engineering
By
Signature of Supervisor
Dr. CARMEL MARY BELINDA.M.J
Professor
Computer Science & Engineering
School of Computing
Vel Tech Rangarajan Dr. Sagunthala R&D
Institute of Science and Technology
June, 2021
DECLARATION
We declare that this written submission represents our ideas in our own words, and
where others’ ideas or words have been included, we have adequately cited and
referenced the original sources. We also declare that we have adhered to all principles
of academic honesty and integrity and have not misrepresented, fabricated, or
falsified any idea/data/fact/source in our submission. We understand that any violation
of the above will be cause for disciplinary action by the Institute and can also evoke
penal action from the sources which have not been properly cited or from whom
proper permission has not been taken when needed.
(Signature)
(N SREERAM CHARAN)
Date: / /
(Signature)
(K MAHESHWAR REDDY)
Date: / /
(Signature)
(N SRAVAN SAI)
Date: / /
APPROVAL SHEET
Examiners Supervisor
Date: / /
Place:
ACKNOWLEDGEMENT
We express our deepest gratitude to our respected Founder Chancellor and President
Col. Prof. Dr. R. RANGARAJAN B.E. (EEE), B.E. (MECH), M.S. (AUTO), D.Sc.,
and to our Foundress President Dr. R. SAGUNTHALA RANGARAJAN M.B.B.S.,
Chairperson Managing Trustee and Vice President.
We are very grateful to our beloved Vice Chancellor Prof. S. SALIVAHANAN
for providing us with an environment in which to complete our project successfully.
We also take this opportunity to express a deep sense of gratitude to our internal
supervisor Dr. CARMEL MARY BELINDA M.J, M.Tech., Ph.D., for his/her cordial
support, valuable information, and guidance, which helped us complete this project
through its various stages.
We thank our department faculty, supporting staff, and friends for their help and
guidance in completing this project.
ABSTRACT
LIST OF FIGURES
6.1 Output 1 . . . . . . . . . . . . . . . . . . . . . . . . 29
6.2 Output 2 . . . . . . . . . . . . . . . . . . . . . . . . 30
LIST OF TABLES
LIST OF ACRONYMS AND
ABBREVIATIONS
TABLE OF CONTENTS
Page No.
ABSTRACT v
LIST OF FIGURES vi
1 INTRODUCTION 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Aim of the project . . . . . . . . . . . . . . . . . . . . 2
1.3 Project Domain . . . . . . . . . . . . . . . . . . . . . 2
1.4 Scope of the Project . . . . . . . . . . . . . . . . . . . 2
1.5 Methodology . . . . . . . . . . . . . . . . . . . . . . 3
2 LITERATURE REVIEW 5
3 PROJECT DESCRIPTION 8
3.1 Existing System . . . . . . . . . . . . . . . . . . . . . 8
3.2 Proposed System . . . . . . . . . . . . . . . . . . . . 8
3.3 Feasibility Study . . . . . . . . . . . . . . . . . . . . 9
3.3.1 Economic Feasibility . . . . . . . . . . . . . . 10
3.3.2 Technical Feasibility . . . . . . . . . . . . . . 10
3.3.3 Social Feasibility . . . . . . . . . . . . . . . . 10
3.4 System Specification . . . . . . . . . . . . . . . . . . 11
3.4.1 Hardware Specification . . . . . . . . . . . . . 11
3.4.2 Software Specification . . . . . . . . . . . . . 11
3.4.3 Standards and Policies . . . . . . . . . . . . . 11
4 MODULE DESCRIPTION 12
4.1 General Architecture . . . . . . . . . . . . . . . . . . 12
4.2 Design Phase . . . . . . . . . . . . . . . . . . . . . . 13
4.2.1 Data Flow Diagram . . . . . . . . . . . . . . . 13
4.2.2 UML Diagram . . . . . . . . . . . . . . . . . 14
4.2.3 Use Case Diagram . . . . . . . . . . . . . . . 15
4.2.4 Activity Diagram . . . . . . . . . . . . . . . . 16
4.2.5 Sequence Diagram . . . . . . . . . . . . . . . 17
4.3 Module Description . . . . . . . . . . . . . . . . . . . 17
8 PLAGIARISM REPORT 33
References 35
Chapter 1
INTRODUCTION
1.1 Introduction
discovering, and capturing the unknown similarities or patterns in a dataset by
using an ensemble combination of various analytical approaches. An educational
institute can exploit this capability of data mining to figure out a company’s
recruitment policy from the previous years’ placement statistics and student
datasets []. The placement department of the university could thus prepare a
predicted placement database for its students. It is therefore important to study
the various placement prediction systems. This paper presents a review of
different placement prediction system models and their applications for students.
It would be extremely beneficial if we could alter and update our curriculum and
other extracurricular activities each semester in accordance with the requirements
of the public, private, and government sectors. We can also forecast which company
will select which student group, list the skills that a specific organisation is
seeking, and then train our students based on that list. These characteristics
will improve the accuracy of the prediction procedure.
1.5 Methodology
Chapter 2
LITERATURE REVIEW
Chapter 3
PROJECT DESCRIPTION
ing procedure must be found in order to lower the capital cost of this procedure.
Effective student filtering can be accomplished by applying various deep learning
algorithms to the student details. In the realm of education, this system defined
deep learning as a means of recognising, detecting, and capturing unknown
similarities or patterns in a dataset using an ensemble of diverse analytical
methodologies. An educational college can use this data-mining capability to
figure out a company’s recruitment policy from previous years’ placement records
and student data. The institution’s placement cell can then generate a predicted
placement list for current students. A study of the various placement prediction
systems is therefore critical. The purpose of this work is to give an overview of
various placement prediction system models and their applications for students.
Advantages
The preliminary study examines the feasibility of the project and the likelihood
that the system will be valuable to the organisation. The feasibility study’s
major goal is to determine the technical, operational, and financial viability of
adding new modules and troubleshooting the existing system. Given infinite
resources and time, every system is viable.
3.3.1 Economic Feasibility
The following are some of the technical issues frequently addressed during the
feasibility stage of an investigation. The system created so far is technically
workable: it is a user interface accessible over the internet, so users have easy
access to it. The database’s goal is to construct, build, and maintain a workflow
among multiple entities in order to make every user’s job easier in their
respective capacity. Users are granted permissions based on the roles assigned to
them.
3.4 System Specification
Chapter 4
MODULE DESCRIPTION
Description
4.2.2 UML Diagram
Description
4.2.3 Use Case Diagram
Description
4.2.4 Activity Diagram
Description
4.2.5 Sequence Diagram
Description
Chapter 5
Reading data from a CSV file. In data analytics, reading data from CSV
(comma-separated values) files is a must. We frequently receive data from many
sources that can be exported to CSV format and used by other systems. The Pandas
library has functions that allow us to read a CSV file in its entirety, or in
sections for a specific set of columns and rows. A CSV file is a text file with
commas separating the values in the columns. Consider the following data, which
can be found in the file input.csv. You can produce this file by copying and
pasting the data into Windows Notepad and saving it as input.csv using Notepad’s
Save As, All Files (*.*) option.
import pandas as pd
data = pd.read_csv('path/input.csv')
print(data)
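As the text notes, read_csv can also load only part of a file. A minimal sketch of reading a specific set of columns and rows; the column names and CSV contents below are illustrative assumptions, not the project's actual input.csv:

```python
import io
import pandas as pd

# Illustrative CSV content; a StringIO stands in for a file on disk.
csv_text = "name,cgpa,backlogs,placed\nA,8.2,0,1\nB,6.1,2,0\nC,7.5,1,1\n"

# Read the whole file.
full = pd.read_csv(io.StringIO(csv_text))

# Read only two columns and the first two rows.
part = pd.read_csv(io.StringIO(csv_text), usecols=["name", "cgpa"], nrows=2)
print(part)
```

The `usecols` and `nrows` parameters are what let Pandas read a file "in sections", as described above, without loading columns or rows that are not needed.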
5.1.2 Output Design
NumPy stands for ’Numerical Python’. It is a Python library that contains
multidimensional array objects as well as a collection of routines for array
processing. Using NumPy, a developer can perform the following operations:
• Mathematical and logical operations on arrays.
• Fourier transforms and routines for shape manipulation.
• Operations related to linear algebra; NumPy also includes built-in functions
for linear algebra and random number generation.
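A minimal sketch touching each of the capabilities listed above; all values are illustrative:

```python
import numpy as np

a = np.arange(6)                 # array of 0..5
b = a.reshape(2, 3)              # shape manipulation
logical = (b % 2 == 0)           # logical operation on the array
total = b.sum()                  # mathematical operation (0+1+...+5 = 15)
product = b @ np.eye(3)          # linear algebra: matrix product with identity
spectrum = np.fft.fft(a)         # Fourier transform
rng = np.random.default_rng(0)   # random number generation
sample = rng.random(4)
print(b, total)
```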
5.2 Testing
Unit testing concentrates verification efforts on the smallest unit of the
software design, the module; it is also called module testing. We test each
module separately before incorporating it into the broader system, and this
testing takes place during the programming stage. During the testing phase, each
module was found to function satisfactorily in terms of the intended output.
There are also certain field-validation tests: for example, a validation check is
performed on altered user input to determine the legitimacy of the data entered.
Locating a mistake is much simpler when the system is tested at this early stage.
Input
Test result
Data can be lost across an interface, and one module can have an undesirable
effect on another’s sub-functions, so that the primary functions are not
produced. Integration testing is a method of systematically identifying and
correcting such faults. The testing was carried out using fictitious data, and
for this example data the designed system worked perfectly. The purpose of an
integration test is to determine the overall performance of the system.
Input
Navigate the admin and login pages, and check the connection between the admin
and user login pages.
Test result
Example test case: check whether the different URLs are connected to their view
functions. The view functions rendered the various pages when navigating to the
different URLs from the navigation bar, so the test case was successful.
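The report does not reproduce the application's routing code, so the sketch below uses a hypothetical URL-to-view mapping as a framework-agnostic stand-in for the kind of check this test case performs:

```python
# Hypothetical views and routing table standing in for the web app's real ones.
def admin_login_view() -> str:
    return "admin login page"

def user_login_view() -> str:
    return "user login page"

ROUTES = {
    "/admin/login": admin_login_view,
    "/login": user_login_view,
}

def render(url: str) -> str:
    """Resolve a URL through the routing table, as the test case would."""
    view = ROUTES.get(url)
    if view is None:
        return "404 not found"
    return view()

# Each URL in the navigation bar should render its own page.
print(render("/admin/login"))
print(render("/missing"))
```

The integration test then asserts that every navigation-bar URL resolves to the expected page and that unknown URLs fail gracefully.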
Input
Check every button on the website to see whether it works; take, for example, the
login button on the admin page.
Test Result
user perspective. Whitebox testing, on the other hand, is centred on internal
testing and is based on the inner workings of an application. The name
”whitebox” was coined from the idea of a see-through box: ”clear box” or
”whitebox” refers to the ability to see the software’s inner workings through its
outer shell (or ”box”). Similarly, the ”black box” in ”black box testing” denotes
the inability to observe the software’s inner workings, so that only the end-user
experience can be tested.
Black box testing is a software-testing approach that examines the functionality
of the system under test (SUT) without examining the underlying code structure,
implementation details, or internal routes of the product. This form of testing
is focused solely on the software’s specifications and requirements. In black box
testing we consider only the software system’s inputs and outputs and do not
concern ourselves with the program’s internal workings. Any software system you
want to test can be represented by such a black box: for example, a Windows
operating system, the Google website, an Oracle database, or even your own
programme.
5.3.6 Test Result
The system-testing approach incorporates system test cases and design techniques
into a well-planned series of steps that leads to successful software
construction. Test planning, test-case design, test execution, and the resulting
data collection and evaluation are all part of the testing strategy. A testing
strategy must include low-level tests that verify that a small source-code
segment has been implemented correctly, as well as high-level tests that validate
significant system functionality against user requirements. Testing is an
important part of software quality assurance, since it is the final assessment of
the specification and design.
Chapter 6
Placement is one of the most difficult challenges a student will encounter. To
produce a set of students fit for each company’s criteria, the placement cell and
teachers of an institute have so far followed manual methods.
In the realm of education, this system defined deep learning as a means of
recognising, detecting, and capturing unknown similarities or patterns in a
dataset using an ensemble of diverse analytical methodologies. An educational
college can use this data-mining capability to figure out a company’s recruitment
policy from previous years’ placement records and student data. The institution’s
placement cell can then generate a predicted placement list for current students.
A study of the various placement prediction systems is therefore critical.
Among the different algorithms available for data mining and prediction, the
neural network classifier outperformed the other classifiers by a large margin on
all metrics; thus, it was chosen for our application.
  `sno` int(11) NOT NULL,
  `qn` varchar(1000) NOT NULL,
  `A` varchar(500) NOT NULL,
  `B` varchar(500) NOT NULL,
  `C` varchar(500) NOT NULL,
  `D` varchar(500) NOT NULL,
  `E` varchar(500) DEFAULT NULL,
  `Ans` varchar(10) DEFAULT NULL,
  `Level` int(11) DEFAULT NULL
);

CREATE TABLE `student` (
  `name` varchar(100) DEFAULT NULL,
  `stdid` varchar(100) DEFAULT NULL,
  `pwd` varchar(100) DEFAULT NULL,
  `email` varchar(100) DEFAULT NULL,
  `mno` varchar(100) DEFAULT NULL
);

CREATE TABLE `test` (
  `testcode` varchar(10) DEFAULT NULL,
  `score` varchar(100) DEFAULT NULL
);
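The DDL above is MySQL; as a self-contained illustration of the `student` table in use, the sketch below recreates it in SQLite (an assumed substitute chosen only for portability, since the project itself targets MySQL) and runs one insert and one query:

```python
import sqlite3

# In-memory database standing in for the project's MySQL instance.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE student (
        name  varchar(100) DEFAULT NULL,
        stdid varchar(100) DEFAULT NULL,
        pwd   varchar(100) DEFAULT NULL,
        email varchar(100) DEFAULT NULL,
        mno   varchar(100) DEFAULT NULL
    )
""")
# Illustrative row; the student id format is an assumption.
conn.execute(
    "INSERT INTO student (name, stdid, email) VALUES (?, ?, ?)",
    ("A Student", "VTU001", "a.student@example.com"),
)
row = conn.execute(
    "SELECT name, email FROM student WHERE stdid = ?", ("VTU001",)
).fetchone()
print(row)
conn.close()
```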
Figure 6.1: Output 1
Figure 6.2: Output 2
Chapter 7
7.1 Conclusion
We conclude that our application can provide several tests to improve students’
academic performance. For student placement prediction we use machine learning
techniques in the back end. Among the different algorithms available for data
mining and prediction, the neural network classifier outperformed the other
classifiers by a large margin on all metrics; thus, it was chosen for our
application. Of the machine learning algorithms, the multilayer perceptron, a
neural network (NN) deep learning algorithm, is used to classify the
placement-chance prediction. The prediction system requires a training dataset,
with the currently participating students as the testing dataset. The
experimental results show that our proposed system provides accurate results.
This application can be adopted in several colleges, as it is cheap and will help
institutions improve their placements.
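The training and prediction flow described above can be sketched with scikit-learn's MLPClassifier; the feature columns (CGPA, number of backlogs) and all values below are illustrative assumptions, not the project's actual dataset:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative training data: [CGPA, backlogs]; label 1 = placed, 0 = not placed.
trainset = np.array([[8.5, 0], [9.1, 0], [6.2, 3], [5.8, 4], [7.9, 1], [6.5, 2]])
y_train = np.array([1, 1, 0, 0, 1, 0])

# Multilayer perceptron classifier, as used by the system.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(trainset, y_train)

# Current students form the testing dataset.
testdata = np.array([[8.8, 0], [5.5, 5]])
result = clf.predict(testdata)
print(result)
```

In the real system the training dataset would come from previous years' placement records and the test rows from currently participating students.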
7.2 Future Enhancements
It would be extremely beneficial if we could alter and update our curriculum and
other extracurricular activities each semester in accordance with the
requirements of the public, private, and government sectors. We can also forecast
which company will select which student group, list the skills that a specific
organisation is seeking, and then train our students based on that list. These
characteristics will improve the accuracy of the prediction procedure.
Chapter 8
PLAGIARISM REPORT
Chapter 9
", testdata)
testdata = testdata.reshape(len(testdata), -1)
print("Pred = ", testdata)
# ANN
clf = MLPClassifier()
clf.fit(trainset, y_train)
result = clf.predict(testdata)
print(result)
except Exception as e:
    print("Error=" + e.args[0])
    tb = sys.exc_info()[2]
    print(tb.tb_lineno)
References
[1] Shukla, M., Malviya, A. K, “Modified classification and prediction model for
improving accuracy of student placement prediction”(2019), In Proceedings of
2nd International Conference on Advanced Computing and Software Engineer-
ing (ICACSE).
[4] Rajak, A., Shrivastava, A. K., Vidushi, “Applying and comparing machine learn-
ing classification algorithms for predicting the results of students”(2020), Journal
of Discrete Mathematical Sciences and Cryptography, vol 23(2), pp 419-427.
[5] Liao, S. N., Zingaro, D., Thai, K., Alvarado, C., Griswold, W. G., Porter, L, “A
robust machine learning technique to predict low-performing students”(2019),
ACM Transactions on Computing Education (TOCE), vol 19(3), pp 1-19.
[6] Astha Soni, Vivek Kumar, Rajwant Kaur, D. Hemavathi, “Predicting Student
Performance using Data Mining Techniques”(2018), International Journal of
Pure and Applied Mathematics, vol 119, No. 12.
[7] Ullmann, T. D, “Automated analysis of reflection in writing: Validating machine
learning approaches”(2019), International Journal of Artificial Intelligence in
Education, vol 29(2), pp 217-257.
[8] Ishizue, R., Sakamoto, K., Washizaki, H., Fukazawa, Y, “Student placement
and skill ranking predictors for programming classes using class attitude, psy-
chological scales, and code metrics”(2018), Research and practice in technology
enhanced learning, vol 13(1), pp 1-20.
[9] Kovanović, V., Joksimović, S., Mirriahi, N., Blaine, E., Gašević, D., Siemens,
G., Dawson, S, “Understand students’ self-reflections through learning analyt-
ics”(2018), In Proceedings of the 8th international conference on learning ana-
lytics and knowledge, pp. 389-398..
[10] Liu, M., Shum, S. B., Mantzourani, E., Lucas, C, “Evaluating machine learn-
ing approaches to classify pharmacy students’ reflective statements”(2019), In
International Conference on Artificial Intelligence in Education, pp. 220-230.
Springer.
[11] Abed, T., Ajoodha, R., Jadhav, A, “A prediction model to improve student
placement at a South African higher education institution”, In 2020 International
SAUPEC/RobMech/PRASA Conference, pp. 1-6. IEEE.