
Abstract

Students’ informal conversations on social media (e.g., Twitter, Facebook)
shed light on their educational experiences: opinions, feelings, and concerns about
the learning process. Data from such un-instrumented environments can provide
valuable knowledge to inform student learning. Analyzing such data, however, can
be challenging. The complexity of students’ experiences reflected from social
media content requires human interpretation. However, the growing scale of data
demands automatic data analysis techniques. In this paper, we developed a
workflow to integrate both qualitative analysis and large-scale data mining
techniques. We focused on engineering students’ Twitter posts to understand issues
and problems in their educational experiences. We first conducted a qualitative
analysis on samples taken from about 25,000 tweets related to engineering
students’ college life. We found engineering students encounter problems such as
heavy study load, lack of social engagement, and sleep deprivation. Based on these
results, we implemented a multi-label classification algorithm to classify tweets
reflecting students’ problems. We then used the algorithm to train a detector of
student problems from about 35,000 tweets streamed at the geo-location of Purdue
University. This work, for the first time, presents a methodology and results that
show how informal social media data can provide insights into students’
experiences.
TABLE OF CONTENTS
CHAPTER NO. TITLE

ABSTRACT (ENGLISH)
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 INTRODUCTION

1.1 Public Discourse on the Web

1.2 Mining Twitter Data

1.3 Learning Analytics and Educational Data Mining

1.4 Inductive Content Analysis

2 LITERATURE SURVEY

2.1 Penetrating the Fog: Analytics in Learning and Education

2.2 Representation and Communication

2.3 Academic Pathways Study

2.4 Enabling Engineering Student Success

2.5 The State Of Learning Analytics In 2012

2.6 Microblogging in Classroom

3 SYSTEM ORGANIZATION

3.1 System Architecture Design

3.2 Feature Extraction


4 SYSTEM ANALYSIS

4.1. Existing System

4.2. Proposed System

4.3. Modules

4.3.1. Computational Attribute Management

4.3.2. Qualitative Attribute Analysis

4.3.3. Public Web Conversation Port

4.3.4. Mining Student Conversation

4.3.5. Text Preprocessing

5 REQUIREMENT SPECIFICATION

5.1 System Requirements

5.1.1 Hardware Required

5.1.2 Software Required

5.2 MICROSOFT.NET FRAMEWORK

6 IMPLEMENTATION RESULT

6.1 Coding

6.2 Screen Shots

7 CONCLUSION

8 REFERENCES
CHAPTER 1
INTRODUCTION

This work learns about student experiences from social media such as Twitter
using a workflow that integrates both qualitative analysis and large-scale data
mining techniques. We explore engineering students’ informal conversations on
Twitter in order to understand the issues and problems students encounter in their
learning experiences. Students tweet about problems in their educational
experiences such as heavy study load, lack of social engagement, negative
emotion, and sleep problems. Learning analytics and educational data mining have
so far focused on analyzing structured data obtained from course management
systems (CMS), classroom technology usage, or controlled online learning
environments to inform educational decision-making. Twitter, in contrast, is a
popular social media site whose content is mostly public and very concise (no
more than 140 characters per tweet), and it provides free APIs that can be used to
stream data. Therefore, we chose to start by analyzing students’ posts on Twitter.
A Naive Bayes classification algorithm is used for multi-label classification of
these posts; we found the Naive Bayes classifier to be very effective on our dataset
compared with other state-of-the-art multi-label classifiers.
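The multi-label Naive Bayes approach described above can be sketched with a
binary-relevance scheme: one binary Naive Bayes model per problem category.
The category names, the tiny training tweets, and all function names below are
illustrative assumptions for the sketch, not the project’s actual data or code:

```python
import math
from collections import defaultdict

LABELS = ["heavy_study_load", "lack_of_social_engagement", "sleep_problems"]

def tokenize(text):
    return text.lower().split()

def train_binary_nb(docs, labels, target):
    # Count word frequencies for tweets with / without the target label.
    pos, neg = defaultdict(int), defaultdict(int)
    n_pos = n_neg = 0
    vocab = set()
    for text, tags in zip(docs, labels):
        if target in tags:
            counts, n_pos = pos, n_pos + 1
        else:
            counts, n_neg = neg, n_neg + 1
        for w in tokenize(text):
            counts[w] += 1
            vocab.add(w)
    return pos, neg, n_pos, n_neg, vocab

def predict_binary(model, text):
    pos, neg, n_pos, n_neg, vocab = model
    # Log-priors plus Laplace-smoothed log-likelihoods for each class.
    lp = math.log((n_pos + 1) / (n_pos + n_neg + 2))
    ln = math.log((n_neg + 1) / (n_pos + n_neg + 2))
    tp, tn, v = sum(pos.values()), sum(neg.values()), len(vocab)
    for w in tokenize(text):
        lp += math.log((pos[w] + 1) / (tp + v))
        ln += math.log((neg[w] + 1) / (tn + v))
    return lp > ln

def predict_labels(models, text):
    # Binary relevance: one independent classifier per label.
    return {lab for lab, m in models.items() if predict_binary(m, text)}

docs = [
    "so many assignments due tomorrow exams all week",
    "no time to hang out with friends again",
    "pulling another all nighter cannot sleep",
    "assignments exams homework piling up",
    "wish i could sleep but homework",
]
tags = [
    {"heavy_study_load"},
    {"lack_of_social_engagement"},
    {"sleep_problems"},
    {"heavy_study_load"},
    {"sleep_problems", "heavy_study_load"},
]
models = {lab: train_binary_nb(docs, tags, lab) for lab in LABELS}
print(predict_labels(models, "assignments and exams due all week"))
```

Because each label gets its own classifier, a tweet can receive several labels at
once, which matches the multi-label setting described in the abstract.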

Social media sites such as Twitter, Facebook, and YouTube provide great
venues for students to share joy and struggle, vent emotion and stress, and seek
social support. On various social media sites, students discuss and share their
everyday encounters in an informal and casual manner. Students’ digital footprints
provide a vast amount of implicit knowledge and a whole new perspective for
educational researchers and practitioners to understand students’ experiences
outside the controlled classroom environment. This understanding can inform
institutional decision-making on interventions for at-risk students, improvement of
education quality, and thus enhance student recruitment, retention, and success.
The abundance of social media data provides opportunities to understand students’
experiences, but also raises methodological difficulties in making sense of social
media data for educational purposes. Consider the sheer data volumes, the
diversity of Internet slang, the unpredictability of the locations and timing of
students’ posts on the web, and the complexity of students’ experiences. Purely
manual analysis cannot deal with the ever-growing scale of data, while purely
automatic algorithms usually cannot capture in-depth meaning within the data.
Traditionally, educational researchers have used methods such as
surveys, interviews, focus groups, and classroom activities to collect data related to
students’ learning experiences. These methods are usually very time-consuming and
thus cannot be duplicated or repeated with high frequency. The scale of such
studies is also usually limited. In addition, when prompted about their experiences,
students need to reflect on what they were thinking and doing sometime in the past,
which may have become obscured over time. The emerging field of learning
analytics and educational data mining has focused on analyzing structured data
obtained from course management systems (CMS), classroom technology usage, or
controlled online learning environments to inform educational decision-making.
However, to the best of our knowledge, there is no research found to directly mine
and analyze student posted content from uncontrolled spaces on the social web
with the clear goal of understanding students’ learning experiences.

Fig.1. The workflow we developed for making sense of social media data
integrates qualitative analysis and data mining algorithms. The width of the gray
arrows represents data volume; wider arrows indicate more data. Black arrows
represent data analysis, computation, and results flow. The dashed arrows represent
the parts that do not concern the central work of this paper. This workflow can be
an iterative cycle.
The research goals of this study are 1) to demonstrate a workflow of social
media data sense-making for educational purposes, integrating both qualitative
analysis and large-scale data mining techniques as illustrated in Fig. 1; and 2) to
explore engineering students’ informal conversations on Twitter, in order to
understand issues and problems students encounter in their learning experiences.
We chose to focus on engineering students’ posts on Twitter about problems in
their educational experiences mainly because:

1. Engineering schools and departments have long been struggling with


student recruitment and retention issues [8]. Engineering graduates constitute a
significant part of the nation’s future workforce and have a direct impact on the
nation’s economic growth and global competitiveness.

2. Based on understanding of issues and problems in students’ life,


policymakers and educators can make more informed decisions on proper
interventions and services that can help students overcome barriers in learning.

3. Twitter is a popular social media site. Its content is mostly public and very
concise (no more than 140 characters per tweet). Twitter provides free APIs that
can be used to stream data. Therefore, we chose to start from analyzing students’
posts on Twitter.

1.1 PUBLIC DISCOURSE ON THE WEB

The theoretical foundation for the value of informal data on the web can be
drawn from Goffman’s theory of social performance. Although developed to
explain face-to-face interactions, Goffman’s theory of social performance is widely
used to explain mediated interactions on the web today. One of the most
fundamental aspects of this theory is the notion of front-stage and back-stage in
people’s social performances. Compared with the front-stage, the relaxing
atmosphere of the back-stage usually encourages more spontaneous actions. Whether a
social setting is front-stage or back-stage is a relative matter. For students,
compared with formal classroom settings, social media is a relatively informal and
relaxing back-stage. When students post content on social media sites, they usually
post what they think and feel at that moment. In this sense, the data collected from
online conversation may be more authentic and unfiltered than responses to formal
research prompts. These conversations act as a zeitgeist for students’ experiences.

Many studies show that social media users may purposefully manage their
online identity to “look better” than in real life. Other studies show that there is a
lack of awareness about managing online identity among college students, and that
young people usually regard social media as their personal space to hang out with
peers outside the sight of parents and teachers. Students’ online conversations
reveal aspects of their experiences that are not easily seen in formal classroom
settings, thus are usually not documented in educational literature. The abundance
of social media data provides opportunities but also presents methodological
difficulties for analyzing large-scale informal textual data. The next section
reviews popular methods used for analyzing Twitter data.

1.2 MINING TWITTER DATA

Researchers from diverse fields have analyzed Twitter content to generate


specific knowledge for their respective subject domains. For example, Gaffney
analyzes tweets with the hashtag #iranElection using histograms, user networks, and
frequencies of top keywords to quantify online activism. Similar studies have been
conducted in other fields including healthcare, marketing, and athletics, just to
name a few. Analysis methods used in these studies usually include qualitative
content analysis, linguistic analysis, network analysis, and some simplistic
methods such as word clouds and histograms. In our study, we built a classification
model based on inductive content analysis. This model was then applied and
validated on a brand new dataset. Therefore, we emphasize not only the insights
gained from one dataset, but also the application of the classification algorithm to
other datasets for detecting student problems. The human effort is thus augmented
with large-scale data analysis. Below we briefly review studies on Twitter from the
fields of data mining, machine learning, and natural language processing. These
studies usually have more emphasis on statistical models and algorithms. They
cover a wide range of topics including information propagation and diffusion,
popularity prediction, event detection, topic discovery, and tweet classification, to
name a few.
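As a minimal illustration of the “simplistic methods such as word clouds and
histograms” mentioned above, keyword frequencies can be counted over a handful
of tweets; the tweets and the stopword list below are invented for the example:

```python
import re
from collections import Counter

def top_keywords(tweets, n=3,
                 stopwords=frozenset({"a", "an", "and", "the", "to", "of", "i"})):
    # Lowercase each tweet, keep word characters and hashtags, drop stopwords.
    words = []
    for t in tweets:
        words += [w for w in re.findall(r"#?\w+", t.lower()) if w not in stopwords]
    return Counter(words).most_common(n)

tweets = [
    "#engineeringProblems exams and more exams",
    "exams again, no sleep",
    "no sleep before exams",
]
print(top_keywords(tweets))
```

A bar chart of these counts is exactly the kind of histogram the surveyed studies
report, and the same counts feed word-cloud visualizations.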

1.3 LEARNING ANALYTICS AND EDUCATIONAL DATA MINING

Learning analytics and educational data mining (EDM) are data-driven


approaches emerging in education. These approaches analyze data generated in
educational settings to understand students and their learning environments in
order to inform institutional decision-making. The present paper extends the scope
of these approaches in the following two aspects. First, data analyzed using these
approaches typically are structured data including administrative data (e.g., high
school GPA and SAT scores), and student activity and performance data from
CMS (Course Management Systems) or VLE (Virtual Learning Environments)
such as Blackboard (http://www.blackboard.com/). For example, researchers at
Purdue University created a system named Signals that mines student performance
data from Blackboard course system such as time spent reading course materials,
time spent engaging in course discussion forums, and quiz grades. Signals gives
students red, yellow, or green alerts on their progress in the course in order to
promote self-awareness in learning. Our study extends the data scope of these data-
driven approaches to include informal social media data. Second, most studies in
learning analytics and EDM focus on students’ academic performance. We extend
the understanding of students’ experiences to the social and emotional aspects
based on their informal online conversations. These are important components of
the learning experiences that are much less emphasized and understood compared
with academic performance.
1.4. INDUCTIVE CONTENT ANALYSIS

Because social media content like tweets contains a large amount of informal
language, sarcasm, acronyms, and misspellings, meaning is often ambiguous and
subject to human interpretation. Rost et al. argue that in large-scale social media
data analysis, faulty assumptions are likely to arise if automatic algorithms are
used without taking a qualitative look at the data. We concur with this argument, as
we found no appropriate unsupervised algorithms could reveal in-depth meanings
in our data. For example, LDA (Latent Dirichlet Allocation) is a popular topic
modeling algorithm that can detect general topics from very large scale data. LDA
has only produced meaningless word groups from our data with a lot of
overlapping words across different topics. There were no pre-defined categories of
the data, so we needed to explore what students were saying in the tweets. Thus,
we first conducted an inductive content analysis on the #engineeringProblems
dataset. Inductive content analysis is one popular qualitative research method for
manually analyzing text content. Three researchers collaborated on the content
analysis process.
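Since three researchers collaborated on the content analysis, agreement between
coders matters. One common way to quantify pairwise agreement (assumed here
for illustration; the source does not state which measure was used) is Cohen’s
kappa, computed below over two hypothetical coders’ category assignments:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # Observed agreement corrected for the agreement expected by chance.
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical category labels assigned to six tweets by two coders.
a = ["study", "sleep", "study", "social", "study", "sleep"]
b = ["study", "sleep", "study", "study", "study", "sleep"]
print(round(cohens_kappa(a, b), 3))
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement
no better than chance, signaling that the coding scheme needs another iteration.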
CHAPTER 2
LITERATURE SURVEY

A literature survey is an important step in the software development
process. Before developing the tool it is necessary to determine the time factor,
economy, and company strength. Once these things are satisfied, the next step
is to determine which operating system and language can be used for developing
the tool. Once the programmers start building the tool, they need a lot of
external support, which can be obtained from senior programmers, from
books, or from websites. These considerations are taken into account before
building the proposed system.

The project development sector considers and fully surveys all the
needs required for developing the project. Before developing the tools and the
associated designs it is necessary to determine and survey the time factor,
resource requirements, manpower, economy, and company strength. Once these
things are satisfied and fully surveyed, the next step is to determine the software
specifications of the system, such as what type of operating system the project
requires and what software is needed to proceed with developing the tools and the
associated operations.

2.1 PENETRATING THE FOG: ANALYTICS IN LEARNING AND


EDUCATION - G. SIEMENS AND P. LONG - 2011

Attempts to imagine the future of education often emphasize new


technologies ubiquitous computing devices, flexible classroom designs, and
innovative visual displays. But the most dramatic factor shaping the future of
higher education is something that we can’t actually touch or see: big data and
analytics. Basing decisions on data and evidence seems stunningly obvious, and
indeed, research indicates that data-driven decision-making improves
organizational output and productivity. For many leaders in higher education,
however, experience and “gut instinct” have a stronger pull. Meanwhile, the move
toward using data and evidence to make decisions is transforming other fields.
Notable is the shift from clinical practice to evidence-based medicine in health
care. The former relies on individual physicians basing their treatment decisions on
their personal experience with earlier patient cases.

The latter is about carefully designed data collection that builds up evidence
on which clinical decisions are based. Medicine is looking even further toward
computational modeling by using analytics to answer the simple question “who
will get sick?” and then acting on those predictions to assist individuals in making
lifestyle or health changes. Insurance companies also are turning to predictive
modeling to determine high-risk customers. Effective data analysis can produce
insight into how lifestyle choices and personal health habits affect long-term
risks. Businesses and governments, too, are jumping on the analytics and data-driven
decision-making trends, in the form of “business intelligence.”

Higher education, a field that gathers an astonishing array of data about its
“customers,” has traditionally been inefficient in its data use, often operating with
substantial delays in analyzing readily evident data and feedback. Evaluating
student dropouts on an annual basis leaves gaping holes of delayed action and
opportunities for intervention. Organizational processes—such as planning and
resource allocation—often fail to utilize large amounts of data on effective learning
practices, student profiles, and needed interventions. Something must change. For
decades, calls have been made for reform in the efficiency and quality of higher
education. Now, with the Internet, mobile technologies, and open education, these
calls are gaining a new level of urgency. Compounding this technological and
social change, prominent investors and businesspeople are questioning the time
and monetary value of higher education. Unfortunately, the crescendo of calls for
higher education reform lacks a foundation for making decisions on what to do or
how to guide change. It is here—as a framework for making learning-based reform
decisions—that analytics will have the largest impact on higher education.
2.2 REPRESENTATION AND COMMUNICATION: CHALLENGES IN
INTERPRETING LARGE SOCIAL MEDIA DATASETS - M. ROST, L.
BARKHUUS, H. CRAMER - 2013

Online services provide a range of opportunities for understanding human


behavior through the large aggregate data sets that their operation collects. Yet the
data sets they collect do not unproblematically model or mirror world events.
In this paper we use data from Foursquare, a popular location check-in service, to
argue for the importance of analyzing social media as a communicative rather than
representational system. Drawing on logs of all Foursquare check-ins over eight
weeks, we highlight four features of Foursquare’s use: the relationship between
attendance and check-ins, event check-ins, commercial incentives to check in, and
lastly humorous check-ins. These points show how large-scale data analysis is
affected by the end uses to which social networks are put.

2.3 ACADEMIC PATHWAYS STUDY: PROCESSES AND REALITIES - R.


STREVELER, AND K. SMITH - 2008

This system describes the evolution and implementation of the Academic


Pathways Study (APS), a five year, multi-institution study that addresses questions
about the education and persistence of undergraduates in engineering. The APS is
the largest element of the Center for the Advancement of Engineering Education
(CAEE), funded by NSF for the advancement of engineering learning and
teaching. Parts of this paper address questions that engineering education
researchers may have about the organizational and technical infrastructure that
supported this project, or about its overall implementation (e.g. subject
recruitment, data collection methods, and participation rates). Other parts of the
paper address questions that researchers and engineering faculty and administrators
might have regarding how to explore the findings and insights that are emerging
from this extensive longitudinal and cross-sectional study of students’ pathways
through engineering. In addition, the paper serves as a good starting point for
researchers who might be interested in doing additional analysis using APS data.
Specific topics include: research team and leadership; research design and
methodology; the four study cohorts and their respective contributions; some of the
challenges and solutions; and implications for engineering education and future
research.
2.4 ENABLING ENGINEERING STUDENT SUCCESS: THE FINAL
REPORT FOR THE CENTER FOR THE ADVANCEMENT OF
ENGINEERING EDUCATION - D. LUND - 2010

Today’s engineering graduates will solve tomorrow’s problems in a world


that is advancing faster and facing more critical challenges than ever before. This
situation creates significant demand for engineering education to evolve in order to
effectively prepare a diverse community of engineers for these challenges. Such
concerns have led to the publication of visionary reports that help orient the work
of those committed to the success of engineering education. Research in
engineering education is central to all of these visions. Research on the student
experience is fundamental to informing the evolution of engineering education. A
broad understanding of the engineering student experience involves thinking about
diverse academic pathways, navigation of these pathways, and decision points: how
students choose engineering programs, navigate through their programs, and then
move on to jobs and careers. Further, looking at students’ experiences broadly
entails not just thinking about their learning (i.e., skill and knowledge development
in both technical and professional areas) but also their motivation, their
identification with engineering, their confidence, and their choices after
graduation.

In actuality, there is not one singular student experience, but rather many
experiences. Research on engineering student experiences can look into systematic
differences across demographics, disciplines, and campuses; gain insight into the
experiences of underrepresented students; and create a rich portrait of how students
change from first year through graduation. Such a broad understanding of the
engineering student experience can serve as inspiration for designing innovative
curricular experiences that support the many and varied pathways that students
take on their way to becoming an engineer. However, an understanding of the
engineering student experience is clearly not enough to create innovation in
engineering education. We need educators who are capable of using the research
on the student experience. This involves not only preparing tomorrow’s educators
with conceptions of teaching that enable innovation but also understanding how
today’s educators make teaching decisions.
We also need to be concerned about creating the capacity to do such
research; in short, we need more researchers. One promising approach is to work
with educators who are interested in engaging in research, supporting them as they
negotiate the space between their current activities and their new work in
engineering education research. To fully support this process, we must also
investigate what is required for educators to engage in such a path.

2.5 THE STATE OF LEARNING ANALYTICS IN 2012: A REVIEW AND


FUTURE CHALLENGES - R. FERGUSON - 2012

Learning analytics is a significant area of technology‐enhanced learning that


has emerged during the last decade. This review of the field begins with an
examination of the technological, educational and political factors that have driven
the development of analytics in educational settings. It goes on to chart the
emergence of learning analytics, including their origins in the 20th century, the
development of data-driven analytics, the rise of learning-focused perspectives and
the influence of national economic concerns. It next focuses on the relationships
between learning analytics, educational data mining and academic analytics.
Finally, it sets out the current state of learning analytics research, and identifies a
series of future challenges.

2.6 MICROBLOGGING IN CLASSROOM: CLASSIFYING STUDENTS’


RELEVANT AND IRRELEVANT QUESTIONS IN A MICROBLOGGING
SUPPORTED CLASSROOM - S. CETINTAS, L. SI, H. AAGARD - 2011

Microblogging is a popular technology in social networking applications


that lets users publish online short text messages (e.g., less than 200 characters) in
real time via the web, SMS, instant messaging clients, etc. Microblogging can be
an effective tool in the classroom and has lately gained notable interest from the
education community. This paper proposes a novel application of text
categorization for two types of microblogging questions asked in a classroom,
namely relevant (i.e., questions that the teacher wants to address in the class) and
irrelevant questions. Empirical results and analysis show that using personalization
together with question text leads to better categorization accuracy than using
question text alone. It is also beneficial to utilize the correlation between questions
and available lecture materials as well as the correlation between questions asked
in a lecture. Furthermore, empirical results also show that the elimination of
stopwords leads to better correlation estimation between questions and leads to
better categorization accuracy. On the other hand, incorporating students’ votes on
the questions does not improve categorization accuracy, although a similar feature
has been shown to be effective in community question answering environments for
assessing question quality.
CHAPTER 3
SYSTEM ORGANIZATION

3.1 SYSTEM ARCHITECTURE DESIGN

Figure 3.1.1 shows the system architecture. Data is collected from the students’
social community and held on a data storage server. The stored data feeds
qualitative data analysis on one side and model training & evaluation with
computational model adaptation on the other; the combined analysis results form
a result set that is sent back to the students’ community.

Figure 3.1.1 System Architecture Design


3.2 FEATURE EXTRACTION

Features are extracted from students’ informal conversations on Twitter so that
both qualitative analysis and large-scale data mining techniques can be applied to
them. The extracted features are used to identify the issues and problems students
encounter in their learning experiences, grouped into categories such as heavy
study load, lack of social engagement, negative emotion, and sleep problems.
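One simple way to realize the feature extraction described above is a binary
bag-of-words vector over a fixed vocabulary; the vocabulary and the example
tweet below are hypothetical placeholders, not the system’s actual attribute set:

```python
import re

# Hypothetical vocabulary of problem-related keywords.
VOCAB = ["exam", "homework", "lab", "sleep", "tired", "alone", "party"]

def extract_features(tweet):
    # Binary bag-of-words vector: 1 if the vocabulary word occurs in the tweet.
    tokens = set(re.findall(r"[a-z]+", tweet.lower()))
    return [1 if w in tokens else 0 for w in VOCAB]

print(extract_features("Another exam tomorrow and I am so tired"))
```

Vectors of this form are exactly what classifiers such as Naive Bayes consume,
linking this module to the mining workflow of Chapter 1.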
CHAPTER 4
SYSTEM ANALYSIS

4.1. EXISTING SYSTEM

 The data sets collected do not unproblematically model or mirror world events.

 The aggression level of the network can be set as high or as low as desired.

 The mass media on Twitter behaves unlike traditional media networks.

 There is very limited prior work on categorizing relevant and irrelevant
questions with a ranking capability that scores each question.

 The system detects earthquakes promptly, and notification is delivered much
faster than JMA broadcast announcements.

 It focused only on an electrical engineering module titled Electronics, which
serves as the case study.

 Manual inspection is very slow, costly, and dangerous.

 An average error of 2% for predicting meme volume and 17% for predicting
meme lifespan.
4.2. PROPOSED SYSTEM

 Foursquare, a popular location check-in service, shows the importance of
analyzing social media as a communicative rather than representational
system.

 A quantitative analysis maximizes the relevance of information in networks
with multiple information providers.

 A computational framework checks, for any given topic, how necessary and
sufficient each user group is in reaching a wide audience.

 Benefits grow larger depending on the classroom size, clarity level of the
lecture, participation level of students, etc.

 Tweets are monitored to detect a target event. To detect a target event, we
devise a classifier of tweets based on features such as the keywords in a tweet,
the number of words, and context.

 Intellectual skills are developed in order to promote scholarly thinking,
relating different ideas together to form a bigger picture.
4.3. MODULES

4.3.1. Computational Attribute Management


4.3.2. Qualitative Attribute Analysis
4.3.3. Public Web Conversation Port
4.3.4. Mining Student Conversation
4.3.5. Text Preprocessing

4.3.1. Computational Attribute Management

The attribute generation module operates entirely on the administrator end. It
gathers attribute data from the admin and maintains the dataset for future
use. Once the attributes are generated, it becomes easy to analyze the thoughts of
the students based on the feedback they give through their portal. Attribute
management is thus a dataset management process that holds the information
regarding the keywords and the respective attributes for mining student
performance and learning experiences.

4.3.2. Qualitative Attribute Analysis

Once the attributes are generated, the student effective mining portal
collects information from the student end. Through this portal the
administrator can analyze the thoughts of the students. Every time a student
tweets, the data is compared with the created attribute set and the mining
process takes place to produce a qualitative summary of the student. In
particular, the qualitative analysis module helps the administrator analyze a
qualitative summary of the student’s performance.
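The comparison of each tweet against the administrator-created attribute set can
be sketched as simple keyword matching; the attribute names and keyword sets
below are hypothetical placeholders for whatever the administrator defines:

```python
# Hypothetical administrator-defined attributes and their keyword sets.
ATTRIBUTES = {
    "heavy_study_load": {"exam", "homework", "assignment", "deadline"},
    "sleep_problems": {"sleep", "tired", "insomnia", "nighter"},
}

def match_attributes(tweet, attributes=ATTRIBUTES):
    # Return every attribute whose keyword set intersects the tweet's tokens.
    tokens = set(tweet.lower().split())
    return sorted(name for name, kws in attributes.items() if tokens & kws)

print(match_attributes("another all nighter before the exam"))
```

Aggregating these matches over many tweets yields the qualitative summary per
student that this module reports to the administrator.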

4.3.3. Public Web Conversation Port

This module provides the port through which students’ public web conversations
are collected. As discussed in Section 1.1, Goffman’s theory of social performance
frames social media as a relatively informal, relaxing back-stage compared with
formal classroom settings. When students post content on social media sites, they
usually post what they think and feel at that moment, so the data collected from
these online conversations may be more authentic and unfiltered than responses to
formal research prompts.

4.3.4. Mining Student Conversation

Researchers from many diverse fields have analyzed mined
content to generate specific knowledge for their respective subject
domains. For example, Gaffney analyzes tweets with the hashtag #iranElection using
histograms, user networks, and frequencies of top keywords to quantify online
activism. Similar studies have been conducted in other fields including healthcare,
marketing and athletics, just to name a few. Analysis methods used in these studies
usually include qualitative content analysis, linguistic analysis, network analysis,
and some simplistic methods such as word clouds and histograms. In this module,
a classification model is built based on inductive content analysis. This model was
then applied and validated on a brand new dataset. Therefore, not only the insights
gained from one dataset are emphasized, but also the application of the
classification algorithm to other datasets for detecting student problems. The
human effort is thus augmented with large-scale data analysis.

4.3.5. Text Preprocessing

Many social media users employ special symbols to convey certain
meanings: # indicates a hashtag, @ indicates a user account, and RT indicates a
re-tweet. Users also sometimes repeat letters in words for emphasis, for example,
“huuungryyy”, “sooo muuchh”, and “Monnndayyy”. Besides, common stopwords such as
“a, an, and, of, he, she, it”, non-letter symbols, and punctuation also bring
noise to the text. The texts must therefore be pre-processed before the student
data can be analysed; this module performs that pre-processing.
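These cleaning steps can be sketched in C# (the language used for the implementation in Chapter 6) with regular expressions. The class name, method name, and the tiny stopword list below are illustrative, not part of the actual module:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

// Illustrative sketch of the pre-processing steps described above.
public static class TweetCleaner
{
    // A tiny illustrative stopword list; a real module would use a fuller one.
    static readonly string[] StopWords = { "a", "an", "and", "of", "he", "she", "it" };

    public static string Clean(string tweet)
    {
        string s = tweet.ToLowerInvariant();
        s = Regex.Replace(s, @"\brt\b", " ");        // drop re-tweet markers
        s = Regex.Replace(s, @"@\w+", " ");          // drop @user mentions
        s = s.Replace("#", " ");                     // keep the hashtag word, drop '#'
        s = Regex.Replace(s, @"(\w)\1{2,}", "$1$1"); // "huuungryyy" -> "huungryy"
        s = Regex.Replace(s, @"[^a-z\s]", " ");      // drop non-letter symbols
        var words = s.Split((char[])null, StringSplitOptions.RemoveEmptyEntries)
                     .Where(w => !StopWords.Contains(w));
        return string.Join(" ", words);
    }
}
```

Collapsing repeated letters to at most two keeps legitimate double letters (e.g. “too”) intact while normalizing emphatic spellings.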
CHAPTER 5

REQUIREMENT SPECIFICATION

5.1 System Requirements

5.1.1 Hardware Requirements

System : Pentium IV 2.4 GHz

Hard Disk : 40 GB

Floppy Drive : 1.44 MB

Monitor : 15" VGA Colour

Mouse : Logitech

RAM : 512 MB

5.1.2 Software Requirements

Operating System : Windows XP

Technology Used : Microsoft .NET

Backend Used : SQL Server

5.2 MICROSOFT.NET FRAMEWORK

Features of .NET

Microsoft .NET is a set of Microsoft software technologies for rapidly
building and integrating XML Web services, Microsoft Windows-based
applications, and Web solutions. The .NET Framework is a language-neutral
platform for writing programs that can easily and securely interoperate. There’s no
language barrier with .NET: numerous languages are available to the
developer, including Managed C++, C#, Visual Basic and JScript. The .NET
Framework provides the foundation for components to interact seamlessly, whether
locally or remotely on different platforms. It standardizes common data types and
communications protocols so that components created in different languages can
easily interoperate.

“.NET” is also the collective name given to various software
components built upon the .NET platform. These will be both products (Visual
Studio .NET and Windows .NET Server, for instance) and services (like
Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the
environment within which programs run. Its most important features are:

 Conversion from a low-level assembler-style language, called
Intermediate Language (IL), into code native to the platform being
executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such
features.

The following features of the .NET Framework are also worth describing:
Managed Code

Managed code is code that targets .NET and contains certain extra information -
“metadata” - to describe itself. While both managed and unmanaged code can run
in the runtime, only managed code contains the information that allows the CLR to
guarantee, for instance, safe execution and interoperability.

Managed Data

With managed code comes managed data. The CLR provides memory
allocation and deallocation facilities, and garbage collection. Some .NET
languages use managed data by default, such as C#, Visual Basic .NET and
JScript .NET, whereas others, namely C++, do not. Targeting the CLR can, depending
on the language you’re using, impose certain constraints on the features available.
As with managed and unmanaged code, one can have both managed and
unmanaged data in .NET applications - data that doesn’t get garbage collected but
instead is looked after by unmanaged code.

Common Type System

The CLR uses the Common Type System (CTS) to strictly
enforce type-safety. This ensures that all classes are compatible with each other
by describing types in a common way. The CTS defines how types work within the
runtime, which enables types in one language to interoperate with types in another
language, including cross-language exception handling. As well as ensuring that
types are only used in appropriate ways, the runtime also ensures that code doesn’t
attempt to access memory that hasn’t been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure
that you can develop managed code that can be fully used by developers using any
programming language, a set of language features and rules for using them, called
the Common Language Specification (CLS), has been defined. Components that
follow these rules and expose only CLS features are considered CLS-compliant.

THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over
7,000 types. The root of the namespace is called System; this contains basic types
like Byte, Double, Boolean, and String, as well as Object. All objects derive from
System.Object. As well as objects, there are value types. Value types can be
allocated on the stack, which can provide useful flexibility. There are also efficient
means of converting value types to object types if and when necessary.
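The value-to-object conversion mentioned above is known as boxing; the cast back is unboxing. A minimal sketch (class and method names are illustrative):

```csharp
using System;

// A value type lives on the stack; assigning it to object "boxes" a copy onto
// the managed heap, and an explicit cast back "unboxes" it.
class BoxingDemo
{
    public static int RoundTrip(int i)
    {
        object boxed = i;    // boxing: the value is copied into a heap object
        return (int)boxed;   // unboxing: explicit cast back to the value type
    }

    static void Main()
    {
        Console.WriteLine(RoundTrip(42));          // prints 42
        Console.WriteLine(((object)42).GetType()); // prints System.Int32
    }
}
```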

The set of classes is pretty comprehensive, providing collections, file,
screen, and network I/O, threading, and so on, as well as XML and database
connectivity.

The class library is subdivided into a number of sets (or namespaces),
each providing distinct areas of functionality, with dependencies between the
namespaces kept to a minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual
Studio .NET enables developers to use their existing programming skills to build
all types of applications and XML Web services. The .NET Framework supports
new versions of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and
Managed C++), but there are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and
improved language features that make it a powerful object-oriented programming
language. These features include inheritance, interfaces, and overloading, among
others. Visual Basic now also supports structured exception handling, custom
attributes, and multithreading.

Visual Basic .NET is also CLS compliant, which means that any CLS-
compliant language can use the classes, objects, and components you create in
Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just
some of the enhancements made to the C++ language. Managed Extensions
simplify the task of migrating existing C++ applications to the new .NET
Framework.

C# is Microsoft’s new language. It’s a C-style language that is
essentially “C++ for Rapid Application Development”. Unlike other languages, its
specification is just the grammar of the language. It has no standard library of its
own, and instead has been designed with the intention of using the .NET libraries
as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-
language developers into the world of XML Web Services and dramatically
improves the interoperability of Java-language programs with existing software
written in a variety of other programming languages.

ActiveState has created Visual Perl and Visual Python, which
enable .NET-aware applications to be built in either Perl or Python. Both products
can be integrated into the Visual Studio .NET environment. Visual Perl includes
support for ActiveState’s Perl Dev Kit.

Other languages for which .NET compilers are available include

 FORTRAN
 COBOL
 Eiffel
[Figure: the .NET Framework stack - ASP.NET and Windows Forms at the top, then
XML Web Services, the Base Class Libraries, the Common Language Runtime, and the
Operating System at the base]

Figure 5.2.1 .NET Framework

C#.NET is also compliant with the CLS (Common Language Specification) and
supports structured exception handling. The CLS is a set of rules and constructs
that are supported by the CLR (Common Language Runtime). The CLR is the runtime
environment provided by the .NET Framework; it manages the execution of code
and also makes the development process easier by providing services. C#.NET is a
CLS-compliant language: any objects, classes, or components created in C#.NET
can be used in any other CLS-compliant language. In addition, we can use objects,
classes, and components created in other CLS-compliant languages in C#.NET. The
use of the CLS ensures complete interoperability among applications, regardless
of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to
destroy them. In other words, destructors are used to release the resources
allocated to the object. In C#.NET this role is played by the Finalize method,
written with destructor syntax. Finalize completes the tasks that must be
performed when an object is destroyed, and it is called automatically when the
garbage collector destroys the object. Because Finalize is protected, it is
accessible only from the class it belongs to or from derived classes.
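A minimal sketch of a constructor, a finalizer (destructor syntax), and the usual Dispose pattern for deterministic cleanup; the class name and log strings are illustrative:

```csharp
using System;

// The constructor acquires a resource; Dispose releases it deterministically;
// the finalizer (~ClassName) is a safety net the GC calls if Dispose never ran.
class ResourceHolder : IDisposable
{
    public static string Log = "";

    public ResourceHolder() { Log += "ctor;"; }   // constructor: acquire resource

    public void Dispose()
    {
        Log += "dispose;";                        // deterministic release
        GC.SuppressFinalize(this);                // finalizer no longer needed
    }

    ~ResourceHolder() { Log += "finalize;"; }     // called by the GC otherwise

    static void Main()
    {
        using (new ResourceHolder()) { }          // Dispose runs at the end of the block
        Console.WriteLine(Log);                   // prints "ctor;dispose;"
    }
}
```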

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET


Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use. In C#.NET, the garbage collector
checks for the objects that are not currently in use by applications. When the
garbage collector comes across an object that is marked for garbage collection,
it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature of C#. Overloading enables us to define
multiple procedures with the same name, where each procedure has a different
set of arguments. Besides using overloading for procedures, we can use it for
constructors and properties in a class.
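A minimal sketch of method overloading: three methods share the name Area, and the compiler selects one by its argument list. The class and method names are illustrative:

```csharp
using System;

// Overload resolution picks a method by the number and types of arguments.
class Geometry
{
    public static int Area(int side) => side * side;                     // square
    public static double Area(double radius) => Math.PI * radius * radius; // circle
    public static double Area(double w, double h) => w * h;              // rectangle

    static void Main()
    {
        Console.WriteLine(Area(3));        // int overload: prints 9
        Console.WriteLine(Area(3.0, 4.0)); // two-argument overload: prints 12
    }
}
```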

MULTITHREADING:
C#.NET also supports multithreading. An application that supports
multithreading can handle multiple tasks simultaneously; we can use
multithreading to decrease the time an application takes to respond to user
interaction.
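A minimal multithreading sketch: two threads increment a shared counter concurrently, a lock keeps each increment atomic, and Join waits for both to finish. Names are illustrative:

```csharp
using System;
using System.Threading;

// Two threads run Count at the same time; without the lock, concurrent
// increments could be lost.
class ThreadDemo
{
    static int total;
    static readonly object gate = new object();

    static void Count()
    {
        for (int i = 0; i < 100000; i++)
            lock (gate) { total++; }   // the lock makes the increment atomic
    }

    public static int RunTwoThreads()
    {
        total = 0;
        var t1 = new Thread(Count);
        var t2 = new Thread(Count);
        t1.Start(); t2.Start();        // both tasks proceed simultaneously
        t1.Join(); t2.Join();          // wait for both to complete
        return total;
    }

    static void Main() => Console.WriteLine(RunTwoThreads()); // prints 200000
}
```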

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect
and handle errors at runtime. In C#.NET, we use Try…Catch…Finally
statements to create exception handlers. Using Try…Catch…Finally statements,
we can create robust and effective exception handlers to improve the
robustness of our application.
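A minimal Try…Catch…Finally sketch: the catch block handles the runtime error and the finally block runs whether or not an exception was thrown. The class name and log strings are illustrative:

```csharp
using System;

// Accessing index 10 of a 3-element array throws IndexOutOfRangeException;
// catch handles it, and finally runs in every case.
class ExceptionDemo
{
    public static string Run()
    {
        string log = "";
        int[] data = { 1, 2, 3 };
        try
        {
            log += data[10];                // throws IndexOutOfRangeException
        }
        catch (IndexOutOfRangeException)
        {
            log += "caught;";               // the handler deals with the error
        }
        finally
        {
            log += "finally";               // always executed
        }
        return log;
    }

    static void Main() => Console.WriteLine(Run()); // prints "caught;finally"
}
```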

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies
application development in the highly distributed environment of the Internet.

OBJECTIVES OF. NET FRAMEWORK

1. To provide a consistent object-oriented programming environment whether
object code is stored and executed locally, executed locally but Internet-
distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment
and versioning conflicts and guarantees the safe execution of code.

3. To eliminate the performance problems of scripted or interpreted environments.

There are different types of applications, such as Windows-based applications
and Web-based applications.
CHAPTER 6

IMPLEMENTATION AND RESULTS

6.1 CODING

REGISTRATION
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

using System.Data.SqlClient;
using System.Net;
using System.Net.Mail;

public partial class StaffRegistration : System.Web.UI.Page


{
CodeClass Ws = new CodeClass();
String S = "";

protected void Page_Load(object sender, EventArgs e)


{
if (Page.IsPostBack == false)
{
Clear();
}
}

private void IncrID()


{
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);
S = "Select Max(EmpID) From Register";
Cmd = new SqlCommand(S, Cn);
if ((Convert.ToString(Cmd.ExecuteScalar())) == "")
{
TxtUID.Text = "1";
}
else if (Convert.ToString(Cmd.ExecuteScalar()) != "")
{
TxtUID.Text =
(Convert.ToInt64(Convert.ToString(Cmd.ExecuteScalar())) + 1).ToString();
}
Cmd.Dispose();
Cn.Close();
}
protected void CmdSubmit_Click(object sender, EventArgs e)
{
String Str = "";
if (((TxtName.Text.Trim()) == ""))
{
Str = Str + "Enter Staff Name \\n";
}
if (((TxtMail.Text.Trim()) == ""))
{
Str = Str + "Enter Mail-ID\\n";
}
if (((TxtAge.Text.Trim()) == ""))
{
Str = Str + "Enter Age\\n";
}
if (((TxtDOB.Text.Trim()) == ""))
{
Str = Str + "Enter Date of Birth\\n";
}
if (((TxtMobile.Text.Trim()) == ""))
{
Str = Str + "Enter Mobile Number\\n";
}
if (((TxtQual.Text.Trim()) == ""))
{
Str = Str + "Enter Qualification\\n";
}
if (((TxtPwd.Text.Trim()) == ""))
{
Str = Str + "Enter Password\\n";
}
if (((TxtAddress.Text.Trim()) == ""))
{
Str = Str + "Enter Address\\n";
}
if (Str.Trim() != "")
{
ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('" + Str + "');", true);
return;
}
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);

S = "Select Count(*) From Register Where Mail=@UN1";


Cmd = new SqlCommand(S, Cn);
Cmd.Parameters.Add(new SqlParameter("@UN1", TxtMail.Text));
if (Convert.ToInt32((Convert.ToString(Cmd.ExecuteScalar()))) >= 1)
{
ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('Mail-ID already exists.');", true);
return;
}
Cmd.Dispose();
Cmd.Parameters.Clear();
Cn.Close();

Session["Nm"] = TxtName.Text;
Session["Age"] = TxtAge.Text;
Session["DOB"] = TxtDOB.Text;
Session["Qual"] = TxtQual.Text;
Session["Mb"] = TxtMobile.Text;
Session["Mail"] = TxtMail.Text;
Session["Pwd"] = TxtPwd.Text;
Session["Addr"] = TxtAddress.Text;

long RN = 0;
RN = Ws.RandomNumber(2332, 4548);
Session["Cd"] = RN.ToString();
Ws.MailSending(TxtMail.Text, "Admin - Security Code " + DateTime.Now +
"", "Your Security Code is : " + RN.ToString() + "");

P1.Visible = false;
P2.Visible = false;
P3.Visible = true;

}
protected void CmdReset_Click(object sender, EventArgs e)
{
Clear();
}

private void Clear()


{

P1.Visible = true;
P2.Visible = false;
P3.Visible = false;

IncrID();
TxtAddress.Text = "";
TxtAge.Text = "";
TxtDOB.Text = "";
TxtMail.Text = "";
TxtMobile.Text = "";
TxtName.Text = "";
TxtPwd.Text = "";
TxtQual.Text = "";
}
protected void CmdCheck_Click(object sender, EventArgs e)
{
if (Session["Cd"].ToString() == TxtSecCd.Text.ToString())
{
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);

S = "Insert Into Register Values(@ID,@Name,@Age,@DOB,@Qual,@Mb,@Mail,@Pwd,@Addr,0,1)";
Cmd = new SqlCommand(S, Cn);
Cmd.Parameters.Add(new SqlParameter("@ID", TxtUID.Text));
Cmd.Parameters.Add(new SqlParameter("@Name",
Session["Nm"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@Age",
Session["Age"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@DOB",
Session["DOB"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@Qual",
Session["Qual"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@Mb",
Session["Mb"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@Mail",
Session["Mail"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@Pwd",
Session["Pwd"].ToString()));
Cmd.Parameters.Add(new SqlParameter("@Addr",
Session["Addr"].ToString()));
Cmd.ExecuteNonQuery();
Cmd.Dispose();
Cmd.Parameters.Clear();
Cn.Close();
Clear();

P1.Visible = false;
P3.Visible = false;
P2.Visible = true;

}
else
{
ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('Invalid Security Code');", true);
}
}
}
ATTRIBUTE MASTER
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

using System.Data.SqlClient;

public partial class AttributeMaster : System.Web.UI.Page


{
CodeClass Ws = new CodeClass();
String S = "";

protected void Page_Load(object sender, EventArgs e)


{
if (Page.IsPostBack == false)
{
Clear();
}
}

private void IncrID()


{
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);
S = "Select Max(AID) From Attribute";
Cmd = new SqlCommand(S, Cn);
if ((Convert.ToString(Cmd.ExecuteScalar())) == "")
{
TxtAID.Text = "1";
}
else if (Convert.ToString(Cmd.ExecuteScalar()) != "")
{
TxtAID.Text =
(Convert.ToInt64(Convert.ToString(Cmd.ExecuteScalar())) + 1).ToString();
}
Cmd.Dispose();
Cn.Close();
}
protected void CmdReset_Click(object sender, EventArgs e)
{
Clear();
}

private void Clear()


{
IncrID();
Disp();
TxtAttribute.Text = "";
}
protected void CmdSubmit_Click(object sender, EventArgs e)
{
if (TxtAttribute.Text.Trim() == "")
{
ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('Enter Attribute');", true);
return;
}
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);

S = "Select Count(*) From Attribute Where Attr=@UN1";


Cmd = new SqlCommand(S, Cn);
Cmd.Parameters.Add(new SqlParameter("@UN1", TxtAttribute.Text));
if (Convert.ToInt32((Convert.ToString(Cmd.ExecuteScalar()))) >= 1)
{
ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('Attribute already exists.');", true);
return;
}
Cmd.Dispose();
Cmd.Parameters.Clear();

S = "Insert Into Attribute Values(@AID,@A1,1)";


Cmd = new SqlCommand(S, Cn);
Cmd.Parameters.Add(new SqlParameter("@AID", TxtAID.Text));
Cmd.Parameters.Add(new SqlParameter("@A1", TxtAttribute.Text));
Cmd.ExecuteNonQuery();
Cmd.Dispose();
Cmd.Parameters.Clear();
Cn.Close();
Clear();
}

private void Disp()


{
DG.DataSource = null;
DG.DataBind();
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);
SqlDataReader dr = default(SqlDataReader);
DataTable dt = new DataTable();
DataRow row = null;

dt.Columns.Add("a");
dt.Columns.Add("b");

S = "Select * From Attribute Where Active=1";


Cmd = new SqlCommand(S, Cn);
dr = Cmd.ExecuteReader();
while (dr.Read())
{
row = dt.NewRow();
row[0] = dr["AID"].ToString();
row[1] = dr["Attr"].ToString();
dt.Rows.Add(row);
}
dr.Close();
Cmd.Dispose();

DG.DataSource = dt;
DG.DataBind();

int Flg = 0;
int i = 0;

for (i = 0; i <= dt.Rows.Count - 1; i++)


{
DG.Rows[i].Cells[0].Text = dt.Rows[i][0].ToString();
DG.Rows[i].Cells[1].Text = dt.Rows[i][1].ToString();
}
Cn.Close();
}
protected void DG_RowCommand(object sender, GridViewCommandEventArgs e)
{
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);

S = "Update Attribute Set Active=0 Where AID=" + Convert.ToInt32(Convert.ToString(DG.Rows[Convert.ToInt16(e.CommandArgument)].Cells[0].Text)) + "";
Cmd = new SqlCommand(S, Cn);
Cmd.ExecuteNonQuery();
Cmd.Dispose();
Cn.Close();

Disp();
}
}
EDIT PERSONAL DETAILS
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.Net;
using System.Net.Mail;
public partial class EditStudentDetails : System.Web.UI.Page
{
CodeClass Ws = new CodeClass();
String S = "";

protected void Page_Load(object sender, EventArgs e)


{
if (Page.IsPostBack == false)
{
LblDisp.Text = Session["UN"].ToString();
Session["UN"] = LblDisp.Text;
Disp();
}
}

private void Disp()


{
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);
SqlDataReader dr = default(SqlDataReader);

S = "Select * From StuDtls Where Mail=@M1 And Active=1";


Cmd = new SqlCommand(S, Cn);
Cmd.Parameters.Add(new SqlParameter("@M1", LblDisp.Text));
dr = Cmd.ExecuteReader();
if (dr.Read())
{
TxtAge.Text = dr["Age"].ToString();
TxtCity.Text = dr["City"].ToString();
TxtCountry.Text = dr["Country"].ToString();
TxtCQual.Text = dr["CurQual"].ToString();
TxtDOB.Text = dr["DOB"].ToString();
TxtDOJ.Text = dr["DOJ"].ToString();
TxtFName.Text = dr["FName"].ToString();
TxtHighQual.Text = dr["HighQual"].ToString();
TxtLastStd.Text = dr["LastStudied"].ToString();
TxtMail.Text = dr["Mail"].ToString();
TxtMName.Text = dr["MName"].ToString();
TxtMobile.Text = dr["MbNo"].ToString();
TxtName.Text = dr["SName"].ToString();
TxtSID.Text = dr["SID"].ToString();
TxtState.Text = dr["State"].ToString();
}
dr.Close();
Cmd.Dispose();
Cn.Close();
}
protected void CmdSubmit_Click(object sender, EventArgs e)
{
String Str = "";
if (((TxtName.Text.Trim()) == ""))
{
Str = Str + "Enter Student Name \\n";
}
if (((TxtFName.Text.Trim()) == ""))
{
Str = Str + "Enter Father's Name \\n";
}
if (((TxtMName.Text.Trim()) == ""))
{
Str = Str + "Enter Mother's Name \\n";
}
if (((TxtAge.Text.Trim()) == ""))
{
Str = Str + "Enter Age\\n";
}
if (((TxtDOB.Text.Trim()) == ""))
{
Str = Str + "Enter Date of Birth\\n";
}
if (((TxtHighQual.Text.Trim()) == ""))
{
Str = Str + "Enter Student's Highest Qualification\\n";
}
if (((TxtLastStd.Text.Trim()) == ""))
{
Str = Str + "Enter Student's Last Studied Institution Name\\n";
}
if (((TxtCQual.Text.Trim()) == ""))
{
Str = Str + "Enter Student's Current Qualification\\n";
}
if (((TxtDOJ.Text.Trim()) == ""))
{
Str = Str + "Enter Date of Joining\\n";
}
if (((TxtMobile.Text.Trim()) == ""))
{
Str = Str + "Enter Mobile Number\\n";
}
if (((TxtMail.Text.Trim()) == ""))
{
Str = Str + "Enter Mail-ID\\n";
}
if (((TxtCity.Text.Trim()) == ""))
{
Str = Str + "Enter City\\n";
}
if (((TxtState.Text.Trim()) == ""))
{
Str = Str + "Enter State\\n";
}
if (((TxtCountry.Text.Trim()) == ""))
{
Str = Str + "Enter Country\\n";
}
if (Str.Trim() != "")
{
ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('" + Str + "');", true);
return;
}
SqlConnection Cn = new SqlConnection(Ws.Con);
Cn.Open();
SqlCommand Cmd = default(SqlCommand);

S = "Update StuDtls Set SName=@Nm,FName=@FNm,MName=@MNm,Age=@Age,DOB=@DOB,HighQual=@HQ,LastStudied=@LS,MbNo=@Mb,City=@City,State=@St,Country=@Cntry,CurQual=@CQ Where SID=@ID";
Cmd = new SqlCommand(S, Cn);
Cmd.Parameters.Add(new SqlParameter("@ID", TxtSID.Text));
Cmd.Parameters.Add(new SqlParameter("@Nm", TxtName.Text));
Cmd.Parameters.Add(new SqlParameter("@FNm", TxtFName.Text));
Cmd.Parameters.Add(new SqlParameter("@MNm", TxtMName.Text));
Cmd.Parameters.Add(new SqlParameter("@Age", TxtAge.Text));
Cmd.Parameters.Add(new SqlParameter("@DOB", TxtDOB.Text));
Cmd.Parameters.Add(new SqlParameter("@HQ", TxtHighQual.Text));
Cmd.Parameters.Add(new SqlParameter("@LS", TxtLastStd.Text));
Cmd.Parameters.Add(new SqlParameter("@Mb", TxtMobile.Text));
Cmd.Parameters.Add(new SqlParameter("@City", TxtCity.Text));
Cmd.Parameters.Add(new SqlParameter("@St", TxtState.Text));
Cmd.Parameters.Add(new SqlParameter("@Cntry", TxtCountry.Text));
Cmd.Parameters.Add(new SqlParameter("@CQ", TxtCQual.Text));
Cmd.ExecuteNonQuery();
Cmd.Dispose();
Cmd.Parameters.Clear();
Cn.Close();

ClientScript.RegisterStartupScript(this.GetType(), "myalert",
"alert('Personal Details Updated Successfully');", true);
Clear();
}
protected void CmdReset_Click(object sender, EventArgs e)
{
Clear();
}

private void Clear()


{
Disp();
}
}
6.2 SCREEN SHOTS

6.2.1 Main Page

Figure 6.2.1 Main Page


6.2.2 Registration
Figure 6.2.2 Registration
6.2.3 Administrator Login

Figure 6.2.3 Administrator Login


6.2.4 Attribute Master
Figure 6.2.4 Attribute Master
CHAPTER 7
CONCLUSION

Our study is beneficial to researchers in learning analytics, educational data


mining, and learning technologies. It provides a workflow for analyzing social
media data for educational purposes that overcomes the major limitations of both
manual qualitative analysis and large-scale computational analysis of user-
generated textual content. Our study can inform educational administrators,
practitioners and other relevant decision makers to gain further understanding of
engineering students’ college experiences. As an initial attempt to instrument the
uncontrolled social media space, we propose many possible directions for future
work for researchers who are interested in this area. We hope to see a proliferation
of work in this area in the near future. We advocate that great attention needs to be
paid to protect students’ privacy when trying to provide good education and
services to them.
CHAPTER 8
REFERENCES

[1] G. Siemens and P. Long, “Penetrating the fog: Analytics in learning and
education,” Educause Review, vol. 46, no. 5, pp. 30-32, 2011.
[2] M. Rost, L. Barkhuus, H. Cramer, and B. Brown, “Representation and
communication: challenges in interpreting large social media datasets,” in
Proceedings of the 2013 conference on Computer supported cooperative work,
2013, pp. 357–362.
[3] M. Clark, S. Sheppard, C. Atman, L. Fleming, R. Miller, R. Stevens, R.
Streveler, and K. Smith, “Academic pathways study: Processes and realities,” in
Proceedings of the American Society for Engineering Education Annual
Conference and Exposition, 2008.
[4] C. J. Atman, S. D. Sheppard, J. Turns, R. S. Adams, L. Fleming, R. Stevens, R.
A. Streveler, K. Smith, R. Miller, L. Leifer, K. Yasuhara, and D. Lund, “Enabling
engineering student success: The final report for the Center for the Advancement
of Engineering Education,” Morgan & Claypool Publishers, Center for the
Advancement of Engineering Education, 2010.
[5] R. Ferguson, “The state of learning analytics in 2012: A review and future
challenges,” Knowledge Media Institute, Technical Report KMI-2012-01, 2012.
[6] R. Baker and K. Yacef, “The state of educational data mining in 2009: A
review and future visions,” Journal of Educational Data Mining, vol. 1, no. 1, pp.
3–17, 2009.
[7] S. Cetintas, L. Si, H. Aagard, K. Bowen, and M. Cordova-Sanchez,
“Microblogging in Classroom: Classifying Students’ Relevant and Irrelevant
Questions in a Microblogging-Supported Classroom,” Learning Technologies,
IEEE Transactions on, vol. 4, no. 4, pp. 292–300, 2011.
