
The framework and methodology used here is the balanced scorecard, where the manager of the CRM department is required to evaluate their own performance, analyse problems in the organisation, and rate themselves on a scale of 1-10.

SELF-EVALUATION FORM

S. No. | Characteristic | Rating (1-10) | Self-evaluation
1 | Ranking | 8 | I would rank myself 3rd among the 8 team members.
2 | Problem-solving skills | 9 | If a customer faces any error with a product, I make sure they are pleased with my service.
3 | Setting goals | 9 | I try my best to ensure the target is achieved by providing premium customer satisfaction.
4 | Customer service | 8 | I give full support to customers when they are looking for a particular product.
5 | Communication | 9 | I make sure customers are satisfied with the way I communicate, and I try to explain their product to them in their preferred local language.
6 | Building trust | 8 | I ensure the customer believes I will rectify the product and return it to him; in this way the customer builds trust in me.

Focus group discussion


Conversion into null and alternate hypothesis
Identification of research type - causal, exploratory, etc.
Techniques used for generating research ideas
Converting management problem into research problem, research design, research variables
and potential challenges
Sampling techniques to be used
Factors to be considered for research design

It has been a few years since GST was implemented, and its benefits can be seen when a poor person opines that, because of GST, the prices of various items essential to him have come down and commodities have become cheaper. Doing business has become much easier. Most important of all, customers' trust in traders is increasing. GST has also impacted the transport and logistics sector: the movement of trucks has increased, the time required to cover distances has come down drastically, and highways have become clutter-free. Earlier, because of multiple tax structures, most of the resources of the transport and logistics sector were expended on maintaining paperwork, which also led to the need to construct new warehouses in each state. GST has been called a 'Good and Simple Tax'. It has produced a significant positive effect on our economy in a very short span; the speed of the smooth transition, along with rapid migration and new registrations, has instilled a new sense of confidence in the entire country.

            d. Identify the potential challenges/problems related to the current research. (CO2/L3)

Question 1: A Higher Education Institution (HEI) has an automated library that


meticulously handles library transactions and has good learning content in the form of
books, CD/DVDs, magazines, journals, several online references etc. In this Institution, most
of the data is stored in MS Excel spreadsheet format. Some of the data is available in the
manual form. The librarian is facing a problem of analyzing the huge volume of data which
is generated due to the transactions as this HEI has about 20000 students and 600 faculty
members. The librarian would also like to track the borrowing of the various learning
content by the users on a weekly basis. 

a) Outline some of the analytics solutions that you would like to suggest the librarian. (4
marks)
b) What are the challenges you would face while implementing this solution? (6 marks) 

a.) There are various types of analytics used at different stages of the data flow. Some
of the analytics solutions that I would suggest to the librarian include descriptive
analytics, predictive analytics and prescriptive analytics. These analytics solutions are
interrelated with each other. With the help of these analytical tools, the librarian can
draw insights about students and faculty members and track the borrowing of learning
content by all users.

i. Descriptive analytics-
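As a rough illustration of descriptive analytics for this case, the weekly borrowing count the librarian wants could be summarised as below; the transaction records and field names are hypothetical:

```python
from collections import Counter
from datetime import date

# Hypothetical borrowing transactions: (user_id, item_title, borrow_date)
transactions = [
    ("s001", "Data Mining Concepts", date(2021, 1, 4)),
    ("f012", "IEEE Spectrum", date(2021, 1, 6)),
    ("s002", "Data Mining Concepts", date(2021, 1, 12)),
    ("s001", "Operations Research", date(2021, 1, 14)),
]

# Group borrowings by ISO week number to answer "how much was borrowed each week?"
weekly_counts = Counter(d.isocalendar()[1] for _, _, d in transactions)
print(dict(weekly_counts))  # e.g. {1: 2, 2: 2}
```

Predictive and prescriptive analytics would then build on descriptive summaries like this one.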

Doing anything without the permission of its owner is not right, and is termed unethical. When online users' clicks, search behaviour and personal data are captured and monitored by analytics companies without consent, that is unethical; with permission it can be considered ethical, since the user then knows that his data is being recorded and studied.
These companies collect data from people's usage, store it for analysis, and then create strategies to target each individual with that data. This intrudes on a person's privacy and is an illegal practice that is termed unethical. Individuals' web-surfing activities are logged by these companies: which sites the individual has visited, how much time they spent on those sites, what kind of material they downloaded; all their activities are tracked and recorded.
Suppose a person working in a company uses his or her office laptop for Yahoo searches, and those searches are tracked by the company. He could be threatened with dismissal because of the searches he did, since the details of his surfing behaviour have been saved from server logs. This is a hindrance to privacy and unethical behaviour.
Suppose a person visits an e-commerce site to purchase something. His every click is recorded by the analytics team. They look at the customer's preferences, his buying behaviour, what he buys over a period of time, his choices, his age group, and so on. Even if the customer does not buy anything, he is repeatedly shown advertisements for the items he viewed on the site, in order to persuade him to purchase those products.
The company stores this data in order to analyse buying patterns, and also to see whether its site is getting sufficient traffic.
Suppose a person has a medical issue for which his company pays reimbursement. If he later gets an opportunity to travel far for some work, he may not be allowed to go, since the company has a record of his data. This is unethical behaviour in an organisation or a business.

Business problems and Introduction to the Case:


The case deals with a leading telecom service provider named ABC Limited. The company has a huge customer base spread across multiple services. The back-office operations at ABC Limited deal with receiving customer service requests related to telecommunication services. Recently, due to a spike in the number of requests, the company is facing a huge challenge in handling the increased service-request volume and is unable to provide quick resolution to its large customer base.
CRISP-DM Framework Solution:
CRISP-DM (Cross-Industry Standard Process for Data Mining) is a methodology used to mine data from a firm for the purpose of analysis and better decision support. It involves the following steps:
Business Understanding:
Global Tech Analytics Consulting Company must understand the project objective before working with ABC Limited and must also analyse its client's business model. This step clearly sets the goals for Global Tech, as it involves an in-depth analysis of the problem and of the various alternatives for handling the issue, and directs the research towards a better outcome.

Data Understanding:
This process involves the collection of relevant data from ABC Limited, such as the total customer base, the customer-request transaction processing units, the number of employees who attend to customer requests on a daily basis, and how many requests are processed by the company on average. By collecting this data, Global Tech can gain more insight into the solutions it can provide and how to go about solving the crisis for ABC Limited.

Data Preparation:
The collected data may not be reliable, as it can be tampered with or disorganised. The gathered data must therefore be organised and integrated; variables that might affect one another are standardised to give overall solutions; and the data is cleaned of errors and noise, analysed, and interpreted for information and knowledge discovery. This stage also covers missing-value analysis, data imputation, and outlier detection and treatment. Finally, the data is passed into decision support systems for outcomes.
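A minimal sketch of two of the preparation steps named above (mean imputation of missing values and a simple standard-deviation outlier rule), on hypothetical daily request counts; real projects would use more robust methods:

```python
# Hypothetical daily request counts; None marks a missing value.
daily_requests = [120, 135, None, 128, 900, 131, None, 126]

# Missing-value imputation: replace None with the mean of the observed values.
observed = [x for x in daily_requests if x is not None]
mean = sum(observed) / len(observed)
imputed = [x if x is not None else mean for x in daily_requests]

# Outlier detection: flag values more than 2 standard deviations from the mean.
var = sum((x - mean) ** 2 for x in observed) / len(observed)
std = var ** 0.5
outliers = [x for x in observed if abs(x - mean) > 2 * std]
print(outliers)  # the spike of 900 stands out
```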

Modeling:
The kind of analysis to be applied to the processed data, and the assumptions of the model, are selected within a cost- and time-effective methodology. Interpretations and outcomes from the methodology and model are framed from the business point of view of ABC Limited and the telecommunication sector.

Evaluation and Deployment:


The model outcomes are rechecked and tested on a small scale. On success, the model is deployed to ABC Limited to deal with its inability to attend to the sudden spike in customer requests; on failure, it is re-engineered and tested until it succeeds.
The entire process is budgeted beforehand by the finance team, and once the CRISP-DM technology solution is deployed to ABC by Global Tech, the payment is received, closing the project.

Computer algorithms that improve automatically through experience, without requiring an external program, are called machine learning. It is a subset of artificial intelligence and requires initial data input for automated learning and outcome generation. In the case of ABC Limited, once the data has been structured, cleaned of noise and treated for missing values, it can be fed into a machine learning model. The system will assist in the following ways:
There are visualization capabilities to understand the upcoming traffic.
The resources can be planned well in advance, anticipating the estimated traffic.
Wastage of resources can be prevented, that is resources can be allocated intelligently based
on the traffic forecast provided with the help of ML.
Data can be used to understand the request patterns that can be essential to expand the
business.
It can be used to understand the spam request patterns.
The customer requests can be segmented to different customer response teams instead of
dealing with them from a central system.
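The traffic-forecast idea above can be sketched with an ordinary least-squares trend line over hypothetical weekly request volumes; a production system would use a proper time-series model:

```python
# Fit a least-squares trend line to assumed weekly request volumes
# and project the next week's volume.
volumes = [1000, 1100, 1250, 1300, 1450, 1500]  # hypothetical historical data
n = len(volumes)
xs = list(range(n))

# Ordinary least squares for the slope and intercept of volume ~ week index.
mean_x = sum(xs) / n
mean_y = sum(volumes) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, volumes))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Forecast for the next (unseen) week index.
next_week = slope * n + intercept
print(round(next_week))
```

A rising slope would tell the resource planners to provision extra request-handling capacity in advance.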

Existing Solutions to the Big Data Challenges

3.1. Potential Solutions for the Data Volume Challenge [1]

3.1.1. Hadoop
Tools like Hadoop are great for managing massive volumes of structured, semi-structured and unstructured data. However, being a new technology, Hadoop is unfamiliar to many professionals; adopting it requires substantial resources for learning, which eventually diverts attention from solving the main problem towards learning Hadoop.

3.1.2. Visualization
Another way to perform analyses and reporting, although the granularity of the data sometimes makes it difficult to access the level of detail needed.

3.1.3. Robust Hardware
Also a good way to handle the volume problem: it enables increased memory and powerful parallel processing to chew through high volumes of data swiftly.

3.1.4. Grid Computing
Grid computing consists of a number of servers interconnected by a high-speed network, each playing one or many roles. Its two main benefits are high storage capability and processing power, which translate to data and computational grids.

3.1.5. Spark
Platforms like Spark use in-memory computing to create huge performance gains for high-volume and diversified data.

All these approaches allow firms and organizations to explore huge data volumes and gain business insights from them. There are two possible ways to deal with the volume problem: we can either shrink the data or invest in good infrastructure, and based on our budget and requirements we can select among the technologies and methods described above. If we have resources with Hadoop expertise, we can always use it.

3.2. Potential Solutions for the Data Variety Problem [2]

3.2.1. OLAP Tools (On-line Analytical Processing Tools)
Data processing can be done using OLAP tools, which establish connections between pieces of information and assemble the data in a logical way so that it can be accessed easily. OLAP specialists can achieve high speed and low lag when processing high-volume data. One drawback is that OLAP tools process all the data provided to them, whether relevant or not.

3.2.2. Apache Hadoop
Open-source software whose main purpose is to manage huge amounts of data in a very short span of time with great ease. Hadoop divides the data among multiple systems for processing, and a map of the content is created so that it can be easily accessed and found.

3.2.3. SAP HANA
SAP HANA is an in-memory data platform deployable as an on-premise appliance or in the cloud. It is well suited to performing real-time analytics and to developing and deploying real-time applications. New database and indexing architectures make sense of disparate data sources swiftly.

3.2.4. Redundant Physical Infrastructure
A sound IT infrastructure greatly enhances a Big Data implementation once all the requirements have been determined against each of the following criteria:
Performance: system performance and cost are directly proportional; as system performance increases, the cost of the infrastructure also increases.
Availability: to keep the system running 24 hours a day, every day, we need the availability of expensive infrastructure.
Scalability: we need to take care of storage capacity and computing power to enhance scalability.

We can resolve the variety problem using ETL tools, visualization tools and OLAP tools, and by having robust infrastructure. It is hard to say whether just one of them could resolve the problem alone, whether more than one is needed, or whether some algorithm could synchronise the data varieties into a uniform format; it depends on the particular case or problem. We cannot have a generalised solution for all.

A development role in DSS, under an open or community source model, would be advantageous to the library community, specifically enabling:

maximization of local data reserves,
effective use and development of domain expertise,
financial and functional sustainability, and
the infrastructure required for collaborative research and development.

Community-sourcing does not exclude commercial interests, but changes the fundamental
dynamics of the library market, allowing vendors and libraries to forge new relationships
around the support of software and the extension of that intellectual property for the best
interests of the community. Open development of a metrics framework insulates libraries
from a destabilizing reliance on vendors for product development and support, while also
building a knowledge base that strengthens intra- and inter-institutional cooperation around
strategic problems. Open development can also spur competency-building within the library
community, encouraging the acquisition of statistical skills and creating professional
opportunities around data modeling, metadata design, and data governance, in addition to
statistical methods and presentation.
MetriDoc provides simple tools to extract useful information from various data
sources, transform, resolve and consolidate that data, and finally store it in a repository.
The repository is comprised of various storage mechanisms to make it easy to extract data for
reports and statistical processes. With this in mind, the Penn Libraries are designing
MetriDoc to meet the following requirements:
create a simple framework that handles the complexities of extracting, resolving and storing
data
provide hooks into the framework so that non-enterprise programmers can use MetriDoc with a
combination of scripting languages, XML and project schemas
create reusable solutions specific to the library space, such as extracting data from popular
ILS systems, handling COUNTER data, resolving EZproxy logs, etc.
follow best practices when storing and curating data in the repository to enable the widest
possible distribution of decision-support information so that data analysis can become a
routine and continuous facet of organizational administration and culture.

RDBMS - Relational Database Management System

In an RDBMS, data is indexed to enable faster search and retrieval. An index is defined on the basis of some value in the data; it is an identifier that represents a larger record in the data set. In the absence of an index, the whole data set/document must be scanned to retrieve the desired information.
In the case of unstructured data too, indexing helps in searching and retrieval. Unstructured data is indexed based on text or on some other attribute, e.g. a file name. Indexing unstructured data is difficult because this data neither has any pre-defined attributes nor follows any pattern or naming conventions. Text can be indexed on a text string, but non-text files, e.g. audio/video, cannot be indexed this way.
Unstructured data may be stored in relational databases that support BLOBs (Binary Large Objects). While unstructured data such as a video or image file cannot be stored neatly in a relational column, there is no such problem when it comes to storing its metadata, such as the date and time of its creation, the owner or author of the data, etc.

The challenges faced are: deriving meaning, file formats, classification/taxonomy, interpretation, tags and indexing.
These can be overcome by measures such as tags, text mining, application platforms, classification/taxonomy and naming conventions.
The problem which the librarian is facing can easily be rectified using a library management
system.
Solutions
Implementing a SQL-based relational database management system instead of the Excel format, so that a simple query can retrieve most of the information a person is looking for: the data is structured and can be retrieved easily.
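A minimal sketch of this idea using SQLite; the schema and field names here are assumptions, not the librarian's actual data, and the index on the borrow date supports the weekly tracking requirement:

```python
import sqlite3

# Hypothetical loans table for the library, held in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE loans (
    user_id TEXT, item_title TEXT, item_type TEXT, borrow_date TEXT)""")
conn.execute("CREATE INDEX idx_borrow_date ON loans(borrow_date)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?, ?)",
    [("s001", "Data Mining Concepts", "book", "2021-01-04"),
     ("f012", "IEEE Spectrum", "journal", "2021-01-06"),
     ("s002", "Data Mining Concepts", "book", "2021-01-12")])

# One query answers "how many items of each type were borrowed per week?"
rows = conn.execute("""
    SELECT strftime('%W', borrow_date) AS week, item_type, COUNT(*)
    FROM loans GROUP BY week, item_type""").fetchall()
print(rows)
```

The same query in Excel would need manual filtering; here it stays a one-liner even at 20,000 students' worth of transactions.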
Once the data is computerized, the librarian should establish a data warehouse. A data warehouse helps keep track of the borrowing behaviour of students and faculty: which books are being issued and for how long.
Data marts: a data mart is a scaled-down version of a data warehouse. It can be used to look into specific aspects, such as tracking the borrowing of the various learning content by users on a weekly basis. This method of focusing on a particular field of the data is called slicing and dicing the data.
The data warehouse also helps track patterns and trends, which helps the librarian understand which books are issued most, around which period of the month books are issued for the longest, how often books are not returned, and so on. Such information is very useful in giving both an overview of the functioning of the library and the specific details.
Adding a barcode and a unique Library Management System number makes it possible to tag, identify and therefore categorise the different books, CD/DVDs, magazines, journals and online references.
Provide student and staff logins to make the online resources available to them.
The library management system would have many use cases. Each book, CD/DVD, magazine, journal and online reference will have a separate, unique Library Management System number, and by entering it any library member should be able to search for items by title, author, subject category and publication date. Finally, OLAP and OLTP can be integrated into this framework to perform simple and complex data analysis.

B.
The unique identifying number should contain other details, including a rack number that will assist in finding the book physically. But if someone takes the book from one rack and manually places it on another without the system's knowledge, the book's location cannot be detected; the librarian will then have to search for the book manually.
When there is more than one edition of the same item in the library, the unique number must differ for each, and the system should be able to track which copy was borrowed by which person.
The librarian is primarily responsible for introducing and updating books, book objects and users. Manual operations should also remain available to the librarian in case of a system crash.
The system should be able to retrieve information about a single library member, such as who took a particular book or which books are checked out.
The system should be able to collect fines on books returned after the due date. It should be possible for members to reserve books that are not currently available. The system should be able to send reminders whenever reserved books become available and when a book is not returned by the due date.
There will be a special barcode on each book and member pass, and the system will be able to read the barcodes on books and on members' library cards.
OLAP and OLTP data analysis can be done only if the entire system is in an RDBMS format; manual entry has to stop, and every transaction (in this case borrowing, renewing, etc.) has to be computerised.
Additional problems faced by the library management system are: deriving meaning, file formats, classification/taxonomy, interpretation, tags and indexing.
These can be overcome by measures such as tags, text mining, application platforms, classification/taxonomy and naming conventions.
Introduction:
The case portrays the protagonist, Diana, setting out to buy herself a suit for an interview she is due to attend in the coming days. She first surfs online to get an idea of dresses for interviews, then decides to go to a retail store to check the dress herself for the right fit and the fabric she expects. She takes her highly fashionable friend Veneela along for the shopping. Remembering an ad from the Times of India newspaper, Diana visits a nearby mall and tries on 3 outfits chosen on her own and her friend's judgement, along with the salesperson's suggestions. She buys one of the 3 dresses after the trial, and on the way to billing she is nudged by the salesperson into buying a scarf she had barely noticed.

From the above case, Analyze the stages in the consumer buying process.
Identify the Need:
Any purchase process begins predominantly with identifying the need for buying. In this context, Diana tries on her 3-year-old coat and finds it worn out and unfashionable. She identifies the need to buy a suit so that she looks presentable at the interview for the banking job she has been called for. She is convinced that good attire will make the right first impression for her.
Search for Information and Evaluation of Alternatives:
After identifying the need to buy, Diana begins to search for information on where she can get the perfect suit for her interview. She first surfs online to get an idea of the best suits to wear for an impressive interview. Remembering an ad for women's suits from Show Offs in the Times of India newspaper, she then decides to go to the nearby Central Mall, where she can buy a dress in the right fit and her preferred fabric.
Point of Incidence:
The purchase place where Diana and her friend come into contact with the products they have been gathering information about is termed the "point of incidence." Customers usually try out the products to gain better insight into them, and are also influenced by the salesperson and by comments from other customers into buying something they never intended to; Diana, for example, was persuaded into buying the second of the three dresses she tried on, plus a scarf for her suit, based on compliments from her friend, the salesperson and a passer-by.
Purchase Decision and Purchase:
If the product (in this case the suit) is satisfactory for the price, the customer (Diana) decides to purchase it, keeping in mind the outcomes from all the other alternatives seen before coming to this particular point of incidence. If it seems to be of the best interest and value for money, the product is bought and the transactional side of the purchase is complete.
Post-Purchase Experience:
It is very natural for Diana to expect a service such as dry cleaning from the shop after the purchase she has made there. The purchase cycle is complete only after feedback and post-purchase services, which are very important in customer-retention strategies. If Diana makes the right impression at her interview, she will definitely recommend the shop and the suit to her friends; if she finds defects in the fabric, she will register negative feedback and expect her money back from the store.

Analyzing the marketing environment for the cosmetic products industry or skincare industry
in India.

The cosmetics industry in India is worth Rs. 1,197.495 crores, but the market for skincare products is at a nascent stage. The business is expanding at very high growth rates, with increasing progress in technology, growing awareness amongst consumers, and new price and promotion concepts. With the adoption of affluent lifestyles, the availability of a greater number of choices in the market, rising incomes among the general population, and the many social media platforms where people can now follow their favourite celebrities and idols more closely, consumers are taking an interest in personal grooming. The cosmetics market is therefore booming.

SWOT ANALYSIS helps us to learn the industry's internal and external
factors.

Strengths:

Producing high-quality products.

Giving widespread promotion and ensuring availability.

Producing Safe skin products.

Having Pioneer Advantage


Weaknesses:

It’s a highly competitive market.

Difficulty in creating a high adoption rate for new products.

High promotion cost.

High corporate and legal expenses.

Opportunities:

Having products with many variations that can serve any customer segment.

Having products for both genders.

Threats:

The threat of competitors copying the product.

Differences in tastes and preferences among customers of the same segment.

Following Government rules and laws.

PEST ANALYSIS ( external macro-environment)

PEST analysis gives insight into an overall environmental scan.


POLITICAL FACTORS

TAX POLICY: the companies have to pay a tax rate of 30%, the highest corporate tax rate. This leads to a reduction in the companies' profits.

TRADE RESTRICTIONS & TARIFFS: the companies have to pay charges for crossing state borders to deliver finished products and to purchase raw materials.

ECONOMIC FACTORS

INFLATION RATE: the inflationary period is affecting the companies, as raw materials, equipment, etc. have to be purchased at higher prices.

INTEREST RATES: the company has to pay 60 lakhs per year as tax, and the rate of interest on its corporate loan is 12%.

SOCIAL FACTORS
Demographic and cultural aspects form a part of the social factors.

HEALTH CONSCIOUSNESS: keeping consumer safety and responsibility in mind, the company has used chemicals that are not harmful to the skin.

ENVIRONMENTAL NORMS: keeping the environmental norms in mind, the company has converted 30% of its land into a green area and uses eco-friendly paper for packaging.

TECHNOLOGICAL FACTORS
Technological factors include innovation and research and development.

RESEARCH AND DEVELOPMENT: the company will have to spend a lot on research and development so that its products are chemical-free and it holds a competitive advantage in the market. The company will thus have to continue investing in R&D activities to keep a firm hold on the market.
However, market penetration in both urban and rural areas is relatively low.

The main reason behind this is the socio-cultural aspect: home-made and traditional products are still preferred by people in India to cure skin ailments.
The other reasons are:
1) culture; 2) language; 3) religion; 4) level of education; 5) customer preferences; and
6) the attitude of society towards foreign goods and services.
Indian Railways was facing a huge funding problem that it could not solve through FDI, because FDI was not permitted for building infrastructure and safety features. In August 2018, however, the cabinet cleared the FDI provision for Indian Railways, and the key areas were notified to the general public. The main targets of Indian Railways will be to build bio-toilets in passenger trains, automated laundry systems at every station, and appropriate food facilities. After the FDI funding, the Tejas Express came to be run by a private entity using Indian Railways infrastructure. Thus, Indian Railways is gearing up for the government's route to remodelling trains and railway stations through a "first-of-its-kind" public-private partnership (PPP).
Pinacle, Inc., an organization that markets painless hypodermic needles to hospitals, would
like to decrease its inventory cost by determining the optimal number of hypodermic needles
to obtain per order. The annual demand is 1,000 units; the setup or ordering cost is $10 per
order; and the holding cost per unit per year is $0.50. Additionally, Pinacle, Inc. has a 250-day
working year.
a. Determine the Optimal Order Size (3 marks)
b. Determine the number of orders and the expected time between orders. (3 marks)
(CO3/ RBT L4)
c. Determine the combined annual ordering and holding costs (4 marks) 

Optimal order size is calculated by calculating Economic Order Quantity (EOQ)


EOQ = Sqrt ( 2 x A x O / C)
Where,
A = Annual Demand = 1,000 units
O = Ordering cost per order = $10
C = Carrying cost per unit per annum = $0.50
So, EOQ = Sqrt ( 2 x 1,000 x 10 / 0.50)
= Sqrt (40,000)
= 200 units per order

Number of orders = A / EOQ


= 1,000 / 200
= 5 orders
Time between orders
= Number of Days / Number of orders
= 250 / 5
= 50 days
Total cost = Holding cost + Ordering cost
= (EOQ / 2) x Holding cost per unit per year + Number of orders x Ordering cost per order
= (200 / 2) x $0.50 + (1,000 / 200) x $10
= $50 + $50
= $100
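The figures above can be checked with a short script:

```python
from math import sqrt

# Worked check of the EOQ figures for Pinacle, Inc.
annual_demand = 1000   # units per year
ordering_cost = 10.0   # $ per order
holding_cost = 0.50    # $ per unit per year
working_days = 250

eoq = sqrt(2 * annual_demand * ordering_cost / holding_cost)   # 200 units
orders_per_year = annual_demand / eoq                          # 5 orders
days_between_orders = working_days / orders_per_year           # 50 days
total_cost = (eoq / 2) * holding_cost + orders_per_year * ordering_cost  # $100

print(eoq, orders_per_year, days_between_orders, total_cost)
```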
Kerala is the most widely known state for coir product manufacturing, as it is rich in the necessary raw materials as natural resources. To establish a coir plant in another southern state, the company needs to overcome some regulatory issues, and a location analysis should be done based on the following factors:
Location Factor rating
The organization should list all the relevant factors in the location decision as a methodology to assess the attractiveness of each potential location. The advantage of this method is that any factor can be incorporated into the analysis.
Market-Related Issues
In a location analysis, the market for the coir products is an important parameter, as it is in Kerala. Similarly, in Tamil Nadu there is quite a potential market for coir products.
Cost-related issues
Cost-related issues capture the desirability of competing locations in terms of the cost of operating the system in alternative locations. Typically, logistics and distribution costs are considered in the analysis because they are easy, direct and tangible to measure and analyze, along with input costs such as the wages of available labour in the state, taxes, and other tariffs.
Regulatory and policy issues
The quality of the legal and judicial jurisdiction and of the institutions for intellectual property is a major element; good governance, the availability of free markets, sound public finances and robust financial institutions increase the attractiveness of a location.
So, considering the above factors, I would say it is advisable to establish the coir plant in Tamil Nadu, in a city like Pollachi, which has a wide distribution of coconut trees that would be suitable and favourable for the industry to sustain and grow.
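The location factor rating method mentioned earlier can be sketched as below; the factors, weights and 1-10 scores are purely hypothetical:

```python
# Hypothetical location factor rating: weight and 1-10 scores for two sites.
factors = {  # factor: (weight, score_tamil_nadu, score_kerala)
    "raw material availability": (0.30, 9, 10),
    "market demand":             (0.25, 9, 8),
    "logistics cost":            (0.20, 8, 7),
    "labour cost":               (0.15, 9, 7),
    "regulatory environment":    (0.10, 7, 8),
}

def weighted_score(site_index):
    # Weighted sum of the chosen site's scores across all factors.
    return sum(w * scores[site_index] for w, *scores in factors.values())

tn, kerala = weighted_score(0), weighted_score(1)
print(f"Tamil Nadu: {tn:.2f}, Kerala: {kerala:.2f}")
```

Whichever site gets the higher weighted score is the more attractive location under the chosen factors; real weights would come from management judgement and data.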
