
Q1. Explain data computational techniques, conventional and modern.

AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make a machine work like a human. Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
So, we can define AI as:
"It is a branch of computer science by which we can create intelligent machines which can behave like a human, think like humans, and make decisions."
Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and problem solving.

With Artificial Intelligence you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the awesomeness of AI.
It is believed that AI is not a new technology; some people say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.

Q2 Explain Artificial Intelligence


"It is a branch of computer science by which we can create intelligent machines which can behave like a human, think like humans, and make decisions." Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and problem solving. With Artificial Intelligence you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the awesomeness of AI. It is believed that AI is not a new technology; some people say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.
Goals of AI:
Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning some surgical operation
o Driving a car in traffic
5. Creating some system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.
Q3. Explain Machine Learning and big data?
Machine Learning is the field of study that gives computers the capability to learn without being explicitly programmed. ML is one of the most exciting technologies one could come across. As is evident from the name, it gives the computer something that makes it more similar to humans: the ability to learn. Machine learning is actively being used today, perhaps in many more places than one would expect.
The field of study known as machine learning is concerned with the question of how to construct computer programs that automatically improve with experience.
Examples:
• Handwriting recognition learning problem
• Task T: Recognizing and classifying handwritten words within images
• Performance P: Percent of words correctly classified
• Training experience E: A dataset of handwritten words with given classifications
• A robot driving learning problem
• Task T: Driving on highways using vision sensors
• Performance P: Average distance traveled before an error
• Training experience E: A sequence of images and steering commands recorded while observing a human driver
Classification of Machine Learning

Machine learning implementations are classified into four major categories, depending on the nature of the learning “signal” or “response” available to a learning system, which are as follows:

A. Supervised learning:

Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. The given data is labeled. Both classification and regression problems are supervised learning problems.

B. Unsupervised learning:
Unsupervised learning is a type of machine learning algorithm used to draw inferences from
datasets consisting of input data without labeled responses. In unsupervised learning
algorithms, classification or categorization is not included in the observations.

C. Reinforcement learning:
Reinforcement learning is the problem of getting an agent to act in the world so as to maximize its rewards. The learner is not told which actions to take, as in most forms of machine learning, but must instead discover which actions yield the most reward by trying them.
D. Semi-supervised learning:
Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning and supervised learning.
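As a minimal illustration of supervised learning (category A above), here is a toy 1-nearest-neighbor classifier; the function name and the labeled data are made up for the example:

```python
# Toy supervised learning: predict the label of a new point from
# example input-output pairs by picking the nearest labeled example.
def predict(train, x):
    """Return the label of the training point closest to x."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labeled (feature, class) pairs -- hypothetical training data.
train = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (9.5, "large")]

print(predict(train, 1.5))  # near the "small" cluster -> "small"
print(predict(train, 9.0))  # near the "large" cluster -> "large"
```

The same data without the labels would be an unsupervised problem: the algorithm would have to discover the two clusters on its own.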
Q4 Explain Parallel computing and Algorithms

It is the use of multiple processing elements simultaneously for solving any problem.
Problems are broken down into instructions and are solved concurrently as each resource
that has been applied to work is working at the same time.

Advantages of Parallel Computing over Serial Computing are as follows:


1. It saves time and money, as many resources working together reduce the time and cut potential costs.
2. It can be impractical to solve larger problems with serial computing.
3. It can take advantage of non-local resources when local resources are finite.
4. Serial computing ‘wastes’ potential computing power, whereas parallel computing makes better use of the hardware.
Applications of Parallel Computing:

• Databases and Data mining.


• Real-time simulation of systems.
• Science and Engineering.
• Advanced graphics, augmented reality, and virtual reality.
Why parallel computing?

• The real world runs in a dynamic nature: many things happen at the same time in different places concurrently. This data is extremely large and hard to manage.
• Real-world data needs more dynamic simulation and modeling, and parallel computing is the key to achieving this.
• Parallel computing provides concurrency and saves time and money.
• Large, complex datasets and their management can only be organized using parallel computing’s approach.
• It ensures effective utilization of resources. The hardware is used effectively, whereas in serial computation only part of the hardware is used and the rest sits idle.
• It is also impractical to implement real-time systems using serial computing.
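The break-the-problem-down-and-solve-concurrently idea can be sketched in a few lines of Python. This sketch uses a thread pool for simplicity (for CPU-bound work in Python, a process pool would give true parallelism), and all names here are illustrative:

```python
# Parallel-computing sketch: break a problem into chunks, process the
# chunks concurrently, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

numbers = list(range(1, 101))                            # problem: sum 1..100
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]  # break it down

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(chunk_sum, chunks))     # solved concurrently

print(sum(partial_sums))  # combine the partial results: 5050
```

In serial computing the whole list would be summed by a single worker; here each of the four chunks can be handled by a different processing element.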
Q5 Explain big data in detail?
Big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offers greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data sources. Big data was originally associated with three key concepts: volume, variety, and velocity.
Source of Big Data:
• Social media: Today, a good percentage of the world's population is engaged with social media like Facebook, WhatsApp, Twitter, YouTube, Instagram, etc. Each activity on such media, like uploading a photo or video, sending a message, making a comment, or putting a like, creates data.
• Sensors placed in various places: Sensors placed in various places in a city gather data on temperature, humidity, etc. A camera placed beside a road gathers information about traffic conditions and creates data. Security cameras placed in sensitive areas like airports, railway stations, and shopping malls create a lot of data.
• Customer satisfaction feedback: Customer feedback on the products or services of various companies on their websites creates data. For example, retail commercial sites like Amazon, Walmart, Flipkart, and Myntra gather customer feedback on the quality of their products and delivery times. Telecom companies and other service-provider organizations seek customer experience with their services. These create a lot of data.
• E-commerce: In e-commerce transactions, business transactions, banking, and the stock market, the many records stored are considered one of the sources of big data. Payments through credit cards, debit cards, or other electronic means are all recorded as data.
• Global Positioning System (GPS): GPS in a vehicle helps monitor the movement of the vehicle and shorten the path to a destination to cut fuel and time consumption. This system creates huge amounts of data on vehicle position and movement.
Q6 Managing Big data and techniques?

The term “big data” usually refers to data stores characterized by high volume, high velocity and wide variety.

Big data management is a broad concept that encompasses the policies, procedures and
technology used for the collection, storage, governance, organization, administration and
delivery of large repositories of data. It can include data cleansing, migration, integration
and preparation for use in reporting and analytics.
Big data management is closely related to the idea of data lifecycle management (DLM). This is a policy-based approach for determining which information should be stored where within an organization’s IT environment, as well as when data can safely be deleted. Within a typical enterprise, people with many different job titles may be involved in big data management. They may include a chief data officer (CDO), chief information officer (CIO), data managers, database administrators, data architects, data modelers, data scientists, data warehouse managers, data warehouse analysts, business analysts, developers and others.

Q7 Research methodology basics and importance?


Research methodology is a way of explaining how a researcher intends to carry out their
research. It's a logical, systematic plan to resolve a research problem. A methodology
details a researcher's approach to the research to ensure reliable, valid results that address
their aims and objectives. It encompasses what data they're going to collect and where
from, as well as how it's being collected and analyzed.

Types of research methodology


When designing a research methodology, a researcher has several decisions to make.
One of the most important is which data methodology to use: qualitative, quantitative or a combination of the two. No matter the type of research, the data gathered will be in the form of numbers or descriptions, and researchers can choose to focus on collecting words, numbers or both.

Here are the different methodologies and their applications:


Qualitative:
Qualitative research involves collecting and analyzing written or spoken words
and textual data. It may also focus on body language or visual elements and help to create
a detailed description of a researcher's observations. Researchers usually gather qualitative
data through interviews, observation and focus groups using a few carefully chosen
participants.
Quantitative:
Researchers usually use a quantitative methodology when the objective of the
research is to confirm something. It focuses on collecting, testing and measuring numerical
data, usually from a large sample of participants. They then analyze the data using statistical
analysis and comparisons. Popular methods used to gather quantitative data are:
• Surveys
• Questionnaires
• Tests
• Databases
• Organizational records
Mixed-method:
This contemporary research methodology combines quantitative and qualitative
approaches to provide additional perspectives, create a richer picture and present multiple
findings. The quantitative methodology provides definitive facts and figures, while the
qualitative provides a human aspect. This methodology can produce interesting results as
it presents exact data while also being exploratory.

Research methodology importance


A research methodology gives research legitimacy and provides scientifically sound
findings. It also provides a detailed plan that helps to keep researchers on track, making the
process smooth, effective and manageable. A researcher's methodology allows the reader
to understand the approach and methods used to reach conclusions.

Having a sound research methodology in place provides the following benefits:


1. Other researchers who want to replicate the research have enough information to do so.
2. Researchers who receive criticism can refer to the methodology and explain their approach.
3. It can help provide researchers with a specific plan to follow throughout their research.
4. The methodology design process helps researchers select the correct methods for the objectives.
5. It allows researchers to document what they intend to achieve with the research from the outset.

Q8 Explain data Science in detail?

Data science is the study of data, just as biological science is the study of biology and physical science is the study of physical reactions. Data is real, data has real properties, and we need to study them if we are going to work with data.
It is a process, not an event: the process of using data to understand many different things, to understand the world. For example, when you have a model or proposed explanation of a problem, you try to validate that proposed explanation or model with your data.
What is Data science?
Data Science is an interdisciplinary field that focuses on extracting knowledge from data sets which are typically huge in volume. The field encompasses analysis, preparing data for analysis, and presenting findings to inform high-level decisions in an organization. It draws on many fields, like mathematics, statistics, computer science, information visualization, graphics, business, etc. Those who are good at these respective fields, with enough knowledge of the domain in which they are willing to work, can call themselves data scientists. It is not an easy thing to do, but not impossible either. You need to start with data: its visualization, programming, formulation, and the development and deployment of your model. Data scientist jobs are expected to be in great demand in the future, so be ready to prepare yourself to fit into this world.
Applications of Data Science:

Following are some of the applications that make use of Data Science for their services:
• Internet Search Results (Google)
• Recommendation Engine (Spotify)
• Intelligent Digital Assistants (Google Assistant)
• Autonomous Driving Vehicle (Waymo)

• Spam Filter (Gmail)
• Abusive Content and Hate Speech Filter (Facebook)
• Robotics (Boston Dynamics)
• Automatic Piracy Detection (YouTube)

Q9 Various applications of Data Science?


Following are some of the applications that make use of Data Science for their services:
1. In Search Engines
The most useful application of Data Science is in search engines. When we want to search for something on the internet, we mostly use search engines like Google, Yahoo, or Bing, and Data Science is used to make searches faster.
For example, when we search for something, say “python courses”, we get the links of the most visited websites first. This ranking analysis is done using data science, and that is how we get the top visited links.
2. In Transport
Data Science has also entered the transport field, for example with driverless cars. With the help of driverless cars, it is possible to reduce the number of accidents.
For example, in driverless cars the training data is fed into the algorithm, and with the help of Data Science techniques the data is analyzed: what the speed limit is on highways, busy streets, narrow roads, etc., and how to handle different situations while driving.
3. In Finance
Data Science plays a key role in financial industries, which always face issues of fraud and risk of losses. Financial industries need to automate risk-of-loss analysis in order to make strategic decisions for the company, and they use Data Science analytics tools to predict the future. This allows companies to predict customer lifetime value and stock market moves.
For example, in the stock market Data Science is a main part: it is used to examine past behavior with past data in order to predict future outcomes. Data is analyzed in such a way that it becomes possible to predict future stock prices over a set timetable.
4. In E-Commerce

E-commerce websites like Amazon, Flipkart, etc. use Data Science to provide a better user experience with personalized recommendations.
For example, when we search for something on an e-commerce website, we get suggestions similar to our choices based on our past data, and we also get recommendations based on the most-bought, most-rated, and most-searched products. This is all done with the help of Data Science.

5. In Healthcare

In the healthcare industry, Data Science acts as a boon. Data Science is used for:
• Detecting Tumor.
• Drug discoveries.
• Medical Image Analysis.
• Virtual Medical Bots.
• Genetics and Genomics.
• Predictive Modeling for Diagnosis etc.

6. Image Recognition
Currently, Data Science is also used in Image Recognition. For Example, When we upload
our image with our friend on Facebook, Facebook gives suggestions Tagging who is in the
picture. This is done with the help of machine learning and Data Science. When an Image
is Recognized, the data analysis is done on one’s Facebook friends and after analysis, if the
faces which are present in the picture matched with someone else profile then Facebook
suggests us auto-tagging.
7. Targeted Recommendation
Targeted recommendation is one of the most important applications of Data Science. Whatever a user searches for on the internet, he or she will afterwards see related posts everywhere.
This can be explained with an example: suppose I want a mobile phone, so I search for it on Google, and after that I change my mind and decide to buy offline. Data Science helps the companies that pay for advertisements for their mobile, so everywhere on the internet, in social media, on websites, and in apps, I will see recommendations for the mobile phone I searched for. This nudges me to buy online.
8. Data Science in Gaming
In most games where a user plays against a computer opponent, data science concepts are used together with machine learning: with the help of past data, the computer improves its performance. Many games, like chess and EA Sports titles, use Data Science concepts.
9. Autocomplete
The autocomplete feature is an important application of Data Science: the user types just a few letters or words and gets suggestions that auto-complete the rest of the line.
For example, in Gmail, when we are writing a formal mail, the data science concept behind the autocomplete feature offers efficient choices to complete the whole line. Autocomplete is also widely used in search engines, social media, and various apps.
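A toy sketch of the idea behind autocomplete, assuming a small made-up vocabulary and simple prefix matching (real systems rank suggestions with models trained on past user data):

```python
# Toy autocomplete: suggest completions by prefix matching against a
# small hypothetical vocabulary, most frequent suggestions first.
def autocomplete(prefix, vocabulary):
    """Return vocabulary words starting with prefix, by descending frequency."""
    matches = [w for w in vocabulary if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -vocabulary[w])

# Made-up word frequencies standing in for past user data.
vocab = {"data": 50, "database": 30, "data science": 45, "date": 10}

print(autocomplete("data", vocab))  # ['data', 'data science', 'database']
```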

Q10 Programming paradigm and algorithm?


Programming paradigm
A paradigm can be termed a method to solve some problem or do some task. A programming paradigm is an approach to solving a problem using some programming language; we can also say it is a method to solve a problem using the tools and techniques available to us, following some approach. There are many known programming languages, but all of them need to follow some strategy when they are implemented, and this methodology/strategy is the paradigm. Alongside the variety of programming languages, there are many paradigms to fulfil each and every demand.
Algorithm
The word Algorithm means “a set of rules to be followed in calculations or other problem-solving operations”, or “a procedure for solving a mathematical problem in a finite number of steps, frequently by recursive operations”.
Therefore, an algorithm refers to a sequence of finite steps to solve a particular problem.
Types of Algorithms:

There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm: It is the simplest approach for a problem. A brute force algorithm is the first approach that comes to mind when we see a problem.
2. Recursive Algorithm: A recursive algorithm is based on recursion. In this case, a problem is broken into several sub-parts and the same function is called again and again.
3. Backtracking Algorithm: The backtracking algorithm builds the solution by searching among all possible solutions. Using this algorithm, we keep on building the solution following some criteria. Whenever a solution fails, we trace back to the failure point, build the next solution, and continue this process until we find a solution or all possible solutions have been examined.

4. Searching Algorithm: Searching algorithms are the ones that are used for searching
elements or groups of elements from a particular data structure. They can be of different
types based on their approach or the data structure in which the element should be
found.
5. Sorting Algorithm: Sorting is arranging a group of data in a particular manner according to the requirement. The algorithms which help in performing this function are called sorting algorithms. Generally, sorting algorithms are used to sort groups of data in increasing or decreasing order.
6. Hashing Algorithm: Hashing algorithms work similarly to searching algorithms, but they contain an index with a key ID. In hashing, a key is assigned to specific data.
7. Divide and Conquer Algorithm: This algorithm breaks a problem into subproblems, solves each subproblem, and merges the solutions together to get the final solution. It consists of the following three steps:
• Divide
• Solve
• Combine
8. Greedy Algorithm: In this type of algorithm the solution is built part by part. The
solution of the next part is built based on the immediate benefit of the next part. The one
solution giving the most benefit will be chosen as the solution for the next part.
9. Dynamic Programming Algorithm: This algorithm uses the concept of using the already
found solution to avoid repetitive calculation of the same part of the problem. It divides
the problem into smaller overlapping subproblems and solves them.
10. Randomized Algorithm: In a randomized algorithm, we use a random number in the course of the computation; the random number helps in deciding the expected outcome.
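As a worked example, the Divide/Solve/Combine steps of divide and conquer can be sketched with merge sort, which is also a classic sorting algorithm (a sketch for illustration, not a production implementation):

```python
# Merge sort: a divide-and-conquer sorting algorithm.
def merge_sort(items):
    if len(items) <= 1:                 # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # Divide + Solve the left half
    right = merge_sort(items[mid:])     # Divide + Solve the right half
    merged = []                         # Combine the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```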
Q11 Explain data structure?
A data structure is a storage that is used to store and organize data. It is a way of
arranging data on a computer so that it can be accessed and updated efficiently.
A data structure is not only used for organizing the data. It is also used for processing,
retrieving, and storing data. There are different basic and advanced types of data
structures that are used in almost every program or software system that has been
developed. So we must have good knowledge about data structures.
• Linear data structure: Data structure in which data elements are arranged sequentially
or linearly, where each element is attached to its previous and next adjacent elements, is
called a linear data structure. Examples of linear data structures are array, stack, queue,
linked list, etc.
• Static data structure: Static data structure has a fixed memory size. It is easier to access
the elements in a static data structure. An example of this data structure is an array.
• Dynamic data structure: In dynamic data structure, the size is not fixed. It can be
randomly updated during the runtime which may be considered efficient concerning the
memory (space) complexity of the code.
Examples of this data structure are queue, stack, etc.
• Non-linear data structure: Data structures where data elements are not placed sequentially or linearly are called non-linear data structures. In a non-linear data structure, we cannot traverse all the elements in a single run.
Examples of non-linear data structures are trees and graphs.
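A minimal sketch of two linear data structures, a stack (last in, first out) and a queue (first in, first out), using Python's built-in list and collections.deque:

```python
from collections import deque

# Stack: last in, first out (LIFO)
stack = []
stack.append(1)
stack.append(2)
stack.append(3)
print(stack.pop())      # 3 -- the most recently added element leaves first

# Queue: first in, first out (FIFO)
queue = deque()
queue.append(1)
queue.append(2)
queue.append(3)
print(queue.popleft())  # 1 -- the earliest added element leaves first
```

Both are dynamic data structures here: their size is not fixed and can change at runtime.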
