
University of Education Lahore, D.G. Khan Campus

Assignment

Name: Maryam Saeed

Roll No:
Class: BS-IT

Department: Information Technology

Submitted to:
Q1. What is Internet traffic measurement and statistical analysis?
In computer networks, network traffic measurement is the process of measuring the amount and
type of traffic on a particular network. Traffic can be measured with active or passive
techniques. Active techniques are more intrusive but arguably more accurate; their main
limitation is that they may disturb the network by injecting artificial probe traffic into it.
Passive techniques impose less network overhead and can therefore run in the background to
trigger network management actions; their main drawback is the assumption that the measuring
party "owns" all the networks being observed.
Network traffic measurement faces two main challenges: 1) flow statistics computation time and
2) single-node failure. To address these challenges, I want to implement Internet traffic
measurement and analysis using the MapReduce programming model of the Hadoop framework.
Apache Hadoop is an open-source software framework for the storage and large-scale processing
of NetFlow datasets. MapReduce is a programming model and an associated implementation for
processing and generating large datasets that is amenable to a broad variety of real-world tasks.
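The flow-statistics computation described above can be sketched with the map, shuffle, and reduce phases of the MapReduce model. This is a minimal single-machine illustration, not Hadoop itself; the record layout and field names are hypothetical stand-ins for NetFlow data.

```python
from collections import defaultdict

# Hypothetical NetFlow-like records: (src_ip, dst_ip, bytes)
flows = [
    ("10.0.0.1", "10.0.0.9", 1200),
    ("10.0.0.2", "10.0.0.9", 800),
    ("10.0.0.1", "10.0.0.7", 400),
]

# Map phase: emit one (key, value) pair per record.
def map_flow(record):
    src, _dst, nbytes = record
    return (src, nbytes)

# Shuffle phase: group all values by key.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: aggregate the values for each key.
def reduce_flow(values):
    return sum(values)

pairs = [map_flow(r) for r in flows]
totals = {k: reduce_flow(v) for k, v in shuffle(pairs).items()}
print(totals)  # {'10.0.0.1': 1600, '10.0.0.2': 800}
```

In Hadoop, the same map and reduce functions would run distributed across many nodes, which is what addresses the computation-time and single-node-failure challenges.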

Statistical analysis

Statistical analysis is a component of data analytics. In the context of business intelligence (BI),
statistical analysis involves collecting and scrutinizing every data sample in a set of items from
which samples can be drawn. A sample, in statistics, is a representative selection drawn from a
total population.

Statistical analysis can be broken down into five discrete steps, as follows:

- Describe the nature of the data to be analyzed.
- Explore the relation of the data to the underlying population.
- Create a model to summarize understanding of how the data relates to the underlying population.
- Prove (or disprove) the validity of the model.
- Employ predictive analytics to run scenarios that will help guide future actions.
The goal of statistical analysis is to identify trends. A retail business, for example, might use
statistical analysis to find patterns in unstructured and semi-structured customer data that can be
used to create a more positive customer experience and increase sales.
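The first steps above, describing the data and estimating population parameters from a sample, can be sketched briefly. The sample values here are made up purely for illustration.

```python
import statistics

# Hypothetical sample: daily sales figures drawn from a larger population.
sample = [120, 135, 110, 150, 142, 128, 133]

# Step 1: describe the nature of the data.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)

# Steps 2-3: relate the sample to the population; the sample mean and
# sample standard deviation serve as simple estimates of the
# corresponding population parameters.
print(f"mean={mean:.1f}, stdev={stdev:.2f}")
```

Later steps (validating the model and running predictive scenarios) would build on these estimates.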

Which software is used for Internet traffic measurement?


NetLimiter is client-side traffic-shaping, monitoring, and firewall software for
the Windows operating system. Unlike most traffic-shaping utilities, which are based on
centrally managed hardware, NetLimiter is a software-only solution. This has the advantage of
being less expensive to deploy, but it can be more difficult to manage across
more than one computer.
While it has a significant market among more technically minded computer users, in medium-to-
large networks it becomes difficult for administrators to maintain multiple copies of
configuration files. It is, however, useful for simulating slow links between departments,
showing how applications will behave when deployed to slower sites.
It is often grouped with other free or shareware tools in articles that present the reader with
"essential" applications and power-user utilities.
Features and versions
The software is available in three versions: the freeware Monitor and two paid-for
versions, Lite and Pro. Monitor provides real-time monitoring and statistics. Lite provides
monitoring and limits, while the Pro version includes all Monitor and Lite features together with
additional features, including the ability to act as a firewall, remote administration via a
webpage, and filtering.
The product has its own programming interface, allowing integration with other software.
Version 1.3 of the software was criticized by CNET in 2009 for using too much memory (12 MB).
Version 3 was released on August 31, 2010.
Version 4.0.13 was the first NetLimiter 4 final release, on August 5, 2015.

Which software is used for statistical analysis?


Statistical Analysis Software (SAS)
SAS stands for Statistical Analysis Software and is used all over the world, in approximately
118 countries, to solve complex business problems. Much of the software is either menu-driven
or command-driven. Like other programming software, SAS has its own language that can
control the program during its execution.
SAS is the name of the software and of the company that created it in 1970. By 1980, it
had added graphics and online data entry, and the software had been compiled in C as well. In
the 1990s, SAS added tools for visualizing data, administering and storing data warehouses, and
building interfaces to the World Wide Web.

SAS is powerful enough to handle any type of data, and it can access data from any
software and any format. Logical operations can also be performed in SAS using IF-THEN
statements. SAS runs all statements in a loop, step by step, and executes programs very
quickly. The ODS (Output Delivery System) procedure is used to produce output in other formats,
such as HTML, RTF, and Excel. We can also build a macro from a SAS program to meet various
research needs.
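The IF-THEN logic and implicit row-by-row loop described above can be illustrated with a small analog (sketched in Python here rather than SAS; the dataset and variable names are hypothetical).

```python
# Hypothetical dataset: one dict per observation, processed row by row,
# much as a SAS data step iterates its implicit loop over observations.
rows = [
    {"name": "A", "score": 85},
    {"name": "B", "score": 58},
    {"name": "C", "score": 72},
]

# Conditional recoding, analogous to a SAS IF-THEN/ELSE statement.
for row in rows:
    if row["score"] >= 70:
        row["grade"] = "pass"
    else:
        row["grade"] = "fail"

print([r["grade"] for r in rows])  # ['pass', 'fail', 'pass']
```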

SAS Window:
SAS has the following main windows:

 Editor window
 Log window
 Output window
 Results window
 Explorer window

Q2. Explain skeleton parallel programming for multi-core architectures.


Definition

A parallel skeleton is a programming construct (or a function in a library) which abstracts a
pattern of parallel computation and interaction. To use a skeleton, the programmer must provide
the code and type definitions for various application-specific operations, usually expressed
sequentially. The skeleton implementation takes responsibility for composing these operations
with control and interaction code in order to effect the specified computation, in parallel, as
efficiently as possible. Abstracting from parallelism in this way can greatly simplify and
systematize the development of parallel programs and assist in their cost modeling,
transformation, and optimization. Because of their high level of abstraction, skeletons have a
natural affinity with the concepts of higher-order functions from functional programming and of
templates and generics from object-oriented programming, and many concrete skeleton systems
exploit these mechanisms.
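The division of labor described above can be sketched with a minimal "farm" (task-parallel map) skeleton: the skeleton owns all parallel control, and the programmer supplies only a sequential worker function. The `farm` function and worker below are illustrative names, not part of any particular skeleton library.

```python
from concurrent.futures import ThreadPoolExecutor

# The skeleton: encapsulates worker creation, task distribution, and
# result collection. The caller never writes parallel control code.
def farm(worker, tasks, n_workers=4):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

# Application-specific sequential operation supplied by the programmer.
def square(x):
    return x * x

results = farm(square, range(6))
print(results)  # [0, 1, 4, 9, 16, 25]
```

On a multi-core machine a process-based pool would typically be used instead of threads for CPU-bound workers; the skeleton interface stays the same either way.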

Q3. Explain 3D object modeling using a range scanner.


Range scanners
Computer vision researchers have long studied the problem of determining the shape of a scene
from a set of photographs. In a sense, they attempt to take advantage of the wealth of visual
cues present in the human visual system: stereo and motion parallax, occlusion, perspective,
shading, focus, and so on. These methods generally assume that the sensor simply records light
that already exists in the scene; i.e., the sensor is passive. In this issue, Debevec discusses
a few of these passive techniques. The accuracy of such methods is generally limited by a number
of factors that can frequently be resolved by carefully controlling how the scene is
illuminated. If we permit ourselves to project special light patterns onto our scene, we enter
the realm of active or structured-light sensing.
One of the most common forms of active range sensing is optical triangulation. The fundamental
principle is illustrated in Figure 1a. A focused beam of light illuminates a tiny spot on the
surface of an object. For a fairly matte surface, this light is scattered in many directions,
and a camera records an image of the spot. We can compute the center pixel of this spot and
trace a line of sight through that pixel until it intersects the illumination beam at a point
on the surface of the object.
This technique gives us a single range point, but how do we modify the design to scan the
surface of an object? Many approaches have been developed to answer this question. One method
is to scan the light spot over the surface of the object using mirrors. Another approach is to
fan the beam into a plane of laser light, as shown in Figure 1b. This light casts a stripe onto
the surface of the object, which is then imaged by a conventional video camera. We can treat
each camera scanline separately, find the center of the imaged light, and intersect the line of
sight with the laser plane. Thus, each image gives us a range profile (one point per scanline),
and by sweeping the light over the surface of the object we can capture its shape.
