
Charina Marie L. Cadua
February 4, 2020

Assignment:

Web Designer, Web Developer, HTTP, HTTPS, LAN, MAN, WAN, URI, URL, URN, Cluster Computing

WEB DESIGNER

Web designers are responsible for the visual aspect of a site, which includes the layout, coloring, and typography of a web page. Web designers also have a working knowledge of markup languages such as HTML and CSS, although the extent of their knowledge differs from one web designer to another.

The Duties and Responsibilities of a Designer

Professionals who specialize in web design organize information, create content, and design the layout of that content on a web medium. It is the designer’s job to review the needs of their client, or the goals of their assigned project, and to design images and web pages that give users a unique experience while still communicating a message. The scope of a project depends on the communication problems a client wants to solve or on the current status of a company’s website. Some other responsibilities include:

* Using appropriate underlying technologies for website functionality
* Designing navigational elements
* Translating the needs of clients and users into design concepts
* Turning a brand into graphics, colors, layout, and fonts
* Using HTML coding to lay out the website
* Presenting content
* Designing for search engine optimization and rankings
* Updating the website as needed

HTML (HyperText Markup Language) is a set of “markup” tags that are responsible for structuring all the various elements of a
webpage. It designates headers, footers, paragraphs, links, images, and everything in between. HTML is what search engine
crawlers “read” when they index your website. Proper HTML is critical to a professional, functioning website. HTML mistakes will
almost universally result in visual anomalies on a website, apparent even to users. At worst, improper HTML can essentially break an
entire website.
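
As a sketch of what this description covers, a minimal HTML page might look like the following (file names and text are hypothetical):

```html
<!-- A minimal page: HTML tags designate the header, paragraph, link,
     image, and footer that structure the document -->
<!DOCTYPE html>
<html>
  <head>
    <title>Sample Page</title>
  </head>
  <body>
    <header>
      <h1>Welcome</h1>
    </header>
    <p>This paragraph and the <a href="https://example.org">link</a>
       inside it are designated by markup tags.</p>
    <img src="photo.jpg" alt="A sample photo">
    <footer>Footer text goes here.</footer>
  </body>
</html>
```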

You can think of CSS (Cascading Style Sheets) as a supplement to HTML. CSS is responsible for the styling of HTML elements; in other words, CSS controls how website elements look to end users. CSS lets a developer style content and change things like colors, sizes, and borders.
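
A short CSS sketch, assuming the hypothetical HTML above, showing how a style sheet controls colors, sizes, and borders:

```css
/* Style the header and paragraph elements from the HTML example */
header {
  background-color: #003366; /* color */
  color: white;
}
p {
  font-size: 16px;           /* size */
  border: 1px solid #cccccc; /* border */
  padding: 8px;
}
```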

JavaScript is yet another supplementary language to HTML and CSS. JavaScript allows for the enhanced manipulation of website
elements. It gives designers advanced control over the elements of a website. For example, designers can use JavaScript to define
that “when the user does X, Y will happen,” where Y is a functional complexity that can’t be handled by simple HTML and CSS.
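
A minimal JavaScript sketch of that “when the user does X, Y will happen” pattern; the element ids are hypothetical:

```javascript
// When the user clicks the button (X), reveal the hidden panel (Y),
// a behavior that plain HTML and CSS cannot express on their own.
document.getElementById('show-details').addEventListener('click', () => {
  document.getElementById('details-panel').style.display = 'block';
});
```
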
WEB DEVELOPER

A web developer is a programmer who specializes in, or is specifically engaged in, the development of World Wide Web applications using a client–server model. These applications typically use HTML, CSS, and JavaScript on the client; PHP, ASP.NET (C#), or Java on the server; and HTTP for communication between client and server.

Modern web applications often contain three or more tiers, and depending on the size of the team a developer works on, he or she may specialize in one or more of these tiers or may take a more interdisciplinary role. A web developer is usually classified as either a front-end or a back-end web developer.

Front-End Developer

A front-end developer is someone who takes a client or design team’s website design
and writes the code needed to implement it on the web. A decent front-end web developer
will be fluent in at least three core languages: HTML, CSS, and JavaScript.

So, what do web developers do when they work on the front end of a website?

Front-end developers make sure that all of the content needed for the website is clear, visible, and found in the right place. In some cases, front-end developers may also have content-writing skills, allowing them to create content for the website as they go.

They make sure that the right colors are in the right places, especially concerning text colors, background colors, and headers. Some of the best front-end developers are also very good designers, allowing them to tweak things as they go.

They make sure that all outbound links are correctly formatted, that all buttons work properly, and that the website is responsive and attractive. Mobile design is usually a big part of the job, and it is also important to make sure that a website displays correctly on all major web browsers.
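
As a sketch of the mobile-design concern above, a CSS media query (the 600px breakpoint is an assumption) can adapt a layout to narrow screens:

```css
/* Hypothetical breakpoint: on screens narrower than 600px,
   stack the navigation links vertically for mobile devices */
@media (max-width: 600px) {
  nav ul {
    display: flex;
    flex-direction: column;
  }
}
```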

Back-End Developer

While front-end developers are responsible for client-side programming, back-end developers
have to deal with the server-side.

This means that they create the code and programs that power the website’s server, databases, and any applications it contains. The most important skill for a back-end developer is the ability to write clean, efficient code that does what you want it to in the quickest way possible. Since website speed is a major consideration in search engine optimization (SEO), it is a large factor when developing the back end.

Back-end developers use a wide range of different server-side languages to build complicated
programs. Some of the most popular languages used include PHP, Python, Java, and Ruby.
JavaScript is also becoming increasingly widespread as a back-end development language, while
SQL is commonly used to manage and analyze data in website databases.

Since different websites have different needs, a back-end developer must be flexible, able to create different kinds of programs, and must have a clear, in-depth understanding of the languages they use. This is very important to ensure that they can come up with the most efficient way of creating the required program while keeping it secure, scalable, and easy to maintain.
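
Since the text notes that JavaScript is increasingly used on the server, here is a minimal back-end sketch using Node.js’s built-in http module; the port and response body are hypothetical:

```javascript
// A tiny server-side program: it answers client requests, the kind of
// code a back-end developer writes to power a website's server.
const http = require('http');

const server = http.createServer((req, res) => {
  // A real back end would query a database or run application logic here.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the back end\n');
});

server.listen(3000, () => console.log('Listening on port 3000'));
```
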
HTTP (HYPERTEXT TRANSFER PROTOCOL)

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative,
hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web,
where hypertext documents include hyperlinks to other resources that the user can easily access, for
example by a mouse click or by tapping the screen in a web browser.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of early
HTTP Requests for Comments (RFCs) was a coordinated effort by the Internet Engineering Task Force
(IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF.

HTTP/1.1 was first documented in RFC 2068 in 1997. That specification was obsoleted by RFC 2616
in 1999, which was likewise replaced by the RFC 7230 family of RFCs in 2014.

HTTP/2 is a more efficient expression of HTTP's semantics "on the wire", and was published in 2015;
it is now supported by major web servers and browsers over Transport Layer Security (TLS) using an
Application-Layer Protocol Negotiation (ALPN) extension[2] where TLS 1.2 or newer is required.

HTTP/3, the proposed successor to HTTP/2, is already in use on the web; it uses UDP instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. Support for HTTP/3 was added to Cloudflare and Google Chrome (Canary build) in September 2019, and can be enabled in the stable versions of Chrome and Firefox (since version 72,[8] January 2020).

HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client, and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides
the server. The client submits an HTTP request message to the server. The server, which provides
resources such as HTML files and other content, or performs other functions on behalf of the client, returns a
response message to the client. The response contains completion status information about the request and
may also contain requested content in its message body.
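
For example, a simplified request–response exchange might look like the following (header values are illustrative):

```
GET /index.html HTTP/1.1
Host: www.example.org

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 138

<html>...</html>
```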

HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content
on behalf of upstream servers to improve response time. Web browsers cache previously accessed web
resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network
boundaries can facilitate communication for clients without a globally routable address, by relaying messages
with external servers.

HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its
definition presumes an underlying and reliable transport layer protocol,[9] and Transmission Control Protocol
(TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User
Datagram Protocol (UDP), for example in HTTPU and Simple Service Discovery Protocol (SSDP).
HTTPS (HYPERTEXT TRANSFER PROTOCOL SECURE)

Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol
(HTTP). It is used for secure communication over a computer network, and is widely used on the Internet. In
HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS) or, formerly, its
predecessor, Secure Sockets Layer (SSL). The protocol is therefore also often referred to as HTTP over
TLS, or HTTP over SSL.

The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit; it protects against man-in-the-middle attacks. The
bidirectional encryption of communications between a client and server protects against eavesdropping and
tampering of the communication.[4][5] In practice, this provides a reasonable assurance that one is
communicating without interference by attackers with the website that one intended to communicate with, as
opposed to an impostor. HTTPS is now used more often by web users than the original non-secure HTTP,
primarily to protect page authenticity on all types of websites; secure accounts; and to keep user
communications, identity, and web browsing private.

HTTPS signals the browser to use an added encryption layer of SSL/TLS to protect the traffic.
SSL/TLS is especially suited for HTTP, since it can provide some protection even if only one side of the
communication is authenticated. This is the case with HTTP transactions over the Internet, where typically
only the server is authenticated (by the client examining the server's certificate).

HTTPS creates a secure channel over an insecure network. This ensures reasonable protection from
eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the
server certificate is verified and trusted. Because HTTPS piggybacks HTTP entirely on top of TLS, the
entirety of the underlying HTTP protocol can be encrypted. This includes the request URL (which particular
web page was requested), query parameters, headers, and cookies (which often contain identity information
about the user). However, because host (website) addresses and port numbers are necessarily part of the
underlying TCP/IP protocols, HTTPS cannot protect their disclosure. In practice this means that even on a
correctly configured web server, eavesdroppers can infer the IP address and port number of the web server
(sometimes even the domain name e.g. www.example.org, but not the rest of the URL) that one is
communicating with, as well as the amount (data transferred) and duration (length of session) of the
communication, though not the content of the communication.

Web browsers know how to trust HTTPS websites based on certificate authorities that come pre-
installed in their software. Certificate authorities (such as Let's Encrypt, Digicert, Comodo, GoDaddy and
GlobalSign) are in this way being trusted by web browser creators to provide valid certificates. Therefore, a
user should trust an HTTPS connection to a website if and only if all of the following are true:

 The user trusts that the browser software correctly implements HTTPS with correctly pre-installed
certificate authorities.
 The user trusts the certificate authority to vouch only for legitimate websites.
 The website provides a valid certificate, which means it was signed by a trusted authority.
 The certificate correctly identifies the website (e.g., when the browser visits "https://example.com", the
received certificate is properly for "example.com" and not some other entity).
 The user trusts that the protocol's encryption layer (SSL/TLS) is sufficiently secure against
eavesdroppers.
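
As a small client-side sketch of this trust model, Node.js’s https module verifies the server’s certificate against the pre-installed authorities by default; the URL is illustrative:

```javascript
// https.get fails with an error if the server's certificate chain
// cannot be verified against the trusted certificate authorities.
const https = require('https');

https.get('https://example.com/', (res) => {
  console.log('Status:', res.statusCode); // reached only if TLS succeeded
  res.resume(); // drain the response body
}).on('error', (err) => {
  console.error('TLS or network failure:', err.message);
});
```
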
HTTPS is especially important over insecure networks (such as public Wi-Fi access points), as
anyone on the same local network can packet-sniff and discover sensitive information not protected by
HTTPS. Additionally, many free-to-use and paid WLAN networks engage in packet injection in order to serve their own ads on webpages. However, this capability can be exploited maliciously in many ways, such as by injecting malware onto webpages and stealing users' private information.

HTTPS is also very important for connections over the Tor anonymity network, as malicious Tor
nodes can damage or alter the contents passing through them in an insecure fashion and inject malware into
the connection. This is one reason why the Electronic Frontier Foundation and the Tor project started the
development of HTTPS Everywhere,[4] which is included in the Tor Browser Bundle.

As more information is revealed about global mass surveillance and criminals stealing personal
information, the use of HTTPS security on all websites is becoming increasingly important regardless of the
type of Internet connection being used.[10][11] While metadata about an individual page that a user visits may not be sensitive on its own, when combined, such metadata can reveal a lot about the user and compromise the user's privacy.

Deploying HTTPS also allows the use of HTTP/2 (or its predecessor, the now-deprecated protocol SPDY), newer generations of HTTP designed to reduce page load times, size, and latency.

It is recommended to use HTTP Strict Transport Security (HSTS) with HTTPS to protect users from
man-in-the-middle attacks, especially SSL stripping.
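
A server enables HSTS by sending a response header such as the following (the max-age value is an example); after seeing it over a valid HTTPS connection, the browser refuses plain-HTTP connections to the site for the stated period, which defeats SSL stripping:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```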

LAN (LOCAL AREA NETWORK)

A local area network (LAN) is a computer network that interconnects computers within a limited area
such as a residence, school, laboratory, university campus or office building.[1] By contrast, a wide area
network (WAN) not only covers a larger geographic distance, but also generally involves leased
telecommunication circuits. Ethernet and Wi-Fi are the two most common technologies in use for local area
networks. Historical network technologies include ARCNET, Token ring, and AppleTalk.

MAN (METROPOLITAN AREA NETWORK)

A MAN covers a larger area than a LAN, such as a small town or a city. It connects two or more computers that are located apart but reside within the same or different cities. A MAN is expensive and may or may not be owned by a single organization.

WAN (WIDE AREA NETWORK)

A WAN covers a larger area than both a LAN and a MAN, such as a country or a continent. A WAN is expensive and may or may not be owned by a single organization. PSTN or satellite links are typically used as the transmission medium for a wide area network.
| LAN | MAN | WAN |
| --- | --- | --- |
| LAN stands for Local Area Network. | MAN stands for Metropolitan Area Network. | WAN stands for Wide Area Network. |
| A LAN's ownership is private. | A MAN's ownership can be private or public. | A WAN also might not be owned by one organization. |
| The transmission speed of a LAN is high. | The transmission speed of a MAN is average. | The transmission speed of a WAN is low. |
| Propagation delay is short in a LAN. | Propagation delay is moderate in a MAN. | Propagation delay is long in a WAN. |
| There is less congestion in a LAN. | There is more congestion in a MAN. | There is more congestion in a WAN than in a MAN. |
| A LAN's design and maintenance is easy. | A MAN's design and maintenance is more difficult than a LAN's. | A WAN's design and maintenance is more difficult than a LAN's or a MAN's. |
| There is more fault tolerance in a LAN. | There is less fault tolerance in a MAN. | There is even less fault tolerance in a WAN. |


URI (UNIFORM RESOURCE IDENTIFIER)

A Uniform Resource Identifier (URI) is a string of characters that unambiguously identifies a particular
resource. To guarantee uniformity, all URIs follow a predefined set of syntax rules, but also maintain
extensibility through a separately defined hierarchical naming scheme (e.g. http://). Such identification
enables interaction with representations of the resource over a network, typically the World Wide Web, using
specific protocols. Schemes specifying a concrete syntax and associated protocols define each URI. The
most common form of URI is the Uniform Resource Locator (URL), frequently referred to informally as a web
address. More rarely seen in usage is the Uniform Resource Name (URN), which was designed to
complement URLs by providing a mechanism for the identification of resources in particular namespaces.

URN (UNIFORM RESOURCE NAME) & URL (UNIFORM RESOURCE LOCATOR)

A Uniform Resource Name (URN) is a URI that identifies a resource by name in a particular
namespace. A URN may be used to talk about a resource without implying its location or how to
access it. For example, in the International Standard Book Number (ISBN) system, ISBN 0-486-
27557-4 identifies a specific edition of Shakespeare's play Romeo and Juliet. The URN for that
edition would be urn:isbn:0-486-27557-4. However, it gives no information as to where to find a copy
of that book.

A Uniform Resource Locator (URL) is a URI that specifies the means of acting upon or
obtaining the representation of a resource, i.e. specifying both its primary access mechanism and
network location. For example, the URL http://example.org/wiki/Main_Page refers to a resource
identified as /wiki/Main_Page, whose representation, in the form of HTML and related code, is
obtainable via the Hypertext Transfer Protocol (http:) from a network host whose domain name is
example.org.
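
A small JavaScript sketch using the standard WHATWG URL class (discussed below) shows how that example URL decomposes, and how the example URN splits into its parts:

```javascript
// Decompose the example URL into access mechanism, host, and path.
const url = new URL('http://example.org/wiki/Main_Page');
console.log(url.protocol); // "http:"           -> primary access mechanism
console.log(url.hostname); // "example.org"     -> network host
console.log(url.pathname); // "/wiki/Main_Page" -> resource on that host

// A URN consists of the scheme, a namespace id, and a namespace-specific
// string; this simple split works because the ISBN part has no colons.
const [scheme, nid, nss] = 'urn:isbn:0-486-27557-4'.split(':');
console.log(scheme, nid, nss); // "urn" "isbn" "0-486-27557-4"
```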

A URN may be compared to a person's name, while a URL may be compared to their street address.
In other words, a URN identifies an item and a URL provides a method for finding it.

Technical publications, especially standards produced by the IETF and by the W3C, normally reflect a
view outlined in a W3C Recommendation of 2001, which acknowledges the precedence of the term URI
rather than endorsing any formal subdivision into URL and URN.

“URL is a useful but informal concept: a URL is a type of URI that identifies a resource via a
representation of its primary access mechanism (e.g., its network "location"), rather than by some other
attributes it may have.”

As such, a URL is simply a URI that happens to point to a resource over a network. However, in non-
technical contexts and in software for the World Wide Web, the term "URL" remains widely used.
Additionally, the term "web address" (which has no formal definition) often occurs in non-technical
publications as a synonym for a URI that uses the http or https schemes. Such assumptions can lead to
confusion, for example, in the case of XML namespaces that have a visual similarity to resolvable URIs.

Specifications produced by the WHATWG prefer URL over URI, and so newer HTML5 APIs use URL
over URI.

“Standardize on the term URL. URI and IRI [Internationalized Resource Identifier] are just confusing.
In practice a single algorithm is used for both so keeping them distinct is not helping anyone. URL also easily
wins the search result popularity contest.”

While most URI schemes were originally designed to be used with a particular protocol, and often
have the same name, they are semantically different from protocols. For example, the scheme http is
generally used for interacting with web resources using HTTP, but the scheme file has no protocol.

CLUSTER COMPUTING
Cluster computing follows the principles of distributed systems, with a LAN acting as the connection unit. Clustering approaches include HPC IaaS and HPC PaaS offerings, which are more expensive and harder to set up and maintain than a single computer. A computer cluster helps to greatly reduce the unavailability of these systems and provides larger storage than a desktop workstation or single computer.

Some of the most widely known applications of cluster computing are petroleum reservoir simulation, the Google search engine, earthquake simulation, and weather forecasting.

Understanding Cluster computing

Clusters are widely used where the data or content being handled is critical or where high processing speed is expected. Sites and applications that require extended availability without downtime, or that need heavy load-balancing capability, use these cluster concepts to a large extent.

High-Availability (HA):

Computers face failure very often. High availability relates directly to our increasing dependence on computers, because at present they play a vital role, mainly in companies whose most important function is precisely the offering of a stable computing service, such as e-business or databases.

A high-availability cluster aims to maintain the availability of the services offered by a computer system through server replication and through redundant hardware and software reconfiguration. Here, multiple computers act together as one, each monitoring the others and taking over their services if any of them fail. Some processing power is lost in the process, but availability is the key concern. Fault tolerance is achieved through redundant supplies and boards, and by providing alternative paths through fully connected, highly networked systems.

Cluster Load Balancing:

With increased network and internet usage, load balancing is a key function of these clusters, through which greater network capacity and increased performance are easily achieved. Here, all nodes remain integrated with all instances, so that every node is aware of the requests in the network. The systems do not work jointly on a single process but redirect requests individually as they arrive, based on a scheduler algorithm. Another important factor in cluster management is scalability, which is largely achieved when each of the servers is fully utilized.

When balancing load among servers that have the same capability to respond to a client, problems can arise because a request may be addressed by multiple servers, leading to confusion among them. An element is therefore needed to apply the balancing between servers and users; in practice, multiple servers are placed behind what appears to clients as a single address. A common example of this scenario is a Linux-based server farm.
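
A minimal round-robin scheduler sketch in JavaScript (server addresses are hypothetical) illustrates how requests are redirected individually as they arrive:

```javascript
// Round-robin: each incoming request is handed to the next server in
// turn, so clients see one address while work is spread across machines.
const servers = ['10.0.0.1', '10.0.0.2', '10.0.0.3']; // hypothetical nodes
let next = 0;

function pickServer() {
  const server = servers[next];
  next = (next + 1) % servers.length; // wrap around to the first server
  return server;
}

// Example: five requests are spread across the three servers.
for (let i = 0; i < 5; i++) {
  console.log(`request ${i} -> ${pickServer()}`);
}
```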

Types of Cluster computing

1. Load-balancing clusters: Here the workload is equally distributed across the multiple servers installed in the cluster network.

2. High-availability (HA) clusters: Groups of clusters designed to maintain very high availability. Computers in these systems are considered very reliable and may not face downtime in almost any instance.

3. High-performance (HP) clusters: This computer networking approach uses supercomputers and cluster computing to solve complex and highly advanced computational problems.

Advantages of using Cluster computing

1. Cost efficiency: Compared to highly stable, large-storage mainframe computers, these cluster computing systems are considered far more cost-efficient and cheaper. Moreover, most of these systems offer higher performance than mainframe computer systems.

2. Processing speed: Processing speed is comparable to that of mainframe systems and other forms of supercomputers on the market.

3. Expandability: Scalability and expandability are the next key advantage of these clustered systems, because they offer the opportunity to add any number of additional resources or systems to the existing computer network.

4. High availability of resources: Computers face failure very often, and availability plays a key role in these systems. The failure of one of the currently active nodes is passed on to the other live nodes, and on receiving this message, another node operates as a proxy for the dead node. This ensures enhanced availability of these systems.

A computer cluster consists of loosely connected or tightly coupled computers that work together so that they can be used as a single system by end users. On top of this, these computing systems ensure sustained performance and availability, which makes them vastly popular and attractive to clients in competitive markets.
