
GROUP ASSIGNMENT

TECHNOLOGY PARK MALAYSIA


CT050-3-2-WAPP

Data Center Infrastructures


NP2F2009IT
HAND OUT DATE: 21 OCTOBER 2021
HAND IN DATE: 13 MARCH 2022
WEIGHTAGE: 50%

Submitted by:
Amit Thakur (NP000384)
Dinanath Dahal (NP000391)
Prajwal Ghale (NP000477)

Submitted to:
Mr. Jay Neupane

INSTRUCTIONS TO CANDIDATES:
1. Submit your assignment at the administrative counter.

2. Students are advised to underpin their answers with the use of references (cited using
the Harvard Name System of Referencing).

3. Late submission will be awarded zero (0) unless Extenuating Circumstances (EC) are
upheld.

4. Cases of plagiarism will be penalized.

5. The assignment should be bound in an appropriate style (comb bound or stapled).

6. Where the assignment should be submitted in both hardcopy and softcopy, the
softcopy of the written assignment and source code (where appropriate) should be
on a CD in an envelope / CD cover and attached to the hardcopy.

7. You must obtain 50% overall to pass this module.

Acknowledgement
First of all, we would like to express our sincere gratitude to the Asia Pacific University (APU) for giving us the opportunity to carry out research on this project. We gratefully thank the Lord Buddha Education Foundation (LBEF) for its support and guidance during this project. We also thank Mr. Jay Neupane for his continuous guidance throughout our bachelor's study and this research. Lastly, we would like to thank our friends and parents for inspiring us to complete this project.

Dinanath Dahal (NP000391)


Amit Thakur (NP000384)
Prajwal Ghale (NP000477)
BSc. IT (4th semester)

Table of Contents
1 Introduction................................................................................................................1
1.1 Data center...........................................................................................................1
1.2 Data center Requirements...................................................................................1
2 Project statement.........................................................................................................2
3 Literature Review.......................................................................................................2
4 Data Centre Technology.............................................................................................3
4.1 IT Hardware........................................................................................................3
4.1.1 Data Center Standards................................................................................4
4.2. Location...............................................................................................................5
4.2 Critical building system.......................................................................................6
4.3 Raised floor and dropped ceiling.........................................................................7
4.4 HVAC System/Cooling system............................................................................8
4.4.1 Cooling methods...........................................................................................9
4.5 Fire protection system.......................................................................................13
4.6 Rack Design.......................................................................................................13
4.7 Building Architecture........................................................................................16
5. Power.........................................................................................................................18
5.1. Transformers.....................................................................................................19
5.2. Automatic transfer switch.................................................................................19
5.3. Uninterruptible Power Supply (UPS) and Generators.....................................19
6. Fire Suppression........................................................................................................20
6.1. Water Fire Suppression System........................................................................20
7. Energy Management and Green Computing............................................................23
8. Overview of suggested Data center...........................................................................25
9. Limitations................................................................................................................27
10. Conclusion.............................................................................................................28
11. Reference…………………………………………………………………………….....29

Figure 1 Location....................................................................................................................6
Figure 2:Building System.......................................................................................................7
Figure 3: Hot Aisle Containment System............................................................11
Figure 4: Cold Aisle Containment System..........................................................................11
Figure 5: Building Architecture...........................................................................................18
Figure 6: Floor Plan..............................................................................................................20
Figure 7: Interlock System...................................................................................................22
Figure 8: Data Center...........................................................................................................27

1 Introduction
1.1 Data center
A data center is a physical facility where companies store mission-critical
applications and data to assure their availability and security. Access points,
switches, surveillance systems, storage devices, servers, and other computing
infrastructure are all housed in data centers, and these components are responsible
for storing and managing the corporation's vital systems. A data center is a location
where storage arrays and administration, backup, and recovery equipment, such as
routers, switches, firewalls, servers, and cabling, are housed and operated. A data
center also contains supporting hardware including power supplies, cooling systems,
racks, generators, and cables.

1.2 Data center Requirements


A safe, defendable, and cost-effective data center can be built with proper
planning and architecture. The following are some of the most important factors to
consider while constructing a safe data center:

 Site selection
The location of the data center should be carefully determined in order to
obtain the optimum combination of optimal attributes. The location should
be secure and free of natural environmental hazards such as flooding or
landslides.
 Facility Security/ Monitoring
Security systems for the data center should be numerous. Card or
biometric access systems should be used to allow people in and out of the
facility. Only the areas required by their jobs should be accessible to the
employees. Security monitoring equipment and personnel should be kept
separate from the main computer system, and important screens should not
be visible to passers-by or via windows.
 Power Systems
The generators should switch over automatically in the event of a power outage.
The critical systems should also be connected to twin uninterruptible power
supplies (UPS) to ensure that they are not affected during the generator start-up
phase.
 Flexibility
Data center infrastructure should be built from components that can be modified
or moved easily. Inflexible infrastructure leads to higher long-term costs. The
scalability of a data center is ultimately determined by whether it has adequate
infrastructure to manage growth prospects.
 Modularity
The space should be designed with interchangeable parts, and the same
infrastructure should be installed in all server cabinets. Modularity makes it
possible to build a data center infrastructure that is both modest and scalable; on
a smaller scale, it also improves reliability. If a failure happens in one part of a
data center, consumers can quickly connect to the same infrastructure in another
location and be back in operation within minutes.

2 Project statement
We propose a data center infrastructure design for this project. We cover the full
architecture, including site selection, cooling, fire protection, rack design, and
power supply. Physical and logical security, as well as energy management and
green computing, are also included.

3 Literature Review
A data center is a facility where an organization's shared IT operations and
equipment are consolidated for the purposes of storing, processing, and
disseminating data and applications. Data centers are critical to the continuity of
daily work since they house and maintain an organization's most critical and
proprietary resources. A data center also supports IT infrastructure tasks such as
computer networking and data storage.

"All electrical devices can be energy efficient that range up to presumably


reachable limitations," say Navneet Singh and Dr. Jatinder Singh Bal. Data is
known to flow through a portion of the electric force. This generated energy has a
more negative influence on the environment. As a result, there is a strong need to

2
distinguish data centers as energy efficient. A sequence of phases can be used to
solve the problem of high electric power operation. Cooperation between the
network infrastructure regulation and the energy production assets visible
controller should allow for a range of solidification and freezing alternatives, as
well as the ability to modify the state of the device. While saving money on force
utilization by (server farm), extreme caution should be exercised to ensure that it
never has an impact on the cost of services offered to end clients, such as Service
Level Agreement SLA violations, which should be maintained to a minimum.
(Navneet Singh, Dr. Jatinder Singh Bal, 2017)

This demonstrates the need to have the right mix of servers, computers, networking
equipment, and software to run software applications in data centers.

Cisco notes that the ideal data center location is easily accessible, with several
entrances and exits; restricted property and access can impair daily material
deliveries and increase the response time for services that should be delivered
quickly. To fix an issue in an emergency, we need to examine the proximity of the
location and network latency, so that staff can arrive within the required time frame.
We should also take offsite storage capacity and facilities into account and keep
them close by so that data can be retrieved in a timely manner. The cooling capacity
of a possible data center location should also be examined, since dedicated cooling
rooms are required in addition to the general office space. (Cisco, 2021)

4 Data Centre Technology


4.1 IT Hardware
One of ZiPro Company's Information Technology (IT) hardware consultants has
created a data center design solution for SPIC Health Care's data center
construction. In this part, the consultant has designed and recommended the best
server type, storage system, data center standard, and network architecture for SPIC.

4.1.1 Data Center Standards


When it comes to important business activities, a Tier 3 or Tier 4 data center is
typically suggested. Because SPIC is a large-scale service provider to its partners
and customers (patients), justifying and recommending the best tier design is
necessary in order to obtain Uptime Institute accreditation. Tier 3 and Tier 4 data
centers serve different purposes and provide different solutions. The arguments and
recommendations for both tiers are as follows:

 Tier 3 – N+1
a. 99.982% availability
b. Single power source
c. Single (N+1) redundancy for power feeds, diverse network paths, multiple
Uninterruptible Power Supplies (UPS), backup generators, and the cooling system
 Tier 4 – 2N+1
a) 99.995% availability
b) Multiple redundancy for power feeds, diverse network paths, multiple
Uninterruptible Power Supplies (UPS), the cooling system, and backup generators
c) Parallel power sources (two electric utilities)

Tier 4 facilities are more capable than Tier 3 facilities, particularly in terms of uptime
(roughly 0.4 hours of annual downtime versus 1.6 hours for a Tier 3 site). Tier 4
redundancy is also more robust than Tier 3 redundancy, as it includes multiple
redundant power feeds, diverse network paths, and multiple Uninterruptible Power
Supplies (UPS). It is also referred to as 2N+1, or a backup for every backup.
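As a rough check of these figures, the annual downtime implied by an availability percentage can be computed directly; a minimal Python sketch using only the tier percentages quoted above:

    HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

    def annual_downtime_hours(availability_percent: float) -> float:
        """Convert an availability percentage into expected downtime per year."""
        return HOURS_PER_YEAR * (1 - availability_percent / 100)

    print(f"Tier 3 (99.982%): {annual_downtime_hours(99.982):.1f} h/year")   # ~1.6 h
    print(f"Tier 4 (99.995%): {annual_downtime_hours(99.995):.2f} h/year")   # ~0.44 h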

Recommendation for Data Center Standard based on Uptime Institute

Tier 4, however, requires a parallel power supply (2N+1) from two electric utilities to run
the data center, because of its full power-supply redundancy requirement. Tier 3 is
therefore the recommended option for constructing SPIC's data center: it meets SPIC's
availability needs with the power utility arrangement that is actually attainable, and the
design is qualified to attain Uptime Institute Tier 3 certification.

4.2. Location
We must analyze weather maps and other geographic data in order to determine
suitable locations. Based on records of earthquakes, floods, rainfall, and other
weather reports, we must pick a site that has a lower likelihood of flooding, heavy
rain, earthquakes, and other calamities. We must also avoid areas where terrorists
are attacking or threatening to attack.

Figure 1 Location

It is also important to consider the country's network backbone. Because the
datacenter must be linked to the internet, gigabit network connectivity is vital, and
the farther a center is from the network backbone, the higher the cost of connectivity
becomes.

Labor costs, operational costs, government incentives, utilities and taxes, quality of
life, environmental considerations, safety and national security, communication
systems, the market for critical parts, proximity to providers, and other factors all
play a role in determining the location.

As a result of these considerations, we recommend that data centers be located
away from residential areas. We need to select a place with enough land to allow
flexibility, as well as at-grade expansion of parking, storage, and other staff
amenities. Commuting distances of no more than about an hour are recommended.
Because energy is critical, the area should have a number of backup power
networks. In order to transport components, datacenters require warehouse and
truck access facilities. Locations close to rail lines and airports should be avoided so
that vibration does not harm mechanical components. In addition, the location must
be lawful, and the authorities must provide authorization.

4.2 Critical building system

Figure 2: Building System

The five key building systems for data centers are the power supply, the security
system, the fire suppression system, the heating, ventilation and air conditioning
(HVAC) system, and the raised floor system. Data centers must be designed so that
these systems use the least amount of electricity possible, adequate space is
provided, and mechanical systems operate efficiently.

The raised floor system, which has various specifications according to datacenter
space, is the first thing we consider when establishing a datacenter. Local and
central security management are also important considerations. Physical and
logical security are two types of security.

4.3 Raised floor and dropped ceiling
The raised floor aids with cable management and provides optimum airflow to the
datacenter. The actual raised height is determined by the size of the datacenter that
will be built. Cast aluminum floor tiles are recommended for this data center project
because of their high floor-loading capability. According to Michael A. Bell's
research, a 12-inch raised floor height is ideal for spaces smaller than 1,000 square
feet; from 1,000 to 5,000 square feet, a raised floor of 18-24 inches is recommended;
and for areas larger than 10,000 square feet, a 24-inch height is required.
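The height guidance above can be expressed as a simple lookup; the sketch below uses the thresholds quoted from Bell's research and, as an assumption, treats the unstated 5,000-10,000 square foot band as falling in the 18-24 inch range:

    def recommended_floor_height(area_sqft: float) -> str:
        """Raised-floor height guidance as quoted above.

        The 5,000-10,000 sq ft band is not stated explicitly in the text,
        so it is assumed here to fall within the 18-24 inch range.
        """
        if area_sqft < 1_000:
            return "12 inches"
        if area_sqft <= 10_000:
            return "18-24 inches"
        return "24 inches"

    print(recommended_floor_height(3_500))    # 18-24 inches
    print(recommended_floor_height(12_000))   # 24 inches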

The raised floor can also serve as a signal reference grid (SRG): the grid can be bonded
into the access floor system to improve signal reference grounding. In addition, the
raised floor delivers cooled air to computer systems through bottom-to-top circulation.

Raised floor types

The following are the two types of elevated floors:

Low profile raised floor:

Low profile raised floors are used to manage connections, wiring, and pipework and
are less than six inches in height. Because of the low floor level, there is no way for
air to circulate underneath the floor. This sort of raised floor cannot be used in our
situation since it prevents the airflow that SPIC requires.

Standard raised floor:

Standard raised floors are generally 12 inches or higher, with some floors reaching
6 feet or more. These floors allow for underfloor cable management as well as
ventilation. This type of raised floor is used in the majority of data centres.

Chosen raised floor and justification

After evaluating both access floors and doing a thorough analysis, the optimal
option for the SPIC data centre was selected. The standard access floor is the best
solution for a variety of reasons. This style of raised floor can take up to 300 kg of
weight, making it extremely sturdy and capable of holding as many servers as
desired.

Furthermore, it is recommended by most data centres and is used in the majority of
them. One of the key advantages of using this type for SPIC is that it provides for
cable management and airflow, both of which are necessary for cooling, by allowing
air to flow through the raised floor.
Cooling is arranged by placing perforated floor tiles under the equipment: cold air is
blown in below and warm air rises upward. This type also adheres to the TIA-942
specification. The chosen type is therefore very suitable for SPIC, as it allows airflow
and cable management; in contrast, low profile raised floors do not allow airflow and
would not be suitable for SPIC. A visit to an approved data centre also revealed that
it uses this type, confirming the study and decision. Finally, the chosen type will be
the standard raised floor, which will be more effective and efficient for the SPIC
data centre.

4.4 HVAC System/Cooling system


HVAC is the common abbreviation for heating, ventilation, and air conditioning,
and is the term generally used by datacenter employees and engineers.

Data centers require a lot of energy, which results in a lot of heat being created by
the computing machines in the datacenter. Within the HVAC system, rooftop
cooling units, raised floor cooling units, and distribution units are fed by centralized
cooling towers. With the help of a raised floor, airflow across the whole floor
becomes consistent and effective for conditioning. The heat generated by racks,
mechanical hard drives, and other equipment is enormous, so a separate cooling unit
is required in the space between the racks. The quantity of heat generated by
direct-access storage devices and modern blade servers is the most significant issue
and the most costly for the datacenter.

As a result, suitable vent apertures on various parts of the datacenter, such as the
top sides and the raised floor, are used for ventilation. Even the ventilation and
cooling equipment itself generates heat, which must be taken into account when
installing an HVAC system.

Since cooling is such an important aspect of datacenters, various cooling models
and technologies are used. Electrical power supplied to datacenter components is
ultimately converted to heat. Heat is measured in British Thermal Units per hour
(BTU/hr), whereas electrical power is measured in watts (W). Equipment drawing
100 W of electrical power generates roughly 341.2 BTU/hr of heat, which
necessitates the use of a hot or cold aisle containment system to cool the datacenter.
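The watt-to-BTU conversion used above is linear (1 W of electrical load rejects about 3.412 BTU/hr of heat), so the heat load of any rack can be estimated from its electrical draw; a small illustrative sketch:

    BTU_PER_HOUR_PER_WATT = 3.412  # 1 W of electrical load ~ 3.412 BTU/hr of heat

    def heat_load_btu_per_hour(electrical_load_watts: float) -> float:
        """Estimate the heat rejected by equipment from its electrical draw."""
        return electrical_load_watts * BTU_PER_HOUR_PER_WATT

    print(heat_load_btu_per_hour(100))     # ~341.2 BTU/hr, the figure quoted above
    print(heat_load_btu_per_hour(5_000))   # heat load of a hypothetical 5 kW rack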

The supply air temperature should be kept in the range of 70-74 degrees F, and the
relative humidity should not exceed 50%. Because we recommend a raised floor, the
airflow in the system should be from the bottom up, and because we recommend a
hot aisle rack layout, hot aisle cooling solutions should be used. It is important to
avoid air leaks in the raised floor, and a vapour barrier should be used because
moisture is dangerous to electrical equipment. The static pressure under the raised
floor only needs to be maintained about 5% above the ambient pressure.
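These environmental limits can be checked automatically by the monitoring system; a minimal sketch of such a check, using the 70-74 °F and 50% relative humidity thresholds recommended above (the sensor values shown are hypothetical):

    def within_envelope(temp_f: float, relative_humidity_pct: float) -> bool:
        """Check a sensor reading against the recommended envelope:
        70-74 degrees F and relative humidity not above 50%."""
        return 70.0 <= temp_f <= 74.0 and relative_humidity_pct <= 50.0

    print(within_envelope(72.5, 45.0))   # True  - within limits
    print(within_envelope(76.0, 48.0))   # False - too warm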

4.4.1 Cooling methods


In conventional datacentres, two cooling approaches are primarily employed. The
hot/cold aisle containment system is an air-based approach that separates hot and
cold air; a liquid cooling system is another option.

Hot Aisle Containment System

A hot aisle containment system encloses the hot aisle between the equipment racks,
leaving the rest of the data center environment cool. Separating the hot and cold air
streams creates a large cold-air return plenum, which helps data center cooling.
However, the racks must be organized according to hot aisle requirements; otherwise,
the heated air will mix with the rest of the room air, making proper airflow impossible.

Figure 3: Hot Aisle Containment System

Cold Aisle Containment System

In a cold aisle containment system, panels form a ceiling over the cold aisle, keeping
the cold supply air from mixing with the rest of the room, which remains at the
general room temperature. The cold aisle is enclosed by extending vertical panels or
ducting from the cabinets up to the ceiling (Peterson, 2015).

Figure 4: Cold Aisle Containment System

Pros and cons of the Hot Aisle

Pros and cons of the Cold Aisle

Although the hot aisle approach is somewhat more expensive, we recommend it because
of its efficiency. Equipment will also last longer when the hot aisle is contained. The
full reasoning is provided in the section on equipment racking.

After weighing the pros and cons of both containment systems, the hot aisle
containment system was chosen for the SPIC data centre. The hot aisle was chosen for
a variety of reasons that give it an edge over the cold aisle. Among the most significant
factors for the SPIC data center is controlling the heat in its storage area, which must
adhere to the typical data centre temperature of 21-24 degrees Celsius. The chosen
aisle containment system will provide a cool, human-friendly climate throughout the
space. The cold aisle containment system, on the other hand, would leave a hot area
across the rest of the room that would be difficult for people to work in over time,
making it unfriendly to staff. It would also work against the TIA-942 guidance on
room temperature, and the temperature difference, or ΔT, would increase, making the
room very hot. Furthermore, a cold aisle enclosure can interfere with the fire detection
system, rendering it less effective and in violation of industry regulations, whereas the
hot aisle can use regular fire detection systems.

A hot aisle containment system, which is more efficient and reliable, is used in most
modern data centres. The chosen system may also be less costly over time,
irrespective of its higher start-up cost, because the cold aisle approach would require
more maintenance and would not be efficient for the SPIC data center. To accomplish
green computing, server farms must reduce, if not remove, excess heat. The hot aisle
helps accomplish green computing since it reduces heat by stopping it from entering
the room and does not cause pollution. Lastly, the hot aisle will be more efficient,
removing heat and separating it from people, resulting in an ecologically friendly data
centre. This approach will also adhere to the TIA-942 standard, which stipulates that
the room temperature should be between 21 and 24 degrees Celsius, and ordinary fire
detection systems will be able to operate without difficulty. In addition, a visit to the
HDC data centre revealed that they use this type, confirming our study and choice.
SPIC's air conditioning system, depicted in the image below, will prevent hot air from
spreading inside the room, making it more effective, cost-effective, and
environmentally beneficial.

4.5 Fire protection system
The basic causes of fire are electricity and heat, and both are present in datacenters,
so there is always the possibility of an electrical fire. Fire-rated walls are
recommended in conjunction with a thorough fire suppression and detection system.
To protect the datacenter from a fire emergency, sprinklers, smoke and heat detection
systems, chemical agents, and several manual emergency systems must be installed.

In order to prevent fire emergencies and protect property, data, and lives, we must
install a fire detection and suppression system in the datacenter. We recommend
installing a heat and smoke detection system beneath the raised floor. Local alarms,
fire suppression systems, and monitoring departments or stations should all be linked
into the detection system. Detectors should be placed in relation to the datacenter's
air circulation and probable fire locations, as recommended by experts. This aids in
the early detection of a fire.

The National Fire Protection Association (NFPA) in the United States has established
specific requirements for fire prevention systems, and we recommend applying those
standards to the datacenter we build. The NFPA 72E and NFPA 75 standards are
recommended. An NFPA 75 compliant fire-rated wall is fire-resistant and will not
catch fire. Manual water dispensing systems, fire extinguishers, pull stations, and
other items should be arranged such that, in the event of a minor fire, any employee
is able to handle it. Clean agents or chemical suppression systems are chosen as the
first line of defense in case of major fires or inaccessible places.

4.6 Rack Design


Recommendation for Rack model

Racking, as we all understand, is a form of physical support for IT infrastructure: it
houses critical equipment such as servers and network boards and provides mounting
for devices in standardized server racks.

Equipment racking is the process of positioning equipment, with the most important
equipment placed first, taking into account criteria such as the demands of the
systems in use and the capacity required. Different computer systems are installed
according to the requirements and capabilities needed. Patch panels, for example, are
installed in racks to link two networks or to route a connection from its source to a
particular location.

There are numerous factors to consider when selecting rails for compatibility with
the chosen servers. Most modern racks use rectangular-hole vertical rack rails on
their mounting posts. Without the appropriate rails and mountings, various
challenges can arise: the equipment becomes hard for staff to manage and, in some
situations, maintenance becomes impossible without removing other equipment or
sacrificing space. The proposed rack is mobile, making the situation easier for the
workers to manage, especially with the chosen rails; as a result, when a problem
arises, the technician's job becomes much easier.

We should also examine the materials used in the rack's construction and whether it
conforms to the Restriction of Hazardous Substances (RoHS) regulation, which
limits the use of hazardous substances in the manufacture of products. The chosen
rack has the capacity to hold IT equipment weighing up to 3,000 pounds. If a rack
does not meet the necessary requirements, it may cost the organization customers
and trust.

Finally, this demonstrates that the most important aspect to consider when planning a
data center is rack modeling and equipment racking. Different physical requirements,
including environmental requirements, must be met in order to maintain, scale, and
provide effective and efficient electricity, as illustrated in the diagram.
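As a simple illustration of the racking constraints discussed above, the combined weight of the equipment placed in a rack can be checked against the 3,000-pound static rating quoted for the chosen rack; the loadout in the example is hypothetical:

    RACK_STATIC_LOAD_LIMIT_LB = 3_000  # rating quoted above for the chosen rack

    def rack_load_ok(equipment_weights_lb: list[float]) -> bool:
        """Check that the combined weight of racked equipment stays within
        the rack's static load rating."""
        return sum(equipment_weights_lb) <= RACK_STATIC_LOAD_LIMIT_LB

    # Hypothetical loadout: 20 x 1U servers at 35 lb each plus a 120 lb UPS shelf
    print(rack_load_ok([35] * 20 + [120]))   # True (820 lb total)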

Advantage

The advantages are:

Hot Aisle Containment System

A hot aisle containment system (HACS) encloses the hot aisle to collect all of the
high-temperature air from the IT equipment, allowing the rest of the data center room
to act as a large cold-air supply plenum. By containing the hot aisle, the hot and cold
air streams are kept separate.

A hot aisle containment system keeps the general data center room cool, which
removes the concern that IT equipment is not being cooled properly, and small
sections can be left uncontained if necessary. Because the hot air is confined rather
than dispersed around the room, staff can work in normal comfort. Furthermore, the
precise distribution of supply air across the room becomes less important: as long as
cool air is provided into the general room, expensive raised-floor supply distribution
structures and large supply ductwork can be avoided, although the hot air must still
be returned to the cooling units.

Cold Aisle Containment System

A cold aisle containment system (CACS) encloses the cold aisle, allowing the rest of
the data center room to act as a large hot-air return plenum. By containing the cold
aisle, the hot and cold air streams are separated. For this containment technique, the
rack rows should be arranged in a consistent hot-aisle/cold-aisle layout.

Cold aisle containment systems can be employed with raised-floor supply plenums
or with overhead ducted supply and no raised floor. If the controls are precisely
designed, this can provide some benefits by giving more direct control over the
supply air that feeds the equipment airflow. Cold aisle containment does not require
a return-air ducting plan and is typically less expensive to deploy: all that is needed
is to cap the top of the aisle between the cabinets and add doors at the ends. This
also makes retrofitting an existing data center to this arrangement less difficult.

Disadvantages

The disadvantages are:

Hot Aisle Containment System

The hot aisle containment system, on the other hand, has a few drawbacks. Working
inside a contained hot aisle is difficult, and the extreme temperatures inside enclosed
hot aisles should be avoided when staff need to work on equipment there. This can
make maintenance work more expensive and inconvenient.

Cold Aisle Containment System

The cold aisle containment system also has a few drawbacks. Because the rest of the
room becomes a hot-air return, hot air surrounds the people working there. Data
center staff may assume everything is fine inside a cold aisle containment system,
but equipment outside the contained aisle, such as standalone racks or network and
communication racks, may be subjected to warm, moist air.

Recommendation

After analyzing all of these issues, we propose a hot aisle containment system for the
datacenter. Although it is the more expensive option, it performs better than the cold
aisle system. It also permits higher chilled-water temperatures and more hours of
economizer operation, making it more capable and financially worthwhile in terms
of power costs, and the contained heat can be managed to suit the situation.

4.7 Building Architecture

If leasing the datacenter building appears to be the most cost-effective option, we


recommend it. However, we recommend designing your own datacenter building for
long-term use. A single-story datacenter building will save money on cooling. The
usage of small floor plates for datacenters should be avoided entirely. Large floor

plates provide adequate cooling, flexibility, and physical security due to the lack of
windows.

Furthermore, because we recommend a raised floor architecture, the minimum
ceiling height should be 13-14 feet. Generators must be installed, as well as several
backup supply lines, so the location and cabling for those electrical supplies should
be planned carefully.

Figure 5: Building Architecture

The above image depicts the building's physical design as well as its cabling
architecture. There is only one entrance at the front, where physical security will be
implemented, and a backup equipment room is located there. The office, where all of
the monitoring will take place, is isolated from the datacenter hall.

5. Power
Data centers should be operational 24 hours a day, seven days a week. Power is
essential for the system to work and to maintain manageable uptime. Data centers are
now designed to have less than 1% downtime, or no downtime at all. In the event of a
power outage, a sophisticated electrical supply plan, reliable feeds and systems, and
multiple power and energy storage redundancies should all be ready to go. For this
datacenter, we recommend utilizing 380 V DC distribution.

A table of the voltages used by both IT and non-IT equipment was compiled for this design.

According to our estimates, the cooling system consumes more than 40% of the
electricity in this datacenter, while the remaining 50-60% is consumed by the
servers, which must stay up 24 hours a day. Both the cooling system and the servers
must therefore be considered when planning for a power outage, and the backup
Uninterruptible Power Supply (UPS) should be able to keep all of the equipment
running for at least one hour.
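The one-hour ride-through target above translates directly into a minimum usable battery capacity; a rough sizing sketch (the 200 kW load and 90% inverter efficiency are assumptions for illustration only):

    def ups_battery_kwh(it_load_kw: float, runtime_hours: float = 1.0,
                        inverter_efficiency: float = 0.9) -> float:
        """Rough usable battery energy needed to ride through an outage."""
        return it_load_kw * runtime_hours / inverter_efficiency

    print(f"{ups_battery_kwh(it_load_kw=200):.0f} kWh")   # ~222 kWh for a 200 kW load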

The datacenter's floor plan is shown below. The diagram shows how power
pathways can be maintained so that there is no single point of failure, and how
switching between utility and UPS power can be done smoothly without affecting
datacenter service.

Figure 6: Floor Plan

5.1. Transformers
Modern data centers require transformers, through which electricity is transmitted
and distributed. In this data center, transformers are used to step voltage up and down
as required by the electrical equipment.

5.2. Automatic transfer switch


In the event of a blackout, an automatic transfer switch (ATS) switches from the
primary to a backup power source. It detects the power outage and transfers the load
to a UPS or to another power grid if one is available. We have recommended a UPS
as well as a second power grid for our datacenter, so if the other grid is available, the
system will automatically transition to it.
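The switching priority described above, utility feed first, then the second grid, then generator, then UPS battery, can be summarized in a simplified sketch (this illustrates the decision order only and is not vendor ATS firmware):

    def select_power_source(utility_ok: bool, second_grid_ok: bool,
                            generator_ok: bool) -> str:
        """Simplified priority order an automatic transfer switch might follow."""
        if utility_ok:
            return "utility"
        if second_grid_ok:
            return "second grid"
        if generator_ok:
            return "generator"
        return "UPS battery"

    print(select_power_source(utility_ok=False, second_grid_ok=True, generator_ok=True))
    # -> "second grid"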

5.3. Uninterruptible Power Supply (UPS) and Generators.
A UPS is an uninterruptible power supply, which uses batteries to store energy. In
the case of a power failure, it switches on immediately and supplies standby power to
the systems. Depending on capacity, it can typically provide roughly 10-20 minutes
of backup electricity. If utility power does not come back quickly, this leaves enough
time to start the generators so that the backup can be sustained for a longer period.

Generators are also required in this data center. Although we recommend two grid
feeds, we must plan ahead and be prepared to provide backup electricity for at least
24 hours, and on-site generation is the solution. Generators can run on fuels such as
diesel, natural gas, or other fuel oil; fuel can be purchased and stored on site so that it
is available as needed. Generators are therefore an essential component of any
datacenter.
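The 24-hour backup target above also implies a minimum on-site fuel reserve; a rough estimate under an assumed diesel consumption rate (about 0.3 litres per kWh, which should be replaced with the manufacturer's figure):

    def fuel_required_litres(generator_load_kw: float, runtime_hours: float = 24.0,
                             litres_per_kwh: float = 0.3) -> float:
        """Rough on-site fuel estimate for the 24-hour backup target."""
        return generator_load_kw * runtime_hours * litres_per_kwh

    print(f"{fuel_required_litres(500):.0f} L")   # ~3,600 L for a 500 kW load over 24 h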

6. Fire Suppression
Fire suppression systems are most often used in circumstances where there is a large
amount of powered hardware (Apex Fire, 2018), so a fire control system is needed in
data centers. A fire in the data center usually translates into downtime, so to limit the
danger of injury and delay, a proper fire suppression strategy should be deployed in
the datacenter. Fire suppression is an important part of any data center disaster
recovery strategy; it is important to consider what type of fire the data center is
vulnerable to, as well as the size of the data center (Titan Power, 2017). Water fire
suppression systems and clean agent fire suppression systems are the two types of
data center fire suppression to consider.

6.1. Water Fire Suppression System

The primary objective of a water sprinkler is not to extinguish a fire, but to contain it
and keep it from spreading. The sprinkler system uses water with an average flow
rate of about 25 gallons per minute per head. The main difficulty with sprinklers is
that they can severely damage equipment: once activated, a head continues to
discharge water until it is shut off. Sprinklers can also occasionally be triggered
accidentally, inflicting unnecessary harm. Water damage to the structure and
machinery can be serious in the case of an activation, and the clean-up needed
afterwards can be significant (Robin, 2019). Sprinkler regulations such as NFPA 13
typically require a 30-minute water supply. Thermally sensitive frangible bulbs or
fusible links activate the sprinkler heads, releasing water only when a head reaches a
particular minimum temperature. By that time, the fire will have grown in size,
producing more smoke and possibly inflicting water damage (Robin, 2015). Despite
the goal of providing excellent operation and uptime while preventing fire,
significant water damage can still cause downtime (Titan Power, 2019).
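Combining the average flow rate and the NFPA 13 supply duration quoted above gives a sense of how much water a single activated head can release, which is why water damage is such a concern; a trivial calculation:

    def sprinkler_water_gallons(flow_gpm: float = 25.0,
                                supply_minutes: float = 30.0) -> float:
        """Water released by one sprinkler head over the required supply duration."""
        return flow_gpm * supply_minutes

    print(sprinkler_water_gallons())   # 750.0 gallons per activated head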

There are a few distinct fire mitigation systems that use water instead of gases to put
out fires. A sprinkler system is the most common of these, and the type of head and
the way the water is held within the pipes can vary. There are two types of sprinkler
piping: wet pipes, which hold water in the pipe until a heat and/or smoke signal
releases it, and dry pipes, which hold the water back behind a secondary or
pre-action valve until a heat and/or smoke signal allows it to release. Pre-action pipes
are the industry standard in IT facilities because they keep the piping above the
equipment dry and prevent moisture from dripping onto it. Single interlocked
systems and double interlocked systems are the two most popular pre-action pipe
systems (Tsohost, 2018).

1. Single Interlocked System


2. Double Interlocked System

Figure 7: Interlock System

Recommendation for Fire Suppression System:

To determine the best fire suppression system, the different types must be compared
in terms of how they work and operate, and which one best meets the need. Both
clean agent and water systems have their own benefits and drawbacks. Clean agent
systems keep water out of the server room, leave no chemical residue on equipment,
and do not themselves pose a hazard; however, unlike sprinkler systems, which are
installed overhead, they require storage cylinders and therefore differ in cost and in
the floor space they occupy. Which is the better fire mitigation strategy depends on
the data center in question.

The clean agent system comes highly recommended for data centers. A clean agent
system can protect a specific room or section of the building, as well as a data
center's underfloor space and its electrical and mechanical rooms. After the clean
agent is discharged from nozzles in the ceiling, the suppression agent floods the
room. Because clean agents are released as gases, they penetrate equipment, floors,
and barriers to extinguish the fire. They also act very quickly: clean agents can reach
extinguishing concentrations within 10 seconds of deployment, reducing the fire
damage in the data center. Furthermore, the approach has no negative impact on
hardware operation or on data saved on hard drives or servers, and when a clean
agent is deployed there is no residue to clean up. A clean agent system helps to
secure the data center at every stage by operating efficiently and effectively.

 Fire Detection System


It is important to be familiar with the various fire phenomena, how fire spreads, and
possible false alarms in order to assure reliable, early detection of fire. Data centers
should install fire detection systems that can even distinguish between a gas leak and
smoke from a fire. Air-sampling smoke detection systems are a great choice since
they detect fires early on, alerting employees and avoiding costly suppression system
discharges. The main types of fire detection systems are smoke detection, heat
detection, and flame detection. For obvious reasons, smoke detectors are the primary
choice in data centers. The two categories of smoke detectors are optical
(photoelectric) detectors and ionization detectors, and by increasing the sensitivity of
the detectors, fire can be detected in its initial stages.

 Emergency Power Off

It is necessary to have an emergency power off (EPO) button in a room full of
equipment so that all devices can be shut down fast. Also, in the rare event that the
suppression system issues a false warning, it is critical to have an emergency
suppression cancel switch on hand to safely disarm the system. To reduce the risk of
someone mistakenly pressing the EPO button and causing unintentional harm, data
center operators should clearly label the button and position it under a clear
lift-cover, preferably with an integrated alarm. In the past, accidental activation has
been a common cause of data center outages, particularly when the button was
mistaken for a door-release switch (SearchDataCenter, 2018).

 Fire Alarm and Emergency Lights


A fire alarm's aim is to alert people to the presence of a fire on the premises. In the
context of a data center, the alarms can be organized in three ways:

 Alarms directed to selected members of staff
 General alarms
 Staged alarms

 Fire Prevention and Protection Services

The major goal is to minimize any operating disruptions, even in the event of a fire,
as well as to effectively protect people and property. To ensure a data center's
business continuity, fire safety is a long-term investment that must be carefully
planned. If a destructive fire cannot be avoided, its effects must be minimized as
much as possible, and environmental damage, for example from extinguishing water,
must be avoided.

7. Energy Management and Green Computing


The energy management of a data center can be handled well when capacity planning
decisions are made correctly. Electrical resources must be used more efficiently and
effectively: it is possible to improve uptime while lowering capital costs. Operating
costs should be reduced, and the effectiveness of power usage should be properly
measured in order to manage the data center and move toward green computing.
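The standard metric for the effectiveness of power usage is Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power; the sketch below uses hypothetical figures broadly consistent with the cooling/IT split described in the power section of this report:

    def power_usage_effectiveness(total_facility_kw: float,
                                  it_equipment_kw: float) -> float:
        """PUE = total facility power / IT equipment power; 1.0 is the ideal."""
        return total_facility_kw / it_equipment_kw

    # Hypothetical: 500 kW of IT load plus 400 kW of cooling and other overhead
    print(f"PUE = {power_usage_effectiveness(900, 500):.2f}")   # 1.80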

a. Control costs & heat with natural climate control


Cooling the IT hardware consumes around 40% of the total energy used in a data
center, so cooling cost is one of the major contributors to a large data center's total
power bill. The financial effect of cooling power consumption can be compared
across designs once the effects of chiller hours and economizer hours are separated.
Studies show that different environmental conditions, energy costs, and cooling
improvements have a significant impact on cooling efficiency and operating
expenses. Because the local climate is the most important factor influencing an
airside economizer, using an airside economizer in a cold climate results in much
lower energy consumption and operating expenses.
b. Heat Control
Perforated airflow panels are arranged in the floor along the sides of the racks to
allow cold air to enter. Ideally, the cold air flows from the inlet panels into the
narrow gaps between the servers, entering on one side and leaving on the other.
Cooler air on one side therefore creates a cold air corridor, while hot air on the other
creates a warm air corridor, dividing the cold and hot air pathways and preventing
hot air from one rack from reaching the inlet of the next. Because cold air flows
upward more easily than between the servers, which makes cooling less effective,
cold and warm air tend to mix above the racks. Horizontal flow across the servers
and airflow through the perforated panels create a larger pressure drop, resulting in
high flow resistance and limiting the heat transfer between the cooling air and the
servers.
c. Turn off idle IT equipment (EMS)
An environmental audit is essential for collecting data and assessing the suitability
of the environment in which the equipment is housed. Climate records provide a
baseline for the conditions in which the equipment currently operates. To ensure
optimal activity and performance, the data center should be audited on a regular
basis. In the long run, this inspection is beneficial since it detects minor and major
faults before they deteriorate and do considerable damage to the data center facility.
We should also turn off any inactive IT equipment, since it uses extra power,
degrades the computing infrastructure, generates more heat, and may be harmful.

d. Use IT devices and equipment with certified energy efficiency (Energy Star)
The data center can operate at a lower cost by using energy-efficient equipment.
Money spent up front on devices that consume less energy quickly pays for itself
over time, so the more expensive energy-efficient equipment is usually the cheaper
option in the long term. The same work can be done with equivalent equipment
while consuming less energy. This also helps with green computing by reducing
greenhouse gas emissions and wasted energy consumption, allowing for energy
conservation and sensible use.
e. Virtualize servers and storage
Server virtualization is the process of dividing a physical server into a number of
distinct virtual servers that can each be used for a business application, with each
virtual server running its own operating environment. Storage virtualization
similarly pools a range of storage devices of different sizes so that they appear as a
single unified storage resource, substantially improving efficiency and flexibility; it
abstracts the real data-storage resources to make them look like one. Both reduce
the amount of power required and the emission of various gases. This demonstrates
how virtualizing servers and storage, using IT assets that are certified for energy
efficiency, and turning off idle equipment can help control costs while also
promoting green computing in data centers.
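As a hypothetical illustration of the energy benefit of server virtualization, consider consolidating many lightly used physical servers onto fewer virtualization hosts; the figures below (100 servers at 400 W each, a 10:1 consolidation ratio) are assumptions for illustration only:

    def consolidation_energy_kw(physical_servers: int, watts_per_server: float,
                                consolidation_ratio: int) -> tuple[float, float]:
        """Estimate electrical load before and after consolidating servers onto
        virtualization hosts, assuming equal per-server power draw."""
        before_kw = physical_servers * watts_per_server / 1000
        hosts_after = -(-physical_servers // consolidation_ratio)  # ceiling division
        after_kw = hosts_after * watts_per_server / 1000
        return before_kw, after_kw

    before, after = consolidation_energy_kw(100, 400, consolidation_ratio=10)
    print(f"{before:.0f} kW -> {after:.0f} kW")   # 40 kW -> 4 kW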

8. Overview of suggested Data center


Figure 8: Data Center

Justification

As noted earlier in this report, the data center design suggested for SPIC adheres to
the TIA-942 standard. The five basic data center spaces are the entrance room, the
main distribution area, the horizontal distribution area, the zone distribution area,
and the equipment distribution area. As seen in the image above, a quick explanation
of each of the data center's spaces follows.

a) Entrance Room: It's where access provider equipment and demarcation


points are kept.
b) Main Distribution Area (MDA): It includes the core routers and switches of the
LAN and SAN infrastructure.
c) Horizontal Distribution Area (HDA): It comprises LAN, SAN, and KVM
switches, and it serves as the distribution point for horizontal cabling to the
equipment distribution area.
d) Zone Distribution Area (ZDA): It is a point in the horizontal cabling between the
HDA and the EDA that can be used as a consolidation point. It can house
mainframes and servers, among other things.
e) Equipment Distribution Area (EDA): It is made up of alternating rows of racks
and cabinets that form hot and cold aisles to manage heat (Accu-tech.com, n.d.).

Backbone cabling connects the Entrance Room, MDA, and HDA, while horizontal
wiring attaches HDA, ZDA, and EDA, as described earlier in the report.

f) Smoke detector: It is vital to install smoke detectors in a data center because they
decrease business interruption and property damage caused by fire.
g) Uninterruptible power supply (UPS): It is an important part of data center backup
power. The goal of a UPS is to keep the infrastructure running until reliable power
is restored or the generator kicks in.
h) UPS Battery: When the system detects a power outage, the UPS battery kicks
in.
i) Raised floor: Its purpose is to provide adequate room to accommodate the
amount of communications cable as well as the power needed to keep the data
center running. The floor is also required to transport cool air, particularly to
all of the cabinets.
j) Generators: During a power loss, generators will immediately take over.
k) CO2 Fire Extinguishers: In addition to the suggested clean agent system, CO2 fire
extinguishers are a viable remedy for localized fire hazards. CO2 leaves no residue,
is non-conductive, and does not pollute the environment; it suffocates the fire by
depriving it of oxygen, smothering and extinguishing it.
l) Power distribution unit: Its purpose is to distribute electricity within the SPIC
data center, fed from the main electrical supply. The data center will be redundant,
since every piece of equipment has a backup; if one fails, the other will take over.

9. Limitations
Most of the datacenter's components are well covered. However, our proposed
datacenter does not include a liquid cooling system, because of its high cost and
difficulty of implementation. Furthermore, a datacenter can be divided into separate
sections, such as racks for application servers and racks for storage servers, but our
data center is not divided into sections based on the type of server being used.

10. Conclusion
The suggested data center will be built in accordance with the TIA-942 standard.
Power supply, UPS, and cooling are all properly provisioned. The proposed scheme
is effective and meets the needs of the given firm. In addition, the equipment
employed is dependable and energy efficient. Because it utilizes safeguards such as
biometric access control, and all surveillance systems are monitored 24 hours a day,
seven days a week, the SPIC Data Center is a very secure facility. In low-light
situations, motion sensors and CCTV systems will detect movement. Only the areas
required for their work will be accessible to employees, and visitors' appointments
will be checked before they are escorted to the appropriate location.

11. Reference

Sunbird Inc. (2022). Sunbird. Retrieved from www.sunbirddcim:


https://www.sunbirddcim.com/
Techopedia. (2021). Power Distribution Unit. Retrieved from Techopedia:
https://www.techopedia.com/definition/1751/power-distribution-unit-pdu
testguy.net. (n.d.). Power Distribution Unit (PDU) Applications, Testing & Maintenance.
Retrieved from Testguy: https://testguy.net/content/295-Power-Distribution-Unit-(PDU)-
Applications-Testing-Maintenance
UPS System PLC. (n.d.). Data Centre UPS (Uninterruptible Power Supply). Retrieved from
UPS system PLC: https://www.upssystems.co.uk/data-centre-ups
Vertiv Group Corp. (n.d.). What is Rack PDU. Retrieved from Vertiv Group Corp:
https://www.vertiv.com/en-in/about/news-and-insights/articles/educational-articles/what-is-a-
rack-pdu
ycict.net. (n.d.). Cisco Firepower 9300. Retrieved from ycict Co Limited:
https://www.ycict.net/products/cisco-firepower-9300/

Balodis, R., & Opmane, I. (2012). History of data centre development. In Reflections on the
History of Computing (pp. 180–203). Springer.
Chan, N. W. (2015). Impacts of disasters and disaster risk management in Malaysia: The case
of floods. In Resilience and recovery in Asian disasters (pp. 239–265). Springer.
Draper III, M. L. (2017). Clean agent systems have changed over the years due to a variety of
reasons, including safety and product updates. Know the design parameters and codes and
standards that dictate the specification of clean agent systems. Consulting Specifying
Engineer, 54(4), 68–74.
Lebovic, M., & Eckholm, W. A. (2001). Clean Agents: The Next Generation of Fire
Protections. ASSE Professional Development Conference and Exposition.
Niemann, J., Brown, K., & Avelar, V. (2011). Impact of hot and cold aisle containment on
data center temperature and efficiency. Schneider Electric Data Center Science Center, White
Paper, 135, 1–14.

Santana, G. A. (2013). Data center virtualization fundamentals: Understanding techniques
and designs for highly efficient data centers with Cisco Nexus, UCS, MDS, and beyond. Cisco
Press.
Shrivastava, S. K., & Ibrahim, M. (2013). Benefit of cold aisle containment during cooling
failure. International Electronic Packaging Technical Conference and Exhibition, 55768,
V002T09A021.
Srinarayana, N., Fakhim, B., Behnia, M., & Armfield, S. (2012). A comparative study of
raised-floor and hard-floor configurations in an air-cooled data centre. 13th InterSociety
Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, 43–50.
