Data Center
1. INTRODUCTION
DCDA Consulting is proud to be one of the very few local companies that specialize in Data Center Design and Assessment.
HIGH AVAILABILITY NETWORK
The basis of reliability:
EXPERT: trained employees
PROCESSES: standardization, simplicity, documentation
TECHNOLOGY: data processing, communication, data storage
Services that meet TIA-942 standards and best practices
Extended Services
To provide you with comprehensive assistance, we follow through on our design and aid you with the implementation of your Data Center project.
Scope of Services
1. Floor Plan (Layout) of a Data Center (Architecture & Interior)
| No | Client | Service | Dimension | Tier |
| 1 | ICON+ PLN | Design | 800 m2 | Tier 3 to 4 |
| 2 | Sigma | Design | 800 m2 | Tier 3 |
| 3 | Bentoel Malang | Design | 100 m2 | Tier 2 |
| 4 | HM Sampoerna | Design | 100 m2 | Tier 2 |
| 5 | Hortus Securitas | Server Room Design | 30 m2 | Tier 2 |
| 6 | Lintas Arta JTL | Design | 100 m2 | Tier 2 |
| 7 | BII Data Center | Assessment Services (Audit) | 500 m2 | Tier 1 |
| 8 | Depdiknas Puspitek | Design | 120 m2 | Tier 2 |
| 9 | Medco HP | Design | 135 m2 | Tier 2 |
| 10 | Teradata, Bandung | Design | 100 m2 | Tier 2 |
| 11 | Bukopin DC Renovation | Project Management | 200 m2 | Tier 2 |
| 12 | ICON+ PLN | Tender Doc | 800 m2 | Tier 3 to 4 |
| 13 | Barclay Bank | DC & SCS Project Management | 150 m2 & 100 m2 | Tier 2 |
| 14 | Bakri Bumi Resources | Design | 80 m2 | Tier 2 |
| 15 | ICON+ PLN | Project Management | 800 m2 | Tier 3 to 4 |
| 16 | PT. Dizamatra | Business Plan & DC Design | 500 m2 | Tier 2 |
| 17 | Bank Indonesia | Design & Modelling | 1451 m2 | Tier 3 |
| 18 | PT. POS Indonesia (Persero) | Design, Assessment | 300 m2 & 200 m2 | Tier 2 |
| 19 | - | Project Management | 80 m2 | Tier 2 |
| 20 | - | Design Server Room | 15 m2 | Tier 1 |
| 21 | - | Project Management | 576 m2 | Tier 2 |
| 22 | - | Data Center Design | 1000 m2 | Tier 2 |
| 23 | - | Data Center Assessment and Design | 1070 m2 | Tier 3 |
I. Our Approach
TIA-942 Telecommunications Infrastructure Standards for Data Centers
Data Center Design & Assessment Reference

Data Center
Example data centers: SECURE-24, PARIS TELECITY
Most mobile-phone facilities, features, and applications use data center services (from SMS and chat to social networking, etc.).
Chart: years for each medium to reach a mass user base (user counts as of 2009) - Radio: 38 years; Television: 13 years; Internet: 4 years; iPod: 3 years; Facebook: 2 years.
The millions of Facebook users are supported by servers and their supporting infrastructure. Every one of the 30+ billion Google searches performed every month runs on servers and their supporting infrastructure. The billions of text messages sent every day are likewise supported by servers and their supporting infrastructure.

Data Center
"Data Center" is the description given when the entire site and building are utilized exclusively as a data center site. Typical spaces include:
Computer Room(s)
Telecommunication Room(s)
Entrance Room(s)
Mechanical Room(s)
Electrical Room(s)
Network Operations Center
Staging Area, Storage, Loading Dock
Common Areas
General Office Space
TIA-942 Telecommunications Infrastructure Standards for Data Centers
Intended for use by data center designers early in the building development process, the standard covers:
Tiered reliability
Site space and layout
Cabling management & infrastructure
Environmental considerations
1. TIA-942 Data Center Tier classifications
What do they mean? A simple explanation: guidelines for building a DC based on the level of performance desired or dictated by the business requirement.

Tier I: single path for power and cooling distribution, NO redundant components; 99.671% availability (max cumulative annual downtime is 28.8 hrs).

Tier II: single path for power and cooling distribution, WITH redundant components; 99.749% availability (max cumulative annual downtime is 22.0 hrs).

Tier III: composed of multiple active power and cooling distribution paths, but with only one path active; has redundant components and is concurrently maintainable; 99.982% availability (max cumulative annual downtime is 1.6 hrs).

Tier IV: composed of multiple active power and cooling distribution paths; has redundant components and is fault tolerant; 99.995% availability (max cumulative annual downtime is 0.4 hrs).

Source: Uptime Institute
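As a sketch of how an availability percentage translates into a downtime budget, the short Python snippet below (illustrative, not part of the standard) converts each tier's availability into maximum annual downtime:

```python
# Convert a tier's availability percentage into maximum annual downtime.
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def max_annual_downtime_hours(availability_pct: float) -> float:
    """Downtime budget implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Uptime Institute availability figures per tier
tiers = {"Tier I": 99.671, "Tier II": 99.749,
         "Tier III": 99.982, "Tier IV": 99.995}

for name, availability in tiers.items():
    print(f"{name}: {max_annual_downtime_hours(availability):.1f} hours/year")
```

This reproduces the 28.8 / 22.0 / 1.6 / 0.4 hour figures quoted in the tier comparison table.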
Tier comparison (Uptime Institute):

| Attribute | TIER I | TIER II | TIER III | TIER IV |
| Building type | Tenant | Tenant | Stand-alone | Stand-alone |
| Staffing | None | 1 shift | 1+ shifts | "24 by forever" |
| Power/cooling delivery paths | Only 1 | Only 1 | 1 active, 1 passive | 2 active |
| Redundant components | N | N+1 | N+1 or N+2 | 2 (N+1) or S+S |
| Support space to raised floor | 20% | 30% | 80%-90% | 100% |
| Initial watts/ft2 | 20-30 | 40-50 | 40-60 | 50-80 |
| Ultimate watts/ft2 | 20-30 | 40-50 | 100-150 | 150+ |
| Raised floor height | 12" / 30 cm | 18" / 45 cm | 30"-36" / 75-90 cm | 30"-36" / 75-90 cm |
| Floor loading, lbs/ft2 (kg/m2) | 85 (=415) | 100 (=488) | 150 (=732) | 150+ (>=732) |
| Utility voltage | 208, 480 | 208, 480 | 12-15 kV | 12-15 kV |
| Months to implement | 3 | 3 to 6 | 15 to 20 | 15 to 20 |
| Year first deployed | 1965 | 1970 | 1985 | 1995 |
| Construction cost, $/ft2 | $450 | $600 | $900 | $1,100+ |
| Annual IT downtime | 28.8 hrs | 22.0 hrs | 1.6 hrs | 0.4 hrs |
| Site availability | 99.671% | 99.749% | 99.982% | 99.995% |
TIA-942 requirements by tier:

| Attribute | Tier-I | Tier-II | Tier-III | Tier-IV |
ARCHITECTURAL
| Multi-tenant occupancy within building | no restriction | - | - | allowed only if occupancies are non-hazardous |
| Ceiling height | - | - | - | - |
| Operations Center | no requirement | no requirement | yes | yes |
STRUCTURAL
| Floor loading, pounds/ft2 (kg/sqm) | 150 (=734) | 175 (=857) | 250 (=1225) | 250 (=1225) |
TIA-942 Telecommunications Infrastructure Standards for Data Centers
Intended for use by data center designers early in the building development process, the standard covers:
Tiered reliability
Site space and layout
Cabling management & infrastructure
Environmental considerations

The standard
ER (entrance room): the location for access provider equipment and demarcation points.
Backbone cabling: provides connections between the ER, MDA, and HDAs.
Horizontal cabling: provides connections between the HDA, ZDA, and EDA.
Diagram: TIA-942 star topology. The Entrance Room (carrier equipment & demarcation) links the access providers (ISP/co-lo) to the Main Distribution Area (router, backbone LAN/SAN switches, PBX, M13 muxes) inside the Computer Room. Backbone cabling runs from the MDA to several Horizontal Distribution Areas, and horizontal cabling runs from each HDA to the Equipment Distribution Areas (racks/cabinets of servers, etc.). A Telecommunications Room (office & operation center LAN switches) serves the offices, operation center, and support rooms.
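The hierarchy in this topology can be sketched as a simple adjacency map. The following Python is purely illustrative; the area names and counts are hypothetical examples, not prescribed by the standard:

```python
# Illustrative sketch of the TIA-942 star topology as an adjacency map.
# Area names and counts are hypothetical examples, not from the standard.
topology = {
    "Entrance Room": ["Main Distribution Area"],
    "Main Distribution Area": [
        "Telecommunications Room",
        "HDA-1", "HDA-2",          # backbone cabling
    ],
    "HDA-1": ["EDA-1", "EDA-2"],   # horizontal cabling
    "HDA-2": ["EDA-3", "EDA-4"],
}

def downstream(area, topo):
    """All areas reachable from `area` following the cabling hierarchy."""
    reached = []
    for child in topo.get(area, []):
        reached.append(child)
        reached.extend(downstream(child, topo))
    return reached

print(downstream("Main Distribution Area", topology))
```

Because the topology is a star rooted at the MDA, every EDA is reachable from the Entrance Room through exactly one HDA.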
Diagram: most common DC topology (consolidated main cross-connect). The Entrance Room (carrier equipment & demarcation) connects the access providers to the Main Distribution Area (router, backbone LAN/SAN switches, PBX, M13 muxes) in the Computer Room; horizontal cabling runs from the MDA through a Zone Distribution Area to the Equipment Distribution Areas, and the MDA also serves the offices, operation center, and support rooms.
Telecommunications cable tray
TIA-942 Telecommunications Infrastructure Standards for Data Centers
Intended for use by data center designers early in the building development process, the standard covers:
Tiered reliability
Site space and layout
Cabling management & infrastructure
Environmental considerations
2. Interconnection
3. Cross-connection

Diagram: interconnection - patch cords run from the servers rack (EDA) into the switches rack (HDA or MDA): a patch cord into the server and a patch cord into the switch.
Cabling Infrastructure
3. Cross-connection

Diagram: cross-connection - permanent cable (backbone and horizontal cabling) runs from the core switches to the EDA (server room) or SAN storage.
Diagram: the logical switch layers (core, aggregation, access, storage) mapped onto the TIA-942 areas. Core switches sit in the Main Distribution Area; access switches serve the Zone and Equipment Distribution Areas in the Computer Room; the Entrance Room connects the access provider.
Core Switch
A core switch is located in the core of the network and serves to interconnect edge switches. The core layer routes traffic from the outside world to the distribution layer and vice versa. Data arriving as ATM, SONET, and/or DS1/DS3 is converted into Ethernet to enter the Data Center network, and converted back from Ethernet to the carrier protocol before leaving the data center.

Access Switch
An access switch (or edge switch) is the first point of user access (and the final point of exit) for a network. An edge switch allows the servers to connect to the network. Multimode optical fiber is the typical media that connects the edge devices to the servers within the data center. Edge switches are interconnected by core switches.

Distribution Switch
Distribution switches are placed between the core and edge devices. Adding a third layer of switching adds flexibility to the solution: firewalls, load balancing, content switching, and subnet monitoring take place here, aggregating the VLANs below. Multimode optical fiber is the typical media running from the distribution layer to the core and edge devices.

Not every data center will have all three layers of switching. In smaller data centers the core and distribution layers are likely to be one and the same.
A good data center layout adapts flexibly to new needs and enables a high degree of documentation and manageability at all times.
ZDA placement options: top of rack, middle of row, or end of row; in one rack or in two racks.
Sample implementation of logical architectures to TIA-942 using a ZDA in the middle of the row.

LAN switch roles (Cisco Nexus family):
Nexus 7000: Data Center Core/Aggregation
Nexus 5000: Unified Server Access
Nexus 2000: Fabric Extender / Remote Module
Nexus 1000V: VM-Aware Switching

Two-row layout (traditional early TIA-942 implementation).
Sample: 50% filled under-raised-floor tray.
Benefits:
Alleviates congestion beneath the access floor
Creation of segregated pathways
Minimizes obstruction to cold air
Concerns:
Benefits:
Pedestals create infrastructure pathways
Utilization of real estate
Cabling is hidden
Concerns:
Accessibility to cables
UTP main/horizontal cross-connect
TIA-942 Telecommunications Infrastructure Standards for Data Centers
Intended for use by data center designers early in the building development process, the standard covers:
Tiered reliability
Site space and layout
Cabling management & infrastructure
Environmental considerations
Power requirements:
Power requirements are based on the desired reliability tier and may include two or more power feeds from the utility, UPS, multiple circuits, and an on-site generator.
Power needs cover what is required for all existing devices plus what is anticipated in the future. Power requirements must also include all support equipment such as UPS, generators, conditioning electronics, HVAC, lighting, etc.

Cooling:
The standard recommends the use of adequate cooling equipment as well as a raised-floor system for more flexible cooling.
Additionally, the standard states that cabinets and racks should be arranged in an alternating pattern to create hot and cold aisles.
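As an illustration of that sizing guidance, a rough estimate might be sketched as follows. All figures and the support-overhead factor here are hypothetical assumptions for illustration, not values from the standard:

```python
def required_power_kw(existing_it_kw, anticipated_it_kw, support_factor):
    """Rough facility power estimate: existing IT load plus anticipated
    future IT load, plus support equipment (UPS, generators, conditioning
    electronics, HVAC, lighting) expressed as a fraction of the IT load.
    The support_factor is an assumption the designer must justify."""
    it_load = existing_it_kw + anticipated_it_kw
    return it_load * (1 + support_factor)

# Hypothetical example: 90 kW installed, 30 kW anticipated growth,
# support load roughly equal to the IT load (support_factor = 1.0).
print(required_power_kw(90, 30, support_factor=1.0))  # 240.0
```

In practice the support overhead is measured, not assumed; the point here is only that anticipated growth and support equipment both belong in the total.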
Development of IT Equipment: TODAY
Blade servers in 42U racks: 7U blade chassis, each having 14 servers.
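The rack density implied by those numbers can be checked with a quick calculation (assuming, as an idealization, that the full 42U is available for blade chassis):

```python
# Idealized density check: how many servers fit in one rack of blades?
RACK_UNITS = 42            # standard full-height rack
CHASSIS_UNITS = 7          # one blade chassis
SERVERS_PER_CHASSIS = 14

chassis_per_rack = RACK_UNITS // CHASSIS_UNITS             # 6 chassis
servers_per_rack = chassis_per_rack * SERVERS_PER_CHASSIS  # 84 servers

print(f"{chassis_per_rack} chassis, {servers_per_rack} servers per rack")
```

Real racks reserve units for patching and power distribution, so the practical figure is lower.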
Data Center
Data Center: 1960s
Prior to 1960 (in 1945), the Army developed a huge machine called ENIAC (Electronic Numerical Integrator and Computer):
Weighed 30 tons
Took up 1,800 sq ft of floor space
Required 6 full-time technicians to keep it running
Punch cards
Early computers often used punch cards for input of both programs and data. Punch cards were in common use until the mid-1970s. The use of punch cards predates computers: they were used as early as 1725 in the textile industry (for controlling mechanized textile looms).
From Pingdom.
Above left: the magnetic drum memory of the UNIVAC computer. Above right: a 16-inch-long drum from the IBM 650 computer; it had 40 tracks, 10 kB of storage space, and spun at 12,500 revolutions per minute.
Data Center: 1970s
The first hard drive to have more than 1 GB in capacity was the
IBM 3380 in 1980 (it could store 2.52 GB). It was the size of a
refrigerator, weighed 550 pounds (250 kg), and the price when it
was introduced ranged from $81,000 to $142,400.
Data Center: 1990s
Cloud Provider
Explaining cloud computing: video "Cloud Computing Explained" by Christopher Barrat.
Data Center
Background
Data center spaces can consume up to
Green DC lifecycle: Green Design; Green Procurement & Clean; Green Operation & Sustainability; Green Disposal (spanning process efficiency, energy usage, and material usage).
One very important aspect for the measurement of a Green DC:
Detailed Calculation (PUE)
Total IT load: 94 kW
Cooling infrastructure: 80 kW + 24 kW
Lighting load: 2 kW
Total facility load: 200 kW
PUE = total facility power / IT power = 200 kW / 94 kW = 2.13

PUE and level of efficiency:
3.0 Very Inefficient
2.5 Inefficient
2.0 Average
1.5 Efficient
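The calculation above can be reproduced directly. This is a minimal sketch: the kW figures come from the worked example, and the labels from the efficiency scale; the threshold boundaries are an assumption about how that scale is read:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

def efficiency_label(p: float) -> str:
    """Map a PUE value onto the efficiency scale quoted above
    (band boundaries are an interpretation, not from the source)."""
    if p >= 3.0:
        return "Very Inefficient"
    if p >= 2.5:
        return "Inefficient"
    if p >= 2.0:
        return "Average"
    return "Efficient"

# Figures from the worked example: 94 kW IT + (80 + 24) kW cooling
# + 2 kW lighting = 200 kW total facility load.
value = pue(200, 94)
print(round(value, 2), efficiency_label(value))  # 2.13 Average
```

A lower PUE means less overhead per watt delivered to IT equipment; reducing the cooling load is usually the largest lever.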
Key elements in a green data center:
1. Design
2. Selection and procurement
3. Operation & management

IT system energy efficiency and environmental conditions are presented first because measures taken in these areas have a cascading effect of secondary energy savings for the mechanical and electrical systems.