MONOGRAPH
ON
Design and implementation of Kabul University Data Center
طرح و تطبیق دیتا سنتر پوهنتون کابل
BY
Wasima Habib
17-RT200-332
In partial fulfillment of the requirements for the award of the degree of
BACHELOR OF INFORMATION TECHNOLOGY
BIT
TO
RANA University
Baraki Square, Kabul–Afghanistan
Islamic Republic of Afghanistan جمهوری اسلامی افغانستان
Ministry of Higher Education د لوړو زده کړو وزارت
Directorate of Private Higher Education د لوړو زده کړو د خصوصي پوهنتونو ریاست
RANA University رڼا پوهنتون
Directorate of Computer Science Faculty
د کمپیوټر ساینس پوهنځي ریاست
Information Technology Department
د معلوماتی ټکنالوژی دیپارتمنت آمریت
MONOGRAPH
ON
Design and implementation of Kabul University Data
Center
طرح و تطبیق دیتا سنتر پوهنتون کابل
VC academic: _________________________________
Name & signature Mr.
STUDENT PARTICULARS
Name: Wasima Habib
Registration No: 17-RT200-332
Father’s Name: Habibullah
Project Title: Design and implementation of Kabul University Data Center
Assessment Criteria
Member 1
Problem Definition: Relevant Yes☐ No☐  Clearly phrased Yes☐ No☐  Testable Yes☐ No☐
Member 2
Problem Definition: Relevant Yes☐ No☐  Clearly phrased Yes☐ No☐  Testable Yes☐ No☐
Member 3
Problem Definition: Relevant Yes☐ No☐  Clearly phrased Yes☐ No☐  Testable Yes☐ No☐
Member 4
Problem Definition: Relevant Yes☐ No☐  Clearly phrased Yes☐ No☐  Testable Yes☐ No☐
DECLARATION
I hereby declare that the monograph “Design and implementation of Kabul University Data
Center”, submitted in partial fulfillment of the requirements for the degree of Bachelor of
Information Technology (BIT) at RANA University, is my original work and has not been
submitted for any other degree, diploma, fellowship, or similar title or prize.
Name:
Signature: __________________
Date: ______________________
FACULTY CERTIFICATE
Batch: 2016-2021
Register Number: 17-RT200-332
Serial Number:
This is to certify that the project / monograph titled “Design and implementation of Kabul
University Data Center”, submitted in partial fulfillment of the requirements for the degree of
Bachelor of Information Technology at RANA University, Baraki Square, Kabul, Afghanistan,
was carried out by Wasima Habib under my direct supervision and guidance; that no part of
this report has been submitted for the award of any other degree, diploma, fellowship, or
similar title or prize; and that the work has not been published in any scientific or popular
journal or magazine.
Department Stamp
ACKNOWLEDGEMENT
All praises and thanks to Almighty Allah, the source of knowledge and wisdom for mankind,
who conferred upon me the power of mind and the capability to make this contribution to
existing knowledge. All respect and love to him who is an everlasting model of guidance for
humanity as a whole.
I would like to express the deepest appreciation to the committee chair, H.E. Dr. Shafiullah
Naimi, the Chancellor of RANA University, who encouraged me in writing my monograph on
“Design and implementation of Kabul University Data Center”; with the attitude and
substance of a genius, he continually and convincingly conveyed a spirit of adventure.
I wish to thank my project supervisor, Mr. Azizullah Shirzad, whose guidance made my
project possible. His encouragement and wisdom made my efforts worthwhile. My heartfelt
gratitude also goes to the Dean of the CS Faculty, Mr. Abdul Ghafar Omerkhil, for his insight
throughout the completion of my project.
It is with great honor that I also thank my friends, whose names I have not mentioned, who
nevertheless supported and helped me in one way or another.
Finally, I thank you, the reader, for taking the time to read my thesis.
Signature
Wasima Habib
17-RT200-332
BIT (Bachelor of Information Technology)
Table of Contents
Chapter 1 | Introduction
1.1 Overview
Today the data center is the heart of most companies’ operations. The importance of
effectively managing increasingly large amounts of data is prompting many companies to
significantly upgrade their current operations or to build brand-new data centers from
greenfield. At the same time, economic conditions are forcing companies to focus on
efficiency and simplification. As a result, data center optimization and/or consolidation may
be on your agenda.
Kabul University was founded in 1931 during the government of Mohammed Nadir Shah and
then Prime Minister Mohammad Hashim Khan. Approximately 22,000 students attend Kabul
University. Of these, nearly 43% are female. The mission of Kabul University is to mature
and prosper as an internationally recognized institution of learning and research, a
community of stakeholders committed to shared governance, and a center of innovative
thought and practice. A data center design for Kabul University would help IT manage
everything centrally, avoid losing data, and reduce paperwork. It would gather all employee
records onto a file server and, through backups, keep the data more secure.
Data centers are facilities that house servers and related equipment and systems. They are
distinct from data repositories, which collect various forms of research data, although some
data repositories are occasionally called data centers. Many colleges and universities have
data centers or server rooms distributed across one or more campuses, as we would like
Kabul University to have as well. This monograph reports on the experience of consolidating
all application and storage servers into a new university data center. I discuss the
advantages of consolidation, the planning process for the data center design and
implementation, and lessons learned from testing the design in a virtual environment.
1.2 Objectives
Several factors are currently converging to make this an opportune time for Kabul University
to review its model for housing, securing, and managing its computing servers and
equipment. They are:
1. The commissioning of the Information Technology Facility which provides highly efficient
data center space previously not available.
2. The University’s “2021 Vision” sustainability targets include a goal to achieve net-negative
energy growth from 2010 to 2021; a solution that reduces IT energy use supports this goal.
3. Technologies such as virtualization and remote server management have matured and
can be more widely deployed.
4. University efficiency initiatives over several years have put continuing pressure on IT staff
resources, so changes that free up IT staff to work on higher-priority IT needs are
recognized as necessary.
1.3 Benefits
There are many advantages to a centralized data center. Many of these advantages also
apply to other organizations, but for the purposes of this paper we address them in the
context of the university’s experience.
1.3.1. Security
With server rooms scattered all over the university, security issues can be a concern. Now if
the servers are housed in one location, the university can provide a highly secure
environment in a more cost-effective way. The data center has card-swipe access to the
building and biometric access to the data center itself. There are also cameras installed in
the building as a further security measure.
1.4.2 VMware
Used to install the Windows Server components and to test them against client PCs in a
virtual environment.
1.5.3. Security
Since it is rather easy to gain access to programs and other types of data, security is a big
concern in a LAN. The sole responsibility for stopping unauthorized access lies with the LAN
administrators, who must make sure that the centralized data is properly secured by
implementing a correct set of rules and privacy policies on the server.
1.5.4. Maintenance
A LAN often faces hardware problems and system failures, so it requires a dedicated
administrator to look after these issues. The administrator needs to be well versed in
networking, and the role is typically a full-time job.
1. Wireless networking
Help employees be more productive and collaborate better by enabling them to work
wirelessly from anywhere in the office.
2. Voice
3. Video
Enable more cost-effective surveillance and security systems or support on-demand and live
streaming media.
4. Security
Reduce business risks associated with viruses and other security threats.
6. Modular architecture
With a wide variety of available LAN and WAN options, you can upgrade your network
interfaces to accommodate future technologies. The 2800 Series also offers several types of
slots that make it easy to add connectivity and services in the future on an "integrate-as-you-
grow" basis.
7. Flexibility
Connectivity via DSL, cable modem, T1, or 3G wireless maximizes your options for both
primary and backup connections.
2.2.4. ADDC
A domain controller is a server that responds to authentication requests and verifies users on
computer networks. Domains are a hierarchical way of organizing users and computers that
work together on the same network. The domain controller keeps all of that data organized
and secured.
The domain controller (DC) is the box that holds the keys to the kingdom: Active Directory
(AD). While attackers have all sorts of tricks to gain elevated access on networks, including
attacking the DC itself, you can not only protect your DCs from attackers but actually use
DCs to detect cyberattacks in progress.
The primary responsibility of the DC is to authenticate and validate user access on the
network. When users log into their domain, the DC checks their username, password, and
other credentials to either allow or deny access for that user.
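The allow-or-deny check described above can be sketched in Python. This is a toy model under loud assumptions: the in-memory user store, the user names, and the `authenticate` helper are all hypothetical; a real DC authenticates against the Active Directory database using protocols such as Kerberos or NTLM, not a dictionary of hashes.

```python
import hashlib
import hmac
import os

# Hypothetical in-memory "directory"; a real DC stores credentials in the
# Active Directory database, not a Python dict.
_directory = {}

def add_user(username: str, password: str) -> None:
    """Store a salted PBKDF2 hash of the password (never the password itself)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _directory[username] = (salt, digest)

def authenticate(username: str, password: str) -> bool:
    """Allow or deny access, as the DC does when a user logs into the domain."""
    if username not in _directory:
        return False
    salt, digest = _directory[username]
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, attempt)

add_user("wasima", "S3cret!Pass")
```

A wrong password or an unknown username both fail the same way, which is the behavior the paragraph describes: credentials either check out or access is denied.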
Active Directory is a type of domain, and a domain controller is an important server on that
domain. Kind of like how there are many types of cars, and every car needs an engine to
operate. Every domain has a domain controller, but not every domain is Active Directory.
In general, any business, no matter its size, that saves customer data on its network needs a
domain controller to improve the security of that network. There can be exceptions: some
businesses, for instance, only use cloud-based CRM and payment solutions; in those cases,
the cloud service secures and protects the customer data.
1. NTP
To set the date and time for all servers and clients from a central point. The Network Time
Protocol (NTP) is a networking protocol for clock synchronization between computer
systems over packet-switched, variable-latency data networks. In operation since before
1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David
L. Mills of the University of Delaware.
2. WSUS
To push new updates to all the users
3. File Server
Provides a centralized resource point and safe document storage
4. Shadow Copy
To keep backup copies of files and guard against accidental deletion
5. FSRM
To control what should be in the file server and what should not
6. Firewall
May help to secure inbound and outbound file transfers
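As an aside on item 1 above: NTP timestamps count seconds from 1 January 1900, while Unix time counts from 1 January 1970, so every NTP client performs a fixed-offset conversion. A minimal sketch of that arithmetic (the function names are illustrative, not from any particular NTP library):

```python
# The fixed offset between the NTP epoch (1900-01-01) and the
# Unix epoch (1970-01-01) is 70 years = 2,208,988,800 seconds.
NTP_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_UNIX_OFFSET

def unix_to_ntp(unix_seconds: int) -> int:
    """Convert Unix time to an NTP timestamp (seconds since 1900)."""
    return unix_seconds + NTP_UNIX_OFFSET
```

The two conversions are exact inverses, which is why servers and clients can agree on a single central clock regardless of which epoch their operating systems use internally.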
Chapter 3 | Requirements Gathering
3.1. Requirements Gathering
3.1.1. Hardware requirements
The recommended minimum system requirements listed here should allow even someone
new to server installation to set up a usable system with enough room to be comfortable.
• PowerEdge Rack Servers
• Power Distributor
• Firewall (Sophos XG Firewall)
• Cisco Router (Cisco 2800 ISR router)
• Cisco Switch (Cisco Catalyst 9200 Series Switch)
• Rack 42U
• AC
• Fire Alarm (Smoke Detector)
• UPS (Battery)
• RJ45 Connectors
• Cables
• Security Camera
3.2.2. Security
Data center security refers broadly to the array of technologies and practices used to protect
a facility’s physical infrastructure and network systems from external and internal threats. On
a very basic level, data center security is all about restricting and managing access. Only
authorized personnel should be able to access critical infrastructure and IT systems. Data
center security includes both the “things” put in place to accomplish that goal (such as
locked access points, surveillance systems, or security personnel) and the “controls” that
manage them (such as security policies, access lists, or rules for handling data).
3.2.3. Important Data Center Security Standards
Here are a few critical data center physical security standards and technologies every
colocation customer should evaluate when they’re looking to partner with a facility.
Access Lists
While it may seem like a simple thing, one of the most important elements of data center
security is ensuring that only authorized persons are permitted to access key assets. When
a company colocates with a data center, not every employee there needs to have access to
the servers. This is a critical component of the “Zero Trust” security philosophy. By
maintaining up-to-date access lists, a facility can help their customers prevent theft and
guard against human error by people who aren’t authorized to handle IT assets in the first
place.
Video Surveillance
Another longtime staple of physical security technologies, video surveillance is still incredibly
valuable for data centers. Closed-circuit television cameras (CCTVs) with full pan, tilt, and
zoom features should monitor exterior access points and all interior doors as well as the data
floor itself. Camera footage should be backed up digitally and archived offsite to guard
against unauthorized tampering.
24x7x365 Security
Security checkpoints, cameras, and alarms won’t amount to much without security staff on-
site to respond to potential threats and unauthorized activity. Routine patrols throughout
every data center zone can provide a visible reminder that security personnel are on the
lookout and can react quickly to deal with any potential issue.
Background Checks
Between security staff and remote hands technicians, data centers have a lot of people
moving throughout a secure facility. Conducting thorough background checks on staff, as
well as implementing vetting requirements for all third-party contractors, can provide
assurances to their customers that these people can be trusted to manage and protect their
valuable IT assets.
Exit Procedures
When someone who has the authorization to access sensitive zones and assets within the
data center leaves their position, their privileges don’t go with them. Whether it’s data center
personnel or customer employees with access rights who are leaving the organization,
facilities should have systems and procedures in place to remove those privileges. This
could mean updating access lists, collecting keys, or deleting biometric data from the
facility’s system to make sure they won’t be able to pass through security in the future.
Multi-Factor Authentication
Every data center should follow “Zero Trust” logical security procedures that incorporate
multi-factor authentication. Every access point should require two or more forms of
identification or authorization to ensure that no one will simply be “waved through” by
security if they’re missing one form of authentication.
Biometric Technology
One of the latest innovations in security standards, biometric technology identifies people
through a unique physical characteristic, such as a thumbprint, retina shape, or voice
pattern. There are a variety of ways to incorporate biometric technology into access
protocols, and it is especially valuable as one component of two-factor authentication.
As data center security technology continues to evolve, new physical security measures will
surely be incorporated as best practices. Data center physical security standards may not be
evident at first glance because many of them are intended to remain out of sight. Even so,
data center customers can review security certifications and request a more detailed
overview of the physical and logical security measures a facility has put in place to ensure
that data remains well-protected.
Chapter 4 | System Design
Click the second line item for the GUI. The default install is now Server Core. Then
click Next.
Read License Agreement, Turn on Checkbox “I accept the license terms,” and then
click Next.
Figure 4. 6: Terms and Conditions
[Optional:] Add a drive using Native Boot to VHD: press SHIFT+F10 to open a command
prompt window and find the installation drive (dir c:, dir d:, dir e:, etc.). Run diskpart to open
the Disk Partition Utility, then run the following (the create vdisk command must be entered
as a single line):
create vdisk file=e:\BootDemo.vhd type=expandable maximum=40000
attach vdisk
exit
Then click Refresh.
Figure 4. 9: Refresh
It will then start copying files. This will take a while (could be 20 minutes or so, depending on
hardware performance). It will reboot a couple of times automatically. After the first reboot, it
will no longer be running off of the DVD.
In the Password box, enter a new password for this computer. It must meet complexity
requirements. Re-enter the password in the second password box, and then click Finish.
Pressing Windows Key on the keyboard will bring up the start screen (formerly known as
Start Menu). If you Right-Click on Computer, you will see the new right-click menu is on the
bottom of the screen instead of in a dropdown box. Select Properties.
You will see that the System Properties screen looks almost identical to prior versions of
windows. We can now change the computer name by clicking on Change Settings.
Figure 4. 16: Changing Computer Name
Type the new computer name you would like to use and click OK.
Click the Windows button and type ‘add feature’ to start the feature installation:
Figure 4. 20: Windows Features
This opens up the ‘Add roles and features’ wizard in Server Manager. Click Next a couple of
times until you reach the features section:
In the features section expand ‘Remote Server Administration Tools’ all the way down to the
‘AD DS Snap-Ins’ component. Select it and click Next:
Figure 4. 22: ADDS Installation
In the Add Roles and Features Wizard dialog that opens, proceed to the Features tab in the
left pane, and then select Group Policy Management.
4- DNS Configuration
First, you’ll need to start the Configure Your Server Wizard. To do so, click Start -> All
Programs -> Administrative Tools, and then click Configure Your Server Wizard.
On the Server Role page, click DNS server, and then click Next.
On the Summary of Selections page, view and confirm the options that you have selected.
The following items should appear on this page:
• Install DNS
• Run the Configure a DNS Wizard to configure DNS
If the Summary of Selections page lists these two items, click Next.
If the Summary of Selections page does not list these two items, click Back to return to the
Server Role page, click DNS, and then click Next to load the page again.
When the Configure Your Server Wizard installs the DNS service, it first determines whether
the IP address for this server is static or is configured automatically. If your server is
currently configured to obtain its IP address automatically, the Configuring Components
page of the Windows Components Wizard will prompt you to configure the server with a
static IP address. To do so perform the following actions:
In the Local Area Connection Properties dialog box, click Internet Protocol (TCP/IP), and
then click Properties.
Next, click Use the following IP address, and then type the static IP address, subnet mask,
and default gateway for this server.
In Alternate DNS, either type the IP address of another internal DNS server, or leave this
box blank.
When you’ve finished setting up the static IP addresses for your DNS, click OK, and then
click Close.
After you Close the Windows Components Wizard, the Configure a DNS Server Wizard will
start. In the wizard, follow these steps:
On the Select Configuration Action page, select the Create a forward lookup zone check
box, and then click Next.
To specify that this DNS hosts a zone containing DNS resource records for your network
resources, on the Primary Server Location page, click This server maintains the zone, and
then click Next.
On the Zone Name page, in Zone name, specify the name of the DNS zone for your
network, and then click Next. The name of the zone is the same as the name of the DNS
domain for your small organization or branch office.
On the Dynamic Update page, click Allow both nonsecure and secure dynamic updates,
and then click Next. This makes sure that the DNS resource records for the resources in
your network update automatically.
On the Forwarders page, click Yes, it should forward queries to DNS servers with the
following IP addresses, and then click Next. When you select this configuration, you forward
all DNS queries for DNS names outside your network to a DNS at either your ISP or central
office. Type one or more IP addresses that either your ISP or central office DNS servers use.
On the Completing the Configure a DNS Wizard page of the Configure a DNS Wizard, you
can click Back to change any of your selected settings. Once you’re happy with your
selections, click Finish to apply them.
After finishing the Configure a DNS Wizard, the Configure Your Server Wizard displays the
This Server is Now a DNS Server page. To review the changes made to your server or to
make sure that a new role was installed successfully, click on the Configure Your Server log.
The Configure Your Server Wizard log is located at:
Forward lookup zones are the specific zones which resolve domain names into IP
addresses. If you’ve followed the configuration instructions above, your forward lookup zone
should already be set up. If for some reason you need to set up a forward lookup zone after
configuring your DNS, you can follow these instructions:
First, open up DNS by navigating to the Start menu -> Administrative Tools -> DNS.
Expand the server and right click Forward Lookup Zones and click New Zone.
Click Next and select the type of zone you want to create.
Select the method to replicate zone data throughout the network and click Next.
Select the type of updates you want to allow and click Next.
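Conceptually, a forward lookup zone is just a table mapping names to addresses. Windows DNS manages this through the wizard steps above, but the same idea in BIND-style zone-file notation looks like the following sketch (the host names and addresses are illustrative assumptions, reusing the comsys.local domain that appears later in this report):

```
$TTL 3600
$ORIGIN comsys.local.
@       IN  SOA   dc01.comsys.local. admin.comsys.local. (
                  2021010101 ; serial
                  7200       ; refresh
                  3600       ; retry
                  1209600    ; expire
                  3600 )     ; minimum TTL
@       IN  NS    dc01.comsys.local.
dc01    IN  A     192.168.1.10
files   IN  A     192.168.1.20
```

Each A record is one row of the name-to-IP table that the forward lookup zone maintains; dynamic updates simply add and refresh such rows automatically.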
If you need to change the DNS server for different network interfaces, you can do so using
the following:
In Network Connections, right-click the local area connection, and then click Properties.
In Local Area Connection Properties, select Internet Protocol (TCP/IP), and then click
Properties.
Click Use the following DNS server addresses, and in Preferred DNS server and Alternate
DNS server, type the IP addresses of the preferred and alternate DNS servers.
A DNS resolver cache is a temporary database created by a server to store data on recent
DNS lookups. Keeping a cache helps speed up the lookup process for returning IP
addresses. You can use the command ipconfig /displaydns to see what entries are currently
stored in your server’s cache.
Sometimes, though, a virus will hijack a server’s DNS cache and use it to re-route requests.
This is sometimes referred to as cache poisoning and is one of several reasons why you
may want to flush the DNS cache:
ipconfig /flushdns
When completed successfully, you should receive a message that says “Windows IP
configuration successfully flushed the DNS Resolver Cache.”
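The caching and flushing behavior described above can be modeled with a small sketch. This is a toy assumption-laden model (names and addresses are made up; real resolvers also cache negative answers and take TTLs from the server’s records), but it shows why entries expire and what flushing does:

```python
import time

class ResolverCache:
    """Toy model of a DNS resolver cache: entries expire after their TTL,
    and flush() clears everything, like `ipconfig /flushdns`."""

    def __init__(self):
        self._entries = {}  # name -> (ip, absolute expiry time)

    def put(self, name, ip, ttl_seconds):
        self._entries[name] = (ip, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None  # cache miss: a real resolver would query upstream
        ip, expires = entry
        if time.time() >= expires:
            del self._entries[name]
            return None  # expired: the name must be re-resolved
        return ip

    def flush(self):
        # Discard every cached answer, trusted or poisoned alike.
        self._entries.clear()
```

Flushing is a blunt but effective remedy for poisoning: because every entry is discarded, the next lookup for each name must go back to an authoritative source.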
5- WSUS Installation
On your Server, open Server Manager, on the Dashboard, click Add Roles and Features
then click next 3 times till you get Select server roles box, in Select server roles box, select
the Windows Server Update Services (In the pop-up window, click Add Features)… then
click Next…
On the Select role services box, verify that both WID Database and WSUS Services are
selected, and then click Next…
Figure 4. 27: WSUS Services
On the Content location selection box, type C:\Comsys WSUS, and then click Next…
In the Windows Server Update Services Configuration Wizard window, on the Before You
Begin, click Next to proceed…
On the Join the Microsoft Update Improvement Program, just click Next…
Figure 4. 36: WSUS Update Program
On the Choose Upstream Server box, click the Synchronize from Microsoft Update option
and then click Next…
On the Set Sync Schedule box, I choose Synchronize manually, then click Next…
Figure 4. 44: Synchronize Setup
On the Finished box, click the Begin initial synchronization option, and then click Finish…
** On my WSUS server you can see that WSUS is synchronizing update information; this might take a few minutes…
If everything goes well, on the synchronization status you can see that Status is Idle and
the Last Synchronization result: Succeeded…
In the Computers dialog box, select Use Group Policy or registry settings on computers
then click OK…
** I chose Use Group Policy because I wanted all my clients to get Windows updates via
GPO…
Next, click All Computers, and then, in the Actions pane, click Add Computer Group…
Figure 4. 50: Adding Computer Group
In the Add Computer Group dialog box, in the Name text box, type Computer system
Laptop, and then click Add…
** On the Domain Server, open Group Policy Management, right click Computer system
Laptop and then click Create a GPO in this domain, and Link it here…
In the New GPO dialog box, type WSUS Computer system Laptop, and then click OK…
In the Setting pane, double-click Specify intranet Microsoft update service location, and
then click the Enabled option, then in the Set the intranet update service for detecting
updates and the Set the intranet statistics server text boxes,
type http://dc01.comsys.local:8530, and then click OK…
Figure 4. 57: Specify intranet Microsoft update service location
In the Setting pane, double click Enable client-side targeting, in the Enable client-side
targeting dialog box, click the Enabled option, in the Target group name for this
computer text box, type Computer system Laptop, and then click OK…
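Under the hood, the two GPO settings above write well-known Windows Update policy values into each client’s registry. The equivalent .reg fragment is shown here only to illustrate what the GPO applies; in practice you would let Group Policy manage these rather than editing them by hand:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://dc01.comsys.local:8530"
"WUStatusServer"="http://dc01.comsys.local:8530"
"TargetGroupEnabled"=dword:00000001
"TargetGroup"="Computer system Laptop"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001
```

WUServer and WUStatusServer correspond to the two intranet URLs entered in the policy, and TargetGroup carries the client-side targeting group name.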
Next, let’s log in to our client PC as a domain administrator and verify that the client is
receiving the GPO by typing gpresult /r at the command prompt. In the output of the
command, confirm that, under COMPUTER SETTINGS, WSUS Computer system Laptop is
listed under Applied Group Policy Objects…
Figure 4. 59: Testing GPO
Next, we need to approve and deploy an update to our client PC…
In the WSUS console, under Updates, click Critical Updates, right-click any update you
prefer for your client PC, and then click Approve…
Figure 4. 61: Approve and deploy an Update
In the Approve Updates window, in the Computer system Laptop drop-down list box,
select Approved for Install…
Now, to deploy the selected updates, on the Client PC, in the cmd type Wuauclt.exe /detectnow…
Figure 4. 65: deploy the selected updates
Before confirming that the client can receive the update from the WSUS server, return to the
WSUS server, and on the WSUS console, under Download Status, verify that the selected
updates have finished downloading…
Now return to the client PC and open Windows Update from Control Panel; you should see
an update available for your client PC and can proceed with the installation…
6- File Server
Open Server Manager from the lower-left corner of the server desktop, as shown below.
Click Add Roles & Features from the Server Manager dashboard, as shown below.
You can see that File and Storage Services is already selected because we are installing this
service on the domain controller; if you add this role service on any other fresh server, you
have to follow the same process.
Figure 5. 2: Select File Server Feature
After clicking Next, it will install the services on the server on which you want to set up file
and share services.
Next, open File and Storage Services from the Server Manager dashboard, as shown below.
The Disks option shows the disks used to create volumes. You can attach more physical and
virtual disks and, after a scan detects them, configure further volumes on those disks.
Figure 4. 77: Volume and Disk
The Storage Pools option shows the details of groups of physical disks that form a pool,
enabling more efficient use of disk capacity. Currently no other storage is attached to the
server, so it shows an empty area. You can add a new storage pool from the Tasks button in
the top-right corner, as shown.
On the Configure sharing settings page, continue with the default settings and click Next.
Figure 4. 86: Configuring Sharing Settings
The next page shows the default permissions for that folder. If you want to edit the
permissions, you can do so with the Customize permissions button; otherwise, click Next.
A network topology is the arrangement with which computer systems or network devices are
connected to each other. Topologies may define both the physical and the logical aspects of
the network, and the logical and physical topologies of the same network may be the same
or different.
A tree topology is a special type of structure where many connected elements are arranged
like the branches of a tree. For example, tree topologies are frequently used to organize the
computers in a corporate network, or the information in a database.
In a tree topology, there can be only one connection between any two connected nodes.
Because any two nodes can have only one mutual connection, tree topologies create a
natural parent and child hierarchy.
In computer networks, a tree topology is also known as a star-bus topology. It incorporates
elements of both a bus topology and a star topology. Below is an example network diagram
of a tree topology, where the central nodes of two star networks are connected to one
another.
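The parent/child hierarchy and the single-path property described above can be sketched as follows. The node names are made up for illustration; the point is that because every node has exactly one parent, the path between any two nodes is unique and runs through their lowest common ancestor:

```python
# A tree topology as a parent map: every node except the root has exactly
# one parent, so there is exactly one path between any two nodes.
parents = {
    "core-switch": None,          # root (central node)
    "switch-A": "core-switch",
    "switch-B": "core-switch",
    "pc-1": "switch-A",
    "pc-2": "switch-A",
    "pc-3": "switch-B",
}

def path_to_root(node):
    """Walk up the hierarchy from a node to the root."""
    path = [node]
    while parents[node] is not None:
        node = parents[node]
        path.append(node)
    return path

def path_between(a, b):
    """The unique path between two nodes climbs to their lowest common ancestor."""
    up_a, up_b = path_to_root(a), path_to_root(b)
    ancestors_b = set(up_b)
    # Climb from a until we reach a node that is also an ancestor of b.
    common = next(n for n in up_a if n in ancestors_b)
    down = up_b[:up_b.index(common)]
    return up_a[:up_a.index(common) + 1] + list(reversed(down))
```

For example, traffic between two PCs on the same access switch only touches that switch, while traffic between PCs on different switches must traverse the central node.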
4: Windows should be updated online using Control Panel > Windows Update > Install
Updates
6: The default Administrator account must be renamed to Guest and the Guest account
renamed to Administrator; then a super-admin user must be added
7: Install antivirus software
11: Add the super user to the Domain Admins and Enterprise Admins groups in AD
14: Set a valid IP address, gateway, and DNS address on the server
Steps:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
3: Find the value named AutoShareServer and change its DWORD value to 0; if it is not
present, add it.
Perform the following steps to configure TCP/IP parameters to reduce the likelihood and
effect of DoS attacks.
Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
Key: TCPIP\Parameters
Value: SynAttackProtect
Parameter: 1
Key: TCPIP\Parameters
Value: EnableICMPRedirect
Parameter: 0
Key: TCPIP\Parameters
Value: EnableDeadGWDetect
Parameter: 0
Key: TCPIP\Parameters
Value: EnablePMTUDiscovery
Parameter: 0
Key: TCPIP\Parameters
Value: KeepAliveTime
Parameter: 300000
Key: TCPIP\Parameters
Value: DisableIPSourceRouting
Parameter: 2
Key: TCPIP\Parameters
Value: TcpMaxConnectResponseRetransmissions
Parameter: 2
Key: TCPIP\Parameters
Value: TcpMaxDataRetransmissions
Key: TCPIP\Parameters
Value: TCPMaxPortsExhausted
Parameter: 5
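The parameters listed above can be consolidated into a single .reg file, shown here as a sketch. TcpMaxDataRetransmissions is omitted because its value was not specified above; dword values are hexadecimal, e.g. 0x000493e0 = 300000:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"SynAttackProtect"=dword:00000001
"EnableICMPRedirect"=dword:00000000
"EnableDeadGWDetect"=dword:00000000
"EnablePMTUDiscovery"=dword:00000000
"KeepAliveTime"=dword:000493e0
"DisableIPSourceRouting"=dword:00000002
"TcpMaxConnectResponseRetransmissions"=dword:00000002
"TcpMaxPortsExhausted"=dword:00000005
```

Applying the settings as one file makes the hardening reproducible across every server in the data center instead of relying on manual edits.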
E&Y Recommendations
regedt32 >
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RemoteAccess\Parameters
\AccountLockout > set MaxDenials to 5 attempts
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RemoteAccess\Parameters
> change the value of EnableAudit to 1
To open Server Manager, click the Server Manager icon in the taskbar or select Server
Manager in the Start Menu.
Click Manage in the upper right portion of the screen and click Add Roles and Features to
open a wizard.
Note: You cannot add roles and features until Server Manager finishes loading. Wait
until Server Manager loads before you add roles and features.
On the Before you begin page, click Next to begin. You can skip this page in the future by
checking Skip this page by default box.
On the Select installation type page, choose Role-based or feature-based installation and
click Next.
On the Server Selection page, choose the server to which you want to add the role or
feature. In most cases, this choice is the server you are logged in to. Click Next.
Select all desired roles on the Server Roles page. When you add roles, the wizard prompts
you to add prerequisite roles and features, if any. After you have selected the desired roles,
click Next.
Select all desired features on the Features page and click Next.
Complete the configuration of the selected roles and features and click Next on each screen.
After you complete the initial configuration of the chosen features, the Confirmation page
displays and lists a summary of the changes. Verify the changes before proceeding. If you
want the server to restart automatically after installation completes, check the box labeled
Restart the destination server automatically if required.
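The wizard steps above can also be scripted. As a sketch, assuming the ServerManager PowerShell module is available (it ships with Windows Server 2012 and later), a role and its management tools can be installed in one command; Web-Server here is only an example role name, not one required by the text:

```powershell
# Sketch: install a role and its management tools without the wizard.
# "Web-Server" is an example role name. -Restart mirrors the wizard's
# "Restart the destination server automatically if required" checkbox.
Install-WindowsFeature -Name Web-Server -IncludeManagementTools -Restart

# Verify which roles and features are installed afterwards:
Get-WindowsFeature | Where-Object Installed
```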
Router Configuration:
Router> enable
Router# configure terminal
Router(config)# hostname R1
R1(config)# line console 0
R1(config-line)# password <console-password>
R1(config-line)# login
R1(config-line)# exit
R1(config)# interface GigabitEthernet0/0
R1(config-if)# no shutdown
(GigabitEthernet0/0 is an example interface name; substitute the interface being configured.)
=================================
Switch Configuration:
Switch> enable
Switch# show ?            (lists the available show commands)
Switch# configure terminal
Switch(config)# hostname S1
S1(config)# line console 0
S1(config-line)# password <console-password>
S1(config-line)# login
S1(config-line)# exit
S1(config)# interface FastEthernet0/1
S1(config-if)# no shutdown
S1(config-if)# exit
(FastEthernet0/1 is an example interface name; substitute the interface being configured.)
Debugging:
Router# debug ?
Router# reload
To me, building a sustainable data center means building facilities that don’t have a lasting,
detrimental impact on the planet. It means powering our data centers from renewable energy
sources; it means designing the most energy efficient facilities we possibly can and using the
very latest techniques and engineering infrastructure to provide efficient power and cooling
to our data halls.
It also means considering the recyclable content of the materials we use for our facilities,
minimizing waste sent to landfill and recycling waste heat where possible, whilst ensuring our
facilities are well maintained. It means working with our customers to ensure they are
streamlining their computing practices and deploying highly efficient server technology.
Today, renewable energy is often less expensive than brown power. Buyers can negotiate
long-term fixed-price or stable-price contracts for energy. This means energy costs from
companies using renewables are likely to be more stable and offer more reliable pricing than
fossil fuels.
If we can do all these things, then we are moving toward a sustainable data center and a
sustainable business. What’s good for the planet is good for business.
How the Internet of Things (IoT) Has Impacted Data Center Development
IoT devices gather large amounts of data, which can place heavy demands on data centers and
their networks. Whilst much of the focus around the IoT tends to be on the decentralization of
deployment, or edge computing, where devices sit close to the end points they are monitoring,
the centralized data center and cloud still play a crucial part as data is streamed back to a
centralized hub for analysis.
Connectivity is often an issue as most of these applications require a low latency connection
from their out-of-town location back to the centralized data center.
Ironically, many of the measures taken to reduce energy consumption and carbon
emissions – electric vehicles, autonomous cars, smart building systems controlling
efficient use of HVAC through temperature sensors, reduced airline travel by holding
video calls, and so on – drive more traffic through data centers and increase their
energy consumption.
In terms of Edge data centers, we are seeing increasing demand from customers who
require smaller parcels of IT capacity in out-of-town locations. This can be a challenge for
data center operators, since the size of a potential deployment may not justify the investment
required to build a new facility outside of primary data center locations.
At Iron Mountain Data Centers, we have a unique advantage on Edge data centers since we
already operate 1,450 global storage facilities through Iron Mountain Group. This provides
access to existing facilities in many secondary and tertiary locations.
From a funding perspective, debt and equity lenders are far more comfortable lending for
developments in established markets such as the FLAP markets in Europe; North Virginia,
Phoenix, Dallas, New York, Silicon Valley, Atlanta and Chicago in North America; and
Singapore, Hong Kong, India, Australia and Japan in APAC.
At Iron Mountain Data Centers, all our developed markets are in demand. In Europe, we’re
seeing demand in FLAP and the Nordic countries, as well as inquiries from places like Berlin
and Munich in Germany, Milan in Italy, Madrid in Spain, and other locations in Switzerland,
Poland, Turkey and Belgium.
In North America, all the key markets are busy, but our biggest demand continues to come in
Virginia and Phoenix.
In APAC, our Singapore facility is close to being full and we are seeing increasing amounts
of inquiries for Hong Kong and Indonesia. Our largest growth potential, however, is coming
from India, where we expect demand to double over the next couple of years in markets
such as Mumbai, Chennai, Bangalore, Kolkata, Hyderabad and Pune.
Data center customers are diverse, and their data center needs are too. Our retail colocation
customers often want a standard product offering in an existing facility. We strive to provide
tailor-made solutions for our customers, but many colocation customers are happy with
standard designs and can make it work for their requirements.
Our bigger customers often have specific engineering requirements. These are often larger
deployments that require exclusive use of a data hall and the associated engineering
infrastructure. We are seeing an increasing trend for some of our bigger customers to be
actively involved in the design process.
In the future, I think we will see a rise in decentralized locations for data centers, driven by
Edge. Data centers will be far more efficient in the engineering infrastructure, as well as the
efficiency of the servers deployed within the facilities. As design evolves, data centers will
hopefully consume less energy, generate less heat and be able to operate at higher
temperatures.
I suspect the operating temperatures within data halls will increase and engineering
infrastructure will be simplified as customers will be more dependent on the resiliency of their
own equipment, rather than rely on the infrastructure of their host. AI will inevitably be used
to much greater effect to ensure efficiency and resilience.
We will also see more carbon reduction technology such as carbon scrubbers. These are
just one more step towards a future where data centers become harmless to the
environment. Hopefully, with each new development, we are closer to meeting that goal.
Additional infrastructure was added to their UPS room, UPS/switch room and the data
center. These renovated rooms are now primarily cooled by a dedicated glycol cooling
system, distributed by a dual 15 hp glycol pump package with three 3-fan Liebert dry
coolers located on the roof of the 4th floor. All rooms are now protected by a new fire
suppression system; environmental monitoring was added to cover the newly installed
Liebert equipment, all fire suppression/detection systems and the existing UPS system;
and the water detection system was expanded.
The renovation consisted of decommissioning and removing four up-flow computer room
air conditioning units and three rooftop dry coolers; demolishing the existing interior walls
and ceiling to accommodate the new, expanded data center area; and constructing new
walls and repairing existing ones, with all walls built or repaired to conform to the UL 419
one-hour assembly rating. A new suspended ceiling system with 24” x 24” vinyl-faced
acoustical panels was installed, along with new lighting throughout the expansion area and
a raised access floor with a 1/16” high-performance covering.
EEC coordinated all delivery and rigging for the provided equipment, as well as equipment
start-up and certification services for all new equipment with factory-authorized
technicians. The company also contracts with EEC to maintain all UPS systems, UPS
batteries, HVAC systems, and fire suppression/detection systems.
Below are the details of the project:
850 sq/ft Data Center expansion area with a Tate 12” Raised Access Floor System
Environmental Monitoring
Expanded RLE LD2500 water detection system, monitoring beneath the raised access
floor in the expansion area
One Liebert N-Form Enterprise Edition Complete Monitoring Solution. This system is
currently monitoring all the new Liebert equipment installed, all fire suppression/detection
systems, as well as the existing UPS system.
Conclusion
The consolidation of distributed data centers or server rooms on university campuses offers
many advantages to their owners and administrators, but only minimal disadvantages. The
University at Albany carried out a decade-long project to design and build a state-of-the-art
data center. The libraries participated in a two-year project to migrate their servers to the
new data center. This included hiring a data center migration consulting firm and developing
a migration plan and schedule for the physical move, which took place in late summer 2014.
The authors have found that there are many advantages to consolidating
data centers, including taking advantage of economies of scale, an improved physical
environment, better backup services and security systems, and more. Lessons learned from
this experience include the value of participating in the process, reviewing migration
schedules carefully, clarifying the costs of consolidation, contributing to the development of
an SLA, and communicating all plans and developments to the libraries’ customers,
including faculty, staff, and students. As other university libraries consider the possibility of
consolidating their data centers, the authors hope that this paper will provide some guidance
to their efforts.
References
1- “Gigabit Campus Network Design – Principles and Architecture,” Cisco, at
http://www.cisco.com/warp/public/cc/so/neso/cpso/gcnd_wp.html
2- “Data Centers: Best Practices for Security and Performance,” Cisco, at
http://www.cisco.com/warp/public/cc/so/neso/wnso/power/gdmdd_wp.pdf
3- An Introduction to Wireless Technologies, F. Ricci, 2010/2011
4- CCNA Security 1.0 Course Booklet, Cisco Press, 2010
5- CCNA Security 640-554, Keith Barker, CCIE No. 6783 (R&S and Security), 2013
6- N. Nadarajah, E. Wong, and A. Nirmalathas, “Automatic Protection Switching and
LAN Emulation in Passive Optical Networks,” IEE Electronics Letters, vol. 42, no. 3,
pp. 173–173, 2006
7- Traffic Management and Measurement of Bandwidth & Loads, Mark Minasi, 2014
8- 802.11 Wireless LAN Fundamentals, P. Roshan and J. Leary, Cisco Press, 2004
9- Server Administration, Chapter 3, “Configure Network Services and Access”
Other web resources:
http://www.msi.org/publications/publications.cfm?pub=857
http://www.google.com.pl/
http://en.wikipedia.org/wiki/Actor-network_theory
http://www.nature.com/ncb/jornal/v1/n1/full/ncb0599_E13.html
http://stat.gamma.rug.nl.snijders/kadushin_concepts.pdf
http://www.trainsignal.com
http://www.microsoft.com