
200-310 Exam

Number: 200-310
Passing Score: 800
Time Limit: 120 min
File Version: 1.0

http://www.gratisexam.com/

Cisco

200-310

Designing for Cisco Internetwork Solutions

Version 1.0

Sections
1. Enterprise Network Design
2. Design Methodologies
3. Considerations for Expanding an Existing Network
4. Addressing and Routing Protocols in an Existing Network
5. Design Objectives

Exam B

QUESTION 1
View the Exhibit.

You are designing an IP addressing scheme for the network in the exhibit above.

Each switch represents hosts that reside in separate VLANs. The subnets should be allocated to match the following host capacities:
Router subnet: two hosts
SwitchA subnet: four hosts
SwitchB subnet: 10 hosts
SwitchC subnet: 20 hosts
SwitchD subnet: 50 hosts

You have chosen to subnet the 192.168.51.0/24 network.

Which of the following are you least likely to allocate?

A. a /25 subnet
B. a /26 subnet
C. a /27 subnet
D. a /28 subnet
E. a /29 subnet
F. a /30 subnet

Correct Answer: A
Section: Addressing and Routing Protocols in an Existing Network

Explanation:
Of the available choices, you are least likely to allocate a /25 subnet. The largest broadcast domain in this scenario contains 50 hosts, whereas a /25 subnet can
contain up to 126 assignable hosts. In this scenario, allocating a /25 subnet would reserve half the 192.168.51.0/24 network for a single virtual LAN (VLAN). The
total number of hosts for which you need addresses in this scenario is 86; therefore, you would need half the /24 network only if all 86 hosts resided in the same
VLAN.

You should begin allocating address ranges starting with the largest group of hosts to ensure that the entire group has a large, contiguous address range available.
Subnetting a contiguous address range in structured, hierarchical fashion enables routers to maintain smaller routing tables and eases administrative burden when
troubleshooting.

You are likely to use a /26 subnet. In this scenario, the largest VLAN contains 50 hosts. If you were to divide the 192.168.51.0/25 subnet into two /26 subnets, the
result would be two new subnets capable of supporting up to 62 assignable hosts: the 192.168.51.0/26 subnet and the 192.168.51.64/26 subnet. Therefore, you
should start subnetting with a /26 network. To maintain a logical, hierarchical IP structure, you could then allocate the 192.168.51.64/26 subnet to SwitchD's VLAN.

You are likely to use a /27 subnet. The next-largest broadcast domain in this scenario is the SwitchC subnet, which contains 20 hosts. If you were to divide the
192.168.51.0/26 subnet into two /27 subnets, the result would be two new subnets capable of supporting up to 30 assignable hosts: the 192.168.51.0/27 subnet and
the 192.168.51.32/27 subnet. To maintain a logical, hierarchical IP structure, you could then allocate the 192.168.51.32/27 subnet to SwitchC's VLAN.

You are likely to use a /28 subnet. The next-largest broadcast domain in this scenario is the SwitchB subnet, which contains 10 hosts. If you were to divide the
192.168.51.0/27 subnet into two /28 subnets, the result would be two new subnets capable of supporting up to 14 assignable hosts: the 192.168.51.0/28 subnet and
the 192.168.51.16/28 subnet. To maintain a logical, hierarchical IP structure, you could then allocate the 192.168.51.16/28 subnet to SwitchB's VLAN.

You are likely to use a /29 subnet. The next-largest broadcast domain in this scenario is the SwitchA subnet, which contains four hosts. If you were to divide the
192.168.51.0/28 subnet into two /29 subnets, the result would be two new subnets capable of supporting up to six assignable hosts: the 192.168.51.0/29 subnet and
the 192.168.51.8/29 subnet. To maintain a logical, hierarchical IP structure, you could then allocate the 192.168.51.8/29 subnet to SwitchA's VLAN.

You are likely to use a /30 subnet. The final subnet in this scenario is the link between RouterA and RouterB, which contains two hosts. If you were to divide the
192.168.51.0/29 subnet into two /30 subnets, the result would be two new subnets capable of supporting two assignable hosts each: the 192.168.51.0/30 subnet
and the 192.168.51.4/30 subnet. To maintain a logical, hierarchical IP structure, you could then allocate the 192.168.51.4/30 subnet to the link between RouterA and
RouterB. This would leave the 192.168.51.0/30 subnet unallocated. However, you could further divide the 192.168.51.0/30 subnet into single /32 host addresses
that could then be used for loopback IP addressing on the routers.
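The largest-group-first allocation described above can be sketched with Python's ipaddress module. This is my illustration, not part of the exam material, and it allocates each block contiguously from the bottom of the /24, whereas the explanation above assigns the upper half at each split; both orderings satisfy the same host counts.

```python
# Hypothetical sketch: carve 192.168.51.0/24 into VLSM blocks, largest first.
import ipaddress

base = ipaddress.ip_network("192.168.51.0/24")

# (name, hosts needed, prefix length): a /n subnet has 2**(32-n) - 2 usable hosts
plan = [("SwitchD", 50, 26), ("SwitchC", 20, 27),
        ("SwitchB", 10, 28), ("SwitchA", 4, 29), ("Router link", 2, 30)]

allocations = {}
cursor = base.network_address
for name, hosts, prefix in plan:
    subnet = ipaddress.ip_network((cursor, prefix))
    assert subnet.num_addresses - 2 >= hosts   # room for every host in the group
    allocations[name] = subnet
    cursor += subnet.num_addresses             # advance to the next contiguous block

for name, subnet in allocations.items():
    print(name, subnet)
```

Because allocation proceeds from the largest group to the smallest, each block lands on a valid subnet boundary without gaps, which is what keeps the addressing plan summarizable.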

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310
CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp. 311-312
Cisco: IP Addressing and Subnetting for New Users

QUESTION 2
Which of the following is a type of attack that can be mitigated by enabling DAI on campus access layer switches?

A. ARP poisoning
B. VLAN hopping
C. DHCP spoofing
D. MAC flooding

Correct Answer: A
Section: Considerations for Expanding an Existing Network

Explanation:
Dynamic ARP Inspection (DAI) can be enabled on campus access layer switches to mitigate Address Resolution Protocol (ARP) poisoning attacks. In an ARP
poisoning attack, which is also known as an ARP spoofing attack, the attacker sends a gratuitous ARP (GARP) message to a host. The message associates the
attacker's media access control (MAC) address with the IP address of a valid host on the network. Subsequently, traffic sent to the valid host address will go through
the attacker's computer rather than directly to the intended recipient. DAI protects against ARP poisoning attacks by inspecting all ARP packets that are received on
untrusted ports.

Dynamic Host Configuration Protocol (DHCP) spoofing attacks can be mitigated by enabling DHCP snooping on campus access layer switches, not by enabling
DAI. In a DHCP spoofing attack, an attacker installs a rogue DHCP server on the network in an attempt to intercept DHCP requests. The rogue DHCP server can
then respond to the DHCP requests with its own IP address as the default gateway address; hence, all traffic is routed through the rogue DHCP server. DHCP
snooping is a feature of Cisco Catalyst switches that helps prevent rogue DHCP servers from providing incorrect IP address information to hosts on the network.
When DHCP snooping is enabled, DHCP servers are placed onto trusted switch ports and other hosts are placed onto untrusted switch ports. If a DHCP reply
originates from an untrusted port, the port is disabled and the reply is discarded.

Virtual LAN (VLAN) hopping attacks can be mitigated by disabling Dynamic Trunking Protocol (DTP) on campus access layer switches, not by enabling DAI. A
VLAN hopping attack occurs when a malicious user sends frames over a VLAN trunk link; the frames are tagged with two different 802.1Q tags, with the goal of
sending the frame to a different VLAN. In a VLAN hopping attack, a malicious user connects to a switch by using an access VLAN that is the same as the native
VLAN on the switch. If the native VLAN on a switch were VLAN 1, the attacker would connect to the switch by using VLAN 1 as the access VLAN. The attacker
would transmit packets containing 802.1Q tags for the native VLAN and tags spoofing another VLAN. Each packet would be forwarded out the trunk link on the
switch, and the native VLAN tag would be removed from the packet, leaving the spoofed tag in the packet. The switch on the other end of the trunk link would
receive the packet, examine the 802.1Q tag information, and forward the packet to the destination VLAN, thus allowing the malicious user to inject packets into the
destination VLAN even though the user is not connected to that VLAN.

To mitigate VLAN hopping attacks, you should configure the native VLAN on a switch to an unused value, remove the native VLAN from each end of the trunk link,
place any unused ports into a common unrouted VLAN, and disable DTP for unused and nontrunk ports. DTP is a Cisco-proprietary protocol that eases
administration by automating the trunk configuration process. However, for nontrunk links and for unused ports, a malicious user who has gained access to the port
could use DTP to gain access to the switch through the exchange of DTP messages. By disabling DTP, you can prevent a user from using DTP messages to gain
access to the switch.

MAC flooding attacks can be mitigated by enabling port security on campus access layer switches, not by enabling DAI. In a MAC flooding attack, an attacker
generates thousands of forged frames every minute with the intention of overwhelming the switch's MAC address table. Once this table is flooded, the switch can no
longer make intelligent forwarding decisions and all traffic is flooded. This allows the attacker to view all data sent through the switch because all traffic will be sent
out each port. Implementing port security can help mitigate MAC flooding attacks by limiting the number of MAC addresses that can be learned on each interface to
a maximum of 128. A MAC flooding attack is also known as a Content Addressable Memory (CAM) table overflow attack.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 12, Loss of Availability, pp. 495-496
Cisco: Layer 2 Security Features on Cisco Catalyst Layer 3 Fixed Configuration Switches Configuration Example: Background Information
Cisco: Enterprise Data Center Topology: Preventing VLAN Hopping

QUESTION 3
You issue the following commands on RouterA:

Packets sent to which of the following destination IP addresses will be forwarded to the 10.1.1.3 next-hop IP address? (Choose two.)

A. 172.16.0.1
B. 192.168.0.1
C. 192.168.0.14
D. 192.168.0.17
E. 192.168.0.26
F. 192.168.1.1

Correct Answer: DE
Section: Addressing and Routing Protocols in an Existing Network

Explanation:
Of the choices available, packets sent to 192.168.0.17 and 192.168.0.26 will be forwarded to the 10.1.1.3 next-hop IP address. When a packet is sent to a router,
the router checks the routing table to see if the next-hop address for the destination network is known. The routing table can be filled dynamically by a routing
protocol, or you can configure the routing table manually by issuing the ip route command to add static routes. The ip route command consists of the syntax ip route
net-address mask next-hop, where net-address is the network address of the destination network, mask is the subnet mask of the destination network, and next-
hop is the IP address of a neighboring router that can reach the destination network.

A default route is used to send packets that are destined for a location that is not listed elsewhere in the routing table. For example, the ip route 0.0.0.0 0.0.0.0
10.1.1.1 command specifies that packets destined for addresses not otherwise specified in the routing table are sent to the default next-hop address of 10.1.1.1. A
net-address and mask combination of 0.0.0.0 0.0.0.0 specifies any packet destined for any network.

If multiple static routes to a destination are known, the most specific route is used; the most specific route is the route with the longest network mask. For example,
a route to 192.168.0.0/28 would be used before a route to 192.168.0.0/24. Therefore, the following rules apply on RouterA:
Packets sent to the 192.168.0.0 255.255.255.240 network are forwarded to the next-hop address of 10.1.1.4. This includes destination addresses from
192.168.0.0 through 192.168.0.15.
Packets sent to the 192.168.0.0 255.255.255.0 network, except those sent to the 192.168.0.0 255.255.255.240 network, are forwarded to the next-hop address of
10.1.1.3. This includes destination addresses from 192.168.0.16 to 192.168.0.255.
Packets sent to the 192.168.0.0 255.255.0.0 network, except those sent to the 192.168.0.0 255.255.255.0 network, are forwarded to the next-hop address of
10.1.1.2. This includes destination addresses from 192.168.1.0 through 192.168.255.255.
Packets sent to any destination not listed in the routing table are forwarded to the default static route next-hop address of 10.1.1.1.

The 192.168.0.17 and 192.168.0.26 addresses are within the range of addresses from 192.168.0.16 to 192.168.0.255. Therefore, packets sent to these addresses
are forwarded to the next-hop address of 10.1.1.3.
The 192.168.0.1 and 192.168.0.14 addresses are within the range of addresses from 192.168.0.0 through 192.168.0.15. Therefore, packets sent to these
addresses are forwarded to the next-hop address of 10.1.1.4.
The 192.168.1.1 IP address is within the range of addresses from 192.168.1.0 through 192.168.255.255. Therefore, packets sent to 192.168.1.1 are forwarded to
the next-hop address of 10.1.1.2.

RouterA does not have a specific static route to the network containing the 172.16.0.1 address. Therefore, packets sent to 172.16.0.1 are forwarded to the default static route next-hop address of
10.1.1.1.
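The longest-prefix-match behavior described above can be sketched in Python. This is my illustration, not exam material: the routing table is reconstructed from the forwarding rules listed in the explanation, and the next_hop helper is a hypothetical name.

```python
# Minimal sketch: longest-prefix match picks the most specific route;
# 0.0.0.0/0 (the default route) catches everything else.
import ipaddress

routes = {
    ipaddress.ip_network("0.0.0.0/0"):      "10.1.1.1",  # default static route
    ipaddress.ip_network("192.168.0.0/16"): "10.1.1.2",
    ipaddress.ip_network("192.168.0.0/24"): "10.1.1.3",
    ipaddress.ip_network("192.168.0.0/28"): "10.1.1.4",
}

def next_hop(dest):
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest mask wins
    return routes[best]

print(next_hop("192.168.0.17"))  # 10.1.1.3
print(next_hop("192.168.0.14"))  # 10.1.1.4
print(next_hop("172.16.0.1"))    # 10.1.1.1
```

Because every address matches the 0.0.0.0/0 entry, the default route is chosen only when no more specific prefix covers the destination.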

Reference:
Boson ICND2 Curriculum, Module 2: Implementing VLSMs and Summarization, Choosing a Route
Cisco: IP Routing Protocol-Independent Commands: ip route
Cisco: Specifying a Next Hop IP Address for Static Routes

QUESTION 4
DRAG DROP
Select the protocols and port numbers from the left, and drag them to the corresponding traffic types on the right. Not all protocols and port numbers will be used.

Select and Place:

Correct Answer:

Section: Considerations for Expanding an Existing Network

Explanation:
Lightweight Access Point Protocol (LWAPP) uses User Datagram Protocol (UDP) port 12222 for data traffic and UDP port 12223 for control traffic. LWAPP is a
protocol developed by Cisco and is used as part of the Cisco Unified Wireless Network architecture. LWAPP creates a tunnel between a lightweight access point
(LAP) and a wireless LAN controller (WLC); in LWAPP operations, both a LAP and a WLC are required. The WLC handles many of the management functions for
the link, such as user authentication and security policy management, whereas the LAP handles real-time operations, such as sending and receiving 802.11 frames,
wireless encryption, access point (AP) beacons, and probe messages. Cisco WLC devices prior to software version 5.2 use LWAPP.

Control and Provisioning of Wireless Access Points (CAPWAP) uses UDP port 5246 for control traffic and UDP port 5247 for data traffic. CAPWAP is a standards-
based version of LWAPP. Cisco WLC devices that run software version 5.2 and later use CAPWAP instead of LWAPP.

Neither LWAPP nor CAPWAP uses Transmission Control Protocol (TCP) for communication. TCP is a connection-oriented protocol. Because UDP is a
connectionless protocol, it does not have the additional connection overhead that TCP has; therefore, UDP is faster but less reliable.

Reference:
Cisco: LWAPP Traffic Study
IETF: RFC 5415: Control And Provisioning of Wireless Access Points (CAPWAP) Protocol Specification

QUESTION 5
Which of the following should not be implemented in the core layer? (Choose two.)

A. ACLs
B. QoS
C. load balancing
D. inter-VLAN routing
E. a partially meshed topology

Correct Answer: AD
Section: Enterprise Network Design

Explanation:
Access control lists (ACLs) and inter-VLAN routing should not be implemented in the core layer. Because the core layer focuses on low latency and fast transport
services, you should not implement mechanisms that can introduce unnecessary latency into the core layer. For example, mechanisms such as process-based
switching, packet manipulation, and packet filtering introduce latency and should be avoided in the core.

The hierarchical network model divides the operation of the network into three categories:
Core layer - provides fast backbone services to the distribution layer
Distribution layer - provides policy-based access between the core and access layers
Access layer - provides physical access to the network

ACLs and inter-VLAN routing are typically implemented in the distribution layer. Because the distribution layer is focused on policy enforcement, the distribution layer
provides the ideal location to implement mechanisms such as packet filtering and packet manipulation. In addition, because the distribution layer acts as an
intermediary between the access layer devices and the core layer, the distribution layer is also the recommended location for route summarization and redistribution.

Because a fully meshed topology can add unnecessary cost and complexity to the design and operation of the network, a partially meshed topology is often
implemented in the core layer. A fully meshed topology is not required if multiple paths exist between core layer and distribution layer devices. The core layer is
particularly suited to a mesh topology because it typically contains the fewest network devices. Fully meshed topologies restrict the scalability of a design.
Hierarchical designs are intended to aid scalability, particularly in the access layer.

Quality of Service (QoS) is often implemented in all three layers of the hierarchical model. However, because the access layer provides direct connectivity to
network endpoints, QoS classification and marking are typically performed in the access layer. Cisco recommends classifying and marking packets as close to the
source of traffic as possible. Although classification and marking can be performed in the access layer, QoS mechanisms must be implemented in each of the
higher layers for QoS to be effective.

Load balancing is often implemented in all three layers of the hierarchical model. Load balancing offers redundant paths for network traffic; the redundant paths can
be used to provide bandwidth optimization and network resilience. Typically, the core and distribution layers offer a greater number of redundant paths than the
access layer does. Because some devices, such as network hosts, often use only a single connection to the access layer, Cisco recommends redundant links for
mission-critical endpoints, such as servers.

Reference:
Cisco: Internetwork Design Guide Internetwork Design Basics

QUESTION 6
You issue the show ip bgp neighbors command on RouterA and receive the following output:

Which of the following is most likely true?

A. RouterA is operating in AS 64496.
B. RouterA has been assigned a BGP RID of 1.1.1.2.
C. RouterA has been unable to establish a BGP session with the remote router.
D. RouterA is configured with the neighbor 203.0.113.1 remote-as 64496 command.

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network

Explanation:
Most likely, RouterA is configured with the neighbor 203.0.113.1 remote-as 64496 command. In this scenario, the output of the show ip bgp neighbors command
reports that RouterA's Border Gateway Protocol (BGP) neighbor has an IP address of 203.0.113.1 and is operating within the remote autonomous system number
(ASN) of 64496. The syntax of the neighbor remote-as command is neighbor ip address remote-as as-number, where ip address and as-number are the IP address
and ASN of the neighbor router. For example, the following command configures a peering relationship with a router that has an IP address of 203.0.113.1 in
autonomous system (AS) 64496:

router(config-router)#neighbor 203.0.113.1 remote-as 64496

Because BGP does not use a neighbor discovery process like many other routing protocols, it is essential that every peer is manually configured and reachable
through Transmission Control Protocol (TCP) port 179. Once a peer has been configured with the neighbor remote-as command, the local BGP speaker will attempt
to transmit an OPEN message to the remote peer. If the OPEN message is not blocked by existing firewall rules or other security mechanisms, the remote peer will
respond with a KEEPALIVE message and will continue to periodically exchange KEEPALIVE messages with the local peer. A BGP speaker will consider a peer
dead if a KEEPALIVE message is not received within a period of time specified by a hold timer. Routing information is then exchanged between peers by using
UPDATE messages. UPDATE messages can include advertised routes and withdrawn routes. Withdrawn routes are those that are no longer considered feasible.
Statistics regarding the number of BGP messages, such as UPDATE messages, can be viewed in the output of the show ip bgp neighbors command.

The output of the show ip bgp neighbors command in this scenario does not indicate that RouterA is operating in AS 64496. Nor does the output indicate that
RouterA has been assigned a BGP router ID (RID) of 1.1.1.2. Among other things, the partial command output from the show ip bgp neighbors command indicates
that the remote peer has an IP address of 203.0.113.1, an ASN of 64496, a RID of 1.1.1.2, an external BGP (eBGP) session that is in the Established state, and a hold
time of 180 seconds.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, BGP Neighbors, pp. 444-445
Cisco: Cisco IOS IP Routing: BGP Command Reference: neighbor remote-as
Cisco: Cisco IOS IP Routing: BGP Command Reference: show ip bgp neighbors

QUESTION 7
View the Exhibit.

Refer to the exhibit above. PVST+ is enabled on all the switches. The Layer 3 switch on the right, DSW2, is the root bridge for VLAN 20. The Layer 3 switch on the
left, DSW1, is the root bridge for VLAN 10. Devices on VLAN 10 use DSW1 as a default gateway. Devices on VLAN 20 use DSW2 as a default gateway. You want
to ensure that the network provides high redundancy and fast convergence.

Which of the following are you most likely to do?

A. physically connect ASW1 to ASW2
B. physically connect ASW2 to ASW3
C. physically connect ASW1 to both ASW2 and ASW3
D. replace PVST+ with RSTP
E. replace PVST+ with RPVST+

Correct Answer: E
Section: Enterprise Network Design

Explanation:
Most likely, you would replace Per-VLAN Spanning Tree Plus (PVST+) with Rapid PVST+ (RPVST+) in order to ensure that the network provides fast convergence.
PVST+ is a revision of the Cisco-proprietary Per-VLAN Spanning Tree (PVST), which enables a separate spanning tree to be established for each virtual LAN
(VLAN). Therefore, a per-VLAN implementation of STP, such as PVST+, enables the location of a root switch to be optimized on a per-VLAN basis. However,
PVST+ progresses through the same spanning tree states as the 802.1D-based Spanning Tree Protocol (STP). Thus it can take up to 30 seconds for a PVST+ link to
begin forwarding traffic. Rapid PVST+ provides faster convergence because it passes through the same three states as the 802.1w-based Rapid STP (RSTP).
Therefore, RPVST+ provides faster convergence than PVST+.

The network in this scenario is already provisioned with high redundancy. Every access layer switch in this scenario is connected to every distribution layer switch. In
addition, the two distribution layer switches are connected by using an EtherChannel bundle. This configuration creates multiple paths to the root bridge for each
VLAN. Connecting any of the access layer switches to any of the other access layer switches might add another layer of redundancy, but this would not provide as
much benefit as replacing PVST+ with RPVST+ in this scenario.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103
Cisco: Spanning Tree from PVST+ to RapidPVST Migration Configuration Example: Background Information

QUESTION 8
Which of the following VPN tunnels support encapsulation of dynamic routing protocol traffic? (Choose three.)

A. IPSec
B. IPSec VTI
C. GRE over IPSec
D. DMVPN hub-and-spoke
E. DMVPN spoke-to-spoke

Correct Answer: BCD
Section: Enterprise Network Design

Explanation:
IP Security (IPSec) Virtual Tunnel Interface (VTI), Generic Routing Encapsulation (GRE) over IPSec, and Dynamic Multipoint Virtual Private Network (DMVPN) hub-
and-spoke virtual private network (VPN) tunnels support encapsulation of dynamic routing protocol traffic, such as Open Shortest Path First (OSPF) and Enhanced
Interior Gateway Routing Protocol (EIGRP) traffic. A VPN tunnel provides secure, private network connectivity over an untrusted medium, such as the Internet.

IPSec VTI provides support for IP multicast and dynamic routing protocol traffic. However, it does not support non-IP protocols, and it has limited interoperability with
non-Cisco routers.
GRE over IPSec provides support for IP multicast and dynamic routing protocol traffic. In addition, it provides support for non-IP protocols. Because the focus of
GRE is to transport many different protocols, it has very limited security features. Therefore, GRE relies on IPSec to provide data confidentiality and data integrity.
Although GRE was developed by Cisco, GRE works on Cisco and non-Cisco routers.

DMVPN hub-and-spoke VPN tunnels provide support for IP multicast and dynamic routing protocol traffic. However, they support only IP traffic and operate only on
Cisco routers.

DMVPN spoke-to-spoke VPN tunnels do not provide support for IP multicast or dynamic routing protocol traffic. In addition, they support only IP traffic and operate
only on Cisco routers.

IPSec VPN tunnels do not provide support for IP multicast or dynamic routing protocol traffic. Although IPSec can be used on Cisco and non-Cisco routers, it can
be used only for IP traffic and provides no support for non-IP protocols.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise VPN vs. Service Provider VPN, pp. 255-263
Cisco: IPSec VPN WAN Design Overview: Design Selection

QUESTION 9
HostA is a computer on your company's network. RouterA is a NAT router. HostA sends a packet to HostB, and HostB sends a packet back to HostA.

Which of the following addresses is an outside local address?

A. 15.16.17.18
B. 22.23.24.25
C. 192.168.1.22
D. 192.168.1.30

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network

Explanation:
The 192.168.1.30 address is an outside local address. An outside local address is an IP address that represents an outside host to the local network. Network
Address Translation (NAT) translates between public and private IP addresses to enable hosts on a privately addressed network to access the Internet. Public
addresses are routable on the Internet, and private addresses are routable only on internal networks. Several IP address ranges are reserved for private, internal
use; these addresses, shown below, are defined in Request for Comments (RFC) 1918.
10.0.0.0 through 10.255.255.255
172.16.0.0 through 172.31.255.255
192.168.0.0 through 192.168.255.255
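As a quick illustration (mine, not the exam's), Python's ipaddress module already recognizes the RFC 1918 ranges listed above, which makes it easy to check whether a given address is private or globally routable:

```python
# Sketch: classify the addresses from this question as RFC 1918 private
# or globally routable. The address values come from the question text.
import ipaddress

# Addresses inside the RFC 1918 ranges are private:
for ip in ("10.0.0.1", "172.31.255.254", "192.168.1.22", "192.168.1.30"):
    assert ipaddress.ip_address(ip).is_private

# Addresses outside those ranges are globally routable:
for ip in ("15.16.17.18", "22.23.24.25"):
    assert ipaddress.ip_address(ip).is_global

print("all classifications check out")
```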

The outside local address is often the same as the outside global address, particularly when inside hosts attempt to access resources on the Internet. However, in
some configurations, it is necessary to configure a NAT translation that allows a local address on the internal network to identify an outside host. When RouterA
receives a packet destined for 192.168.1.30, RouterA translates the 192.168.1.30 outside local address to the 15.16.17.18 outside global address and forwards the
packet to its destination. To configure a static outside local-to-outside global IP address translation, you should issue the ip nat outside source static outside-global
outside-local command.

In this scenario, 15.16.17.18 is an outside global address. An outside global address is an IP address that represents an outside host to the global network. Outside
global addresses are public IP addresses assigned to an Internet host by the host's operator. The outside global address is usually the address registered with the
Domain Name System (DNS) server to map a host's public IP address to a friendly name such as www.mycompany.com.

In this scenario, 192.168.1.22 is an inside local address. An inside local address is an IP address that represents an inside host to the local network. Inside local
addresses are typically private IP addresses defined by RFC 1918.

In this scenario, 22.23.24.25 is an inside global address. An inside global address is a publicly routable IP address that is used to represent an inside host to the
global network. Inside global IP addresses are typically assigned from a NAT pool on the router. You can issue the ip nat pool command to define a NAT pool. For
example, the ip nat pool natpool 22.23.24.11 22.23.24.30 netmask 255.255.255.224 command allocates the IP addresses 22.23.24.11 through 22.23.24.30 to be
used as inside global IP addresses. When a NAT router receives a packet destined for the Internet from a local host, it changes the inside local address to an inside
global address and forwards the packet to its destination.

In addition to configuring a NAT pool to dynamically translate addresses, you can configure static inside local-to-inside global IP address translations by issuing the
ip nat inside source static inside-local inside-global command. This command maps a single inside local address on the local network to a single inside global
address on the outside network.

It is important to specify the inside and outside interfaces when you configure a NAT router. To specify an inside interface, you should issue the ip nat inside
command from interface configuration mode. To specify an outside interface, you should issue the ip nat outside command from interface configuration mode.

The following graphic depicts the relationship between inside local, inside global, outside local, and outside global addresses:

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Private Addresses, pp. 299-300
CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302
Cisco: NAT: Local and Global Definitions

QUESTION 10
Which of the following OSPF areas accept all LSAs? (Choose two.)

A. stub
B. not-so-stubby
C. totally stubby
D. backbone
E. standard

Correct Answer: DE
Section: Addressing and Routing Protocols in an Existing Network

Explanation:
Standard areas and backbone areas accept all link-state advertisements (LSAs). Every router in a standard area contains the same Open Shortest Path First
(OSPF) database. If the standard area's ID number is 0, the area is a backbone area. The backbone area must be contiguous, and all OSPF areas must connect to
the backbone area. If a direct connection to the backbone area is not possible, you must create a virtual link to connect to the backbone area through a
nonbackbone area.

Stub areas, totally stubby areas, and not-so-stubby areas (NSSAs) flood only certain types of LSAs. For example, none of these areas floods Type 5 LSAs, which
originate from OSPF autonomous system boundary routers (ASBRs). Instead, stub areas and totally stubby areas are injected with a single default route from

http://www.gratisexam.com/
an ABR. Routers inside a stub area or a totally stubby area will send all packets destined for another area to the area border router (ABR). In addition, a totally
stubby area does not accept Type 3, 4, or 5 summary LSAs, which advertise inter-area routes. These LSAs are replaced by a default route at the ABR. As a result,
routing tables are kept small within the totally stubby area.

An NSSA floods Type 7 LSAs within its own area, but does not accept or flood Type 5 LSAs. Therefore, an NSSA does not accept all LSAs. Similar to Type 5 LSAs,
a Type 7 LSA is an external LSA that originates from an ASBR. However, Type 7 LSAs are only flooded to an NSSA.
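The LSA acceptance rules discussed above can be sketched as a small Python lookup table. This is an illustrative summary only; the set layout is shorthand for the behavior described in the explanation, not an exhaustive statement of OSPF flooding rules:

```python
# Sketch of which LSA types each OSPF area type accepts, per the
# discussion above. Type 7 exists only inside NSSAs; totally stubby
# areas still receive a default route as a Type 3 LSA from the ABR.
LSAS_ACCEPTED = {
    "backbone":       {1, 2, 3, 4, 5},
    "standard":       {1, 2, 3, 4, 5},
    "stub":           {1, 2, 3},        # Type 4 and 5 blocked
    "totally stubby": {1, 2},           # Type 3, 4, and 5 blocked
    "nssa":           {1, 2, 3, 7},     # Type 7 replaces Type 5
}

# Only the backbone and standard areas accept external (Type 5) LSAs.
accepts_type5 = [area for area, types in LSAS_ACCEPTED.items() if 5 in types]
print(accepts_type5)  # ['backbone', 'standard']
```

This also makes the answer to the question explicit: only the backbone and standard area types accept all LSA types.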

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, OSPF Stub Area Types, pp. 437-438
Cisco: What Are OSPF Areas and Virtual Links?: Normal, Stub, Totally Stub and NSSA Area Differences

QUESTION 11
In a switched hierarchical design, which enterprise campus module layer or layers exclusively use Layer 2 switching?

A. only the campus core layer


B. the distribution and campus core layers
C. only the distribution layer
D. the distribution and access layers
E. only the access layer

Correct Answer: E
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
In a switched hierarchical design, only the access layer of the enterprise campus module uses Layer 2 switching exclusively. The access layer of the enterprise campus module provides end users with physical access to the network. A Layer 2 switching design can use Virtual Switching System (VSS) in place of First Hop Redundancy Protocols (FHRPs) for redundancy, and it requires that inter-VLAN traffic be routed in the distribution layer of the hierarchy. Also, Spanning Tree Protocol (STP) in the access layer will prevent more than one connection between an access layer switch and the distribution layer from becoming active at a given time.

In a Layer 3 switching design, the distribution and campus core layers of the enterprise campus module use Layer 3 switching exclusively. Such a design relies on FHRPs at the distribution layer for first-hop high availability. In addition, a Layer 3 switching design typically uses route filtering on links that face the access layer of the design.

The distribution layer of the enterprise campus module provides link aggregation between layers. Because the distribution layer is the intermediary between the
access layer and the campus core layer, the distribution layer is the ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS),
and perform tasks that involve packet manipulation, such as routing. In a switched hierarchical design, the switches in the distribution layer use Layer 2 switching on
ports connected to the access layer and Layer 3 switching on ports connected to the campus core layer.

The campus core layer of the enterprise campus module provides fast transport services between the modules of the enterprise architecture, such as the
enterprise edge and the intranet data center. Because the campus core layer acts as the network's backbone, it is essential that every distribution layer device have
multiple paths to the campus core layer. Multiple paths between the campus core and distribution layer devices ensure that network connectivity is maintained if a
link or device fails in either layer. In a switched hierarchical design, the campus core layer switches use Layer 3 switching exclusively.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 3, Distribution Layer Best Practices, pp. 97-99
Cisco: Cisco SAFE Reference Guide: Enterprise Campus

QUESTION 12
Which of the following best describes PAT?

A. It translates a single inside local address to a single inside global address.


B. It translates a single outside local address to a single outside global address.
C. It translates inside local addresses to inside global addresses that are allocated from a pool.
D. It uses ports to translate inside local addresses to one or more inside global addresses.

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Port Address Translation (PAT) uses ports to translate inside local addresses to one or more inside global addresses. The Network Address Translation (NAT)
router uses port numbers to keep track of which packets belong to each host. PAT is also called NAT overloading.

NAT translates between public and private IP addresses to enable hosts on a privately addressed network to access the Internet. Public addresses are routable on
the Internet, and private addresses are routable only on internal networks. Request for Comments (RFC) 1918 defines several IP address ranges that are reserved
for private, internal use:
10.0.0.0 through 10.255.255.255
172.16.0.0 through 172.31.255.255
192.168.0.0 through 192.168.255.255
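The three RFC 1918 ranges above correspond to the CIDR blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. As a quick sanity check, Python's standard ipaddress module can test whether a given address falls inside one of them:

```python
import ipaddress

# The three RFC 1918 blocks listed above, in CIDR form.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """Return True if the address falls in one of the RFC 1918 ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("172.31.255.255"))  # True: top of the 172.16.0.0/12 block
print(is_rfc1918("172.32.0.1"))      # False: just outside the range
```

Note that 172.32.0.1 is not private even though it "looks" close to the 172.16.0.0 range, because the /12 block ends at 172.31.255.255.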

Because NAT performs address translation between private and public addresses, NAT effectively hides the address scheme used by the internal network, which
can increase security. NAT also reduces the number of public IP addresses that a company needs to allow its devices to access Internet resources, thereby
conserving IP version 4 (IPv4) address space.

An inside local address is typically an RFC 1918-compliant IP address that represents an internal host to the internal network. An inside global address is used to represent an internal host to an external network.

Static NAT translates a single inside local address to a single inside global address or a single outside local address to a single outside global address. You can
configure a static inside local-to-inside global IP address translation by issuing the ip nat inside source static inside-local inside-global command. To configure a
static outside local-to-outside global address translation, you should issue the ip nat outside source static outside-global outside-local command.

Dynamic NAT translates local addresses to global addresses that are allocated from a pool. To create a NAT pool, you should issue the ip nat pool nat-pool start-ip end-ip {netmask mask | prefix-length prefix} command. To enable translation of inside local addresses, you should issue the ip nat inside source list access-list pool nat-pool [overload] command.

When a NAT router receives an Internet-bound packet from a local host, the NAT router performs the following tasks:
It checks the static NAT mappings to verify whether an inside global address mapping exists for the local host.
If no static mapping exists, it dynamically maps the inside local address to an unused inside global address, if one is available, from the NAT pool.
It changes the inside local address in the packet header to the inside global address and forwards the packet to its destination.

When all the inside global addresses in the NAT pool are mapped, no other inside local hosts will be able to communicate on the Internet. This is why NAT
overloading is useful. When NAT overloading is configured, an inside local address, along with a port number, is mapped to an inside global address. The NAT
router uses port numbers to keep track of which packets belong to each host.
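The port-tracking behavior of NAT overloading can be sketched in a few lines of Python. The class name and the simple sequential port-allocation scheme below are purely illustrative; real NAT devices also track protocol, direction, and translation timeouts:

```python
import itertools

class PatTable:
    """Minimal sketch of NAT overload: many inside local addresses share
    one inside global address, distinguished by translated port number."""

    def __init__(self, inside_global: str):
        self.inside_global = inside_global
        self._next_port = itertools.count(1024)  # illustrative port pool
        self._mappings = {}  # (local_ip, local_port) -> translated port

    def translate(self, local_ip: str, local_port: int) -> tuple:
        key = (local_ip, local_port)
        if key not in self._mappings:            # reuse an existing mapping
            self._mappings[key] = next(self._next_port)
        return (self.inside_global, self._mappings[key])

pat = PatTable("203.0.113.10")
print(pat.translate("192.168.1.10", 5000))  # ('203.0.113.10', 1024)
print(pat.translate("192.168.1.11", 5000))  # ('203.0.113.10', 1025)
print(pat.translate("192.168.1.10", 5000))  # reused: ('203.0.113.10', 1024)
```

Two different inside hosts using the same source port still receive distinct translated ports, which is how a single inside global address can serve many inside local addresses.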

You can issue the ip nat inside source list access-list interface outside-interface overload command to configure NAT overload with a single inside global address,
or you can issue the ip nat inside source list access-list pool nat-pool overload command to configure NAT overloading with a NAT pool.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302
Cisco: Configuring Network Address Translation: Getting Started: Example: Allowing Internal Users to Access the Internet

QUESTION 13
Which of the following statements are true regarding the function of the LAP in the Cisco Unified Wireless Network architecture? (Choose three.)

A. The LAP determines which RF channel should be used to transmit 802.11 frames.
B. The LAP supports 802.11 encryption.
C. The LAP must be located on the same subnet as a WLC.
D. The LAP maintains associations with client computers.
E. The LAP can function without a WLC.
F. The LAP should be connected to an access port on a switch.

Correct Answer: BDF


Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
In the Cisco Unified Wireless Network architecture, a lightweight access point (LAP) supports 802.11 encryption, maintains associations with client computers, and
should be connected to an access port on a switch. A LAP creates a Lightweight Access Point Protocol (LWAPP) tunnel between itself and a wireless LAN controller
(WLC); in LWAPP operations, both a LAP and a WLC are required. The WLC handles many of the management functions for the link, such as user authentication
and security policy management, while the LAP handles real-time operations, such as sending and receiving 802.11 frames, wireless encryption, access point (AP)
beacons, and probe messages.

When connecting a LAP to a network, you should connect the LAP to an access port on a switch, not to a trunk port. Because the WLC handles the management
functions for LWAPP operations, the LAP cannot begin associating with client computers unless a WLC is available on the network. Therefore, the LAP must
associate with a WLC after it is connected to the network. After connecting to a WLC and obtaining its configuration information, the LAP can begin associating with
clients. The LAP can receive encrypted or unencrypted 802.11 frames. The WLC, however, does not support 802.11 encryption; as the data passes through the
LAP, it is decrypted and then sent to the WLC for further forwarding.

It is not necessary for the LAP to be located on the same subnet or even in the same geographic area as a WLC. As long as a WLC is available on the network and
the LAP is configured with the address of the WLC, the LAP will be able to connect to the WLC. DHCP option 43 can be used to automatically configure a LAP with
the IP address of one or more WLCs, even if those WLCs reside on a different IP subnet.

A LAP requires a WLC in order to function. If the WLC becomes unavailable, the LAP will reboot and drop all client associations until the WLC becomes available or
until another WLC is found on the network.

The WLC, not the LAP, determines which radio frequency (RF) channel should be used to transmit 802.11 frames in LWAPP operations. The WLC is responsible
for selecting the RF channel to use, determining the output power for each LAP, authenticating users, managing security policies, and determining the least used
LAP to associate with clients.

Reference:
Cisco: Lightweight AP (LAP) Registration to a Wireless LAN Controller (WLC): Background Information
Cisco: Lightweight Access Point FAQ
Cisco: Wireless LAN Controller and Lightweight Access Point Basic Configuration Example: Configure the Switch for the APs

QUESTION 14
View the Exhibit.

You administer the network shown above. You want to summarize the networks connected to RouterA so that a single route is inserted into RouterB's routing table.

Which of the following is the smallest summarization for the three networks?

A. 172.16.1.0/16
B. 172.16.1.0/18
C. 172.16.1.0/22
D. 172.16.1.0/23
E. 172.16.1.0/25

Correct Answer: C
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
The smallest summarization for the three networks connected to RouterA is 172.16.1.0/22. The /22 prefix is equivalent to a subnet mask of 255.255.252.0, which places the summary in the 172.16.0.0/22 address block. In this scenario, the Class B 172.16.0.0/16 network has been divided into 256 /24 subnets. Three of the first four subnets in the Class B
range have been assigned to network interfaces on RouterA: 172.16.0.0/24, 172.16.1.0/24, and 172.16.3.0/24. Absent from the network assignments is the
172.16.2.0/24 subnet. However, there is no way to summarize the address range without including the 172.16.2.0/24 subnet. Therefore, the smallest summarization
you can create would summarize four subnets into a single /22 subnet.

A /22 subnet creates 64 subnetworks capable of supporting 1,022 assignable host IP addresses each. The assignable address range of the 172.16.0.0/22 subnet begins with 172.16.0.1 and ends with 172.16.3.254. This range includes all possible assignable IP addresses in the /24 subnets that are directly connected to RouterA. It also includes all possible assignable IP addresses in the 172.16.2.0/24 subnet.
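The result can be confirmed with Python's standard ipaddress module by widening the prefix one bit at a time until all three connected subnets fit inside a single block:

```python
import ipaddress

# The three subnets directly connected to RouterA.
connected = [
    ipaddress.ip_network("172.16.0.0/24"),
    ipaddress.ip_network("172.16.1.0/24"),
    ipaddress.ip_network("172.16.3.0/24"),
]

# Widen the prefix one bit at a time until every subnet is covered.
summary = connected[0]
while not all(net.subnet_of(summary) for net in connected):
    summary = summary.supernet()

print(summary)  # 172.16.0.0/22
```

The loop stops at /22, and the resulting block necessarily also covers the unassigned 172.16.2.0/24 subnet, matching the reasoning above.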

Subnetting a contiguous address range in structured, hierarchical fashion enables routers to maintain smaller routing tables and eases administrative burden when
troubleshooting. Conversely, a discontiguous IP version 4 (IPv4) addressing scheme can cause routing tables to bloat because the subnets cannot be summarized.
Summarization minimizes the size of routing tables and advertisements and reduces a router's processor and memory requirements.

Summarizing the three /24 networks with a /16 subnet would create too large of a summarization, because the /16 subnet contains the entire Class B range of
172.16.0.0 IP addresses. The first assignable IP address in the 172.16.0.0/16 range is 172.16.0.1. The last assignable IP address is 172.16.255.254. The range
would therefore summarize 256 /24 subnets, not four.

Summarizing the three /24 networks with a /18 subnet would create too large of a summarization. A /18 subnet creates four possible subnets containing 16,382
assignable host IP addresses each. The first assignable IP address in the 172.16.0.0/18 range is 172.16.0.1. The last assignable IP address is 172.16.63.254. The
range would therefore summarize 64 /24 subnets, not four.

Summarizing the three /24 networks with a /23 subnet would create too small of a summarization. A /23 subnet creates 128 possible subnets containing 510
assignable host IP addresses each. The first assignable IP address in the 172.16.0.0/23 range is 172.16.0.1. The last assignable IP address is 172.16.1.254. This
range would therefore exclude the 172.16.3.0/24 subnet connected to RouterA.

Summarizing the three /24 networks with a /25 subnet would not work, because a /25 subnet divides the 172.16.0.0/24 subnet instead of summarizing. A /25 subnet
creates 512 possible subnets containing 126 assignable host IP addresses each. The first assignable IP address in the 172.16.0.0/25 range is 172.16.0.1. The last assignable IP address is 172.16.0.126. This subnet would therefore contain only half of one of the subnets that is directly connected to RouterA.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp. 311-312
Cisco: IP Addressing and Subnetting for New Users

QUESTION 15
Which of the following statements are true regarding an IDS? (Choose two.)

A. None of its physical interfaces can be in promiscuous mode.
B. It must have two or more monitoring interfaces.
C. It does not have an IP address assigned to its monitoring port.
D. It does not have a MAC address assigned to its monitoring port.
E. It cannot mitigate single-packet attacks.

Correct Answer: CE
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
An Intrusion Detection System (IDS) cannot mitigate single-packet attacks and does not have an IP address assigned to its monitoring port. An IDS is a network
monitoring device that passively monitors a copy of network traffic, not the actual packet. Typically, an IDS has a management interface and at least one monitoring
interface for each monitored network. Each monitoring interface operates in promiscuous mode and cannot be assigned an IP address; however, the monitoring interface does have a Media Access Control (MAC) address assigned to its monitoring port. Because an IDS does not reside in the path of network traffic, traffic does not flow through the IDS; therefore, the IDS cannot directly block malicious traffic before it passes into the network. However, an IDS can send alerts to a management station when it detects malicious traffic. For example, the IDS in the following diagram is connected to a Switch Port Analyzer (SPAN) interface on a switch outside the firewall.

This deployment enables the IDS to monitor all traffic flowing between the LAN and the Internet. However, the IDS will have insight only into LAN traffic that passes
through the firewall and will be unable to monitor LAN traffic that flows between virtual LANs (VLANs) on the internal switch. If the IDS in this example were to detect
malicious traffic, it would be unable to directly block the traffic but it would be able to send an alert to a management station on the LAN.

By contrast, an Intrusion Prevention System (IPS) is a network monitoring device that can mitigate single-packet attacks. An IPS requires at least two interfaces for
each monitored network: one interface monitors traffic entering the IPS, and the other monitors traffic leaving the IPS. Like an IDS, an IPS does not have an IP
address assigned to its monitoring ports. Because all monitored traffic must flow through an IPS, an IPS can directly block malicious traffic before it passes into the network. The IPS in the following diagram is deployed outside the firewall and can directly act on any malicious traffic between the LAN and the Internet.

Alternatively, an IPS can be deployed in promiscuous mode, which is also referred to as monitor-only mode. When operating in promiscuous mode, an IPS is
connected to a SPAN port and effectively functions as an IDS.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535
Cisco: Cisco IPS AIM

QUESTION 16
Which of the following statements are true regarding the distribution layer of the hierarchical network model? (Choose two.)

A. The distribution layer provides load balancing.


B. The distribution layer provides redundant paths to the default gateway.
C. The distribution layer provides fast convergence.
D. The distribution layer provides NAC.

Correct Answer: AB
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
The distribution layer provides load balancing and redundant paths to the default gateway. The hierarchical model divides the network into three distinct
components:
Core layer
Distribution layer
Access layer

The core layer of the hierarchical model provides fast convergence. The core layer typically provides the fastest switching path in the network. As the network
backbone, the core layer is primarily associated with low latency and high reliability. The functionality of the core layer can be collapsed into the distribution layer if
the distribution layer infrastructure is sufficient to meet the design requirements. The core layer does not contain physically connected hosts. For example, in a small enterprise campus implementation, a distinct core layer may not be required, because the network services normally provided by the core layer are provided
by a collapsed core layer instead.

The distribution layer serves as an aggregation point for access layer network links. Because the distribution layer is the intermediary between the access layer and
the core layer, the distribution layer is the ideal place to enforce security policies, to provide Quality of Service (QoS), and to perform tasks that involve packet
manipulation, such as routing. Summarization and next-hop redundancy are also performed in the distribution layer.

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that prevents hosts from accessing the network if they do not comply with
organizational requirements, such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically discovering and inventorying devices
attached to the LAN. The access layer serves as a media termination point for endpoints, such as servers and hosts. Because access layer devices provide access
to the network, the access layer is the ideal place to perform user authentication.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Distribution Layer, pp. 43-44
Cisco: Campus Network for High Availability Design Guide: Distribution Layer

QUESTION 17
Which of the following is a routing protocol that requires a router that operates in the same AS in order to establish a neighbor relationship?

A. BGP
B. EIGRP
C. HSRP
D. static routes

Correct Answer: B
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Enhanced Interior Gateway Routing Protocol (EIGRP) requires a router that operates in the same autonomous system (AS) in order to establish a neighbor
relationship, which is also known as an EIGRP adjacency. EIGRP routers establish adjacencies by sending Hello packets to the multicast address 224.0.0.10.
EIGRP for IP version 6 (IPv6) routers can use IPv6 link-local addresses to reach neighbors.

Hello packets verify that two-way communication exists between routers. As soon as a router receives an EIGRP Hello packet, the router will attempt to establish an
adjacency with the router that sent the packet. Unlike OSPF, EIGRP does not go through neighbor states; a neighbor relationship is established upon receipt of an
EIGRP Hello packet.

An EIGRP router can form an adjacency with another router only if the following values match:

AS number
K values, which are used to configure the EIGRP metric
Authentication parameters, if configured

In addition, if the routers are using IP, the primary IP addresses for the routers' connected interfaces must be on the same IP subnet.
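These match conditions can be sketched as a simple check. The dictionary fields and default K values (K1=1, K3=1, all others 0) below are illustrative stand-ins for the actual EIGRP parameters, and the same-subnet requirement is omitted for brevity:

```python
# Illustrative sketch of the EIGRP adjacency checks described above:
# AS number, K values, and (if configured) authentication must all match.
def can_form_adjacency(local: dict, remote: dict) -> bool:
    return (
        local["as_number"] == remote["as_number"]
        and local["k_values"] == remote["k_values"]
        and local.get("auth") == remote.get("auth")
    )

# K values listed as (K1, K2, K3, K4, K5); these are the EIGRP defaults.
r1 = {"as_number": 100, "k_values": (1, 0, 1, 0, 0), "auth": None}
r2 = {"as_number": 100, "k_values": (1, 0, 1, 0, 0), "auth": None}
r3 = {"as_number": 200, "k_values": (1, 0, 1, 0, 0), "auth": None}

print(can_form_adjacency(r1, r2))  # True: same AS, K values, and auth
print(can_form_adjacency(r1, r3))  # False: AS number mismatch
```

The AS-number check in the first comparison is exactly the requirement the question asks about: EIGRP neighbors must operate in the same AS.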

Border Gateway Protocol (BGP) does not require a router that operates in the same AS in order to establish a neighbor relationship. Because BGP does not use a
neighbor discovery process like many other routing protocols, every peer is manually configured and must be reachable through Transmission Control Protocol
(TCP) port 179. Once a peer has been configured with the neighbor remote-as command, the local BGP speaker will attempt to transmit an OPEN message to the
remote peer. If the OPEN message is not blocked by existing firewall rules or other security mechanisms, the remote peer will respond with a KEEPALIVE message
and will continue to periodically exchange KEEPALIVE messages with the local peer. A BGP speaker will consider a peer dead if a KEEPALIVE message is not
received within a period of time specified by a hold timer. Routing information is then exchanged between peers by using UPDATE messages. UPDATE messages
can include advertised routes and withdrawn routes. Withdrawn routes are those that are no longer considered feasible. Statistics regarding the number of BGP
messages, such as UPDATE messages, can be viewed in the output of the show ip bgp neighbors command.
Hot Standby Router Protocol (HSRP) is a First Hop Redundancy Protocol (FHRP), not a routing protocol. Therefore, an HSRP router does not establish a neighbor
relationship with another HSRP router. The active and standby routers in an HSRP configuration do send Hello packets to establish roles and determine availability.
Typically, HSRP routers are connected together on the same LAN and are therefore operating in the same AS.

Static routes are manually configured on individual routers and remain in the routing table even if the path is not valid. Therefore, static routes do not establish
neighbor relationships with other routers. A static route can exist regardless of the AS in which the routers are operating.

Reference:
Cisco: Cisco IOS IP Configuration Guide, Release 12.2: Configuring EIGRP

QUESTION 18
Which of the following can you use to hide the IP addresses of hosts on an internal network when transmitting packets to an external network, such as the Internet?

A. a DMZ
B. WPA
C. an ACL
D. NAT

Correct Answer: D
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
You can use Network Address Translation (NAT) to hide the IP addresses of hosts on an internal network when transmitting packets to an external network, such as the Internet. NAT is used to translate private IP addresses to public IP addresses. Private-to-public address translation enables hosts on a privately addressed
internal network to communicate with hosts on a public network, such as the Internet. Typically, internal networks use private IP addresses, which are not globally
routable. In order to enable communication with hosts on the Internet, which use public IP addresses, NAT translates the private IP addresses to a public IP
address. Port Address Translation (PAT) can further refine what type of communication is allowed between an externally facing resource and an internally facing
resource by designating the port numbers to be used during communication. PAT can create multiple unique connections between the same external and internal
resources.

You cannot use a demilitarized zone (DMZ) to hide the IP addresses of hosts on an internal network when transmitting packets to an external network. A DMZ is a
network segment that is used as a boundary between an internal network and an external network, such as the Internet. A DMZ network segment is typically used
with an access control method to permit external users to access specific externally facing servers, such as web servers and proxy servers, without providing
access to the rest of the internal network. This helps limit the attack surface of a network.

You cannot use Wi-Fi Protected Access (WPA) to hide the IP addresses of hosts on an internal network when transmitting packets to an external network. WPA is a
wireless standard that is used to encrypt data transmitted over a wireless network. WPA was designed to address weaknesses in Wired Equivalent Privacy (WEP)
by using a more advanced encryption method called Temporal Key Integrity Protocol (TKIP). TKIP provides 128-bit encryption, key hashing, and message integrity
checks. TKIP can be configured to change keys dynamically, which increases wireless network security.

You cannot use an access control list (ACL) to hide the IP addresses of hosts on an internal network when transmitting packets to an external network. ACLs are
used to control packet flow across a network. They can either permit or deny packets based on source network, destination network, protocol, or destination port.
Each ACL can only be applied to a single protocol per interface and per direction. Multiple ACLs can be used to accomplish more complex packet flow throughout
an organization. For example, you could use an ACL on a router to restrict a specific type of traffic, such as Telnet sessions, from passing through a corporate
network.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

QUESTION 19
Which of the following statements is true regarding the service-port interface on a Cisco WLC?

A. It is used for client data transfer.


B. It is used for in-band management.
C. It is used for out-of-band management.
D. It is used for Layer 3 discovery operations.
E. It is used for Layer 2 discovery operations.

Correct Answer: C
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
The service-port interface on a Cisco wireless LAN controller (WLC) is used for out-of-band management. A WLC interface is a logical interface that can be
mapped to at least one physical port. The port mapping is typically implemented as a virtual LAN (VLAN) on an 802.1Q trunk. A WLC has five interface types:
Management interface
Service-port interface
Access point (AP) manager interface
Dynamic interface
Virtual interface

The management interface is used for in-band management, for Layer 2 discovery operations, and for enterprise services such as authentication, authorization, and
accounting (AAA). The AP manager interface is used for Layer 3 discovery operations and handles all Layer 3 communications between the WLC and an
associated AP.

The virtual interface is a special interface used to support wireless client mobility. The virtual interface acts as a Dynamic Host Configuration Protocol (DHCP) server
placeholder and supports DHCP relay functionality. In addition, the virtual interface is used to implement Layer 3 security, such as redirects for a web authentication
login page.

The dynamic interface type is used to map VLANs on the WLC for wireless client data transfer. A WLC can support up to 512 dynamic interfaces mapped onto an
802.1Q trunk on a physical port or onto multiple ports configured as a single port group using link aggregation (LAG).

Reference:
CCDA 200-310 Official Cert Guide, Chapter 4, WLC Interface Types, pp. 184-185
Cisco: Cisco Wireless LAN Controller Configuration Guide, Release 7.4: Information About Interfaces

QUESTION 20
Which of the following statements regarding WMM is true?


A. Voice traffic is assigned to the Gold access category.


B. Unassigned traffic is treated as though it were assigned to the Silver access category.
C. Best-effort traffic is assigned to the Bronze access category.
D. WMM is not compatible with the 802.11e standard.

Correct Answer: B
Section: Considerations for Expanding an Existing Network Explanation

Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Wi-Fi Multimedia (WMM) treats unassigned traffic as though it were assigned to the Silver access category. WMM is a subset of the 802.11e wireless standard,
which adds Quality of Service (QoS) features to the existing wireless standards. WMM was initially created by the Wi-Fi Alliance while the 802.11e proposal was
awaiting approval by the Institute of Electrical and Electronics Engineers (IEEE).

The 802.11e standard defines eight priority levels for traffic, numbered from 0 through 7. WMM reduces the eight 802.11e priority levels into four access categories,
which are Voice (Platinum), Video (Gold), Best-Effort (Silver), and Background (Bronze). On WMM-enabled networks, these categories are used to prioritize traffic.
Packets tagged as Voice (Platinum) packets are typically given priority over packets tagged with lower-level priorities. Packets that have not been assigned to a
category are treated as though they had been assigned to the Best-Effort (Silver) category.
When a lightweight access point (LAP) receives a frame with an 802.11e priority value from a WMM-enabled client, the LAP ensures that the 802.11e priority value
is within the acceptable limits provided by the QoS policy assigned to the wireless client. After the LAP polices the 802.11e priority value, it maps the 802.11e priority
value to the corresponding Differentiated Services Code Point (DSCP) value and forwards the frame to the wireless LAN controller (WLC). The WLC will then
forward the frame with its DSCP value to the wired network.
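The priority reduction described above can be sketched as a simple lookup. The grouping below follows the standard WMM mapping of the eight 802.11e user priorities into the four access categories; the function name and labels are illustrative:

```python
# Standard WMM grouping of the eight 802.11e user priorities (0-7)
# into four access categories, labeled with Cisco's QoS names.
WMM_ACCESS_CATEGORY = {
    1: "Background (Bronze)", 2: "Background (Bronze)",
    0: "Best-Effort (Silver)", 3: "Best-Effort (Silver)",
    4: "Video (Gold)", 5: "Video (Gold)",
    6: "Voice (Platinum)", 7: "Voice (Platinum)",
}

def access_category(priority=None):
    """Map an 802.11e priority to a WMM access category.

    Unassigned traffic (priority is None) is treated as though it
    were assigned to the Best-Effort (Silver) category.
    """
    return WMM_ACCESS_CATEGORY.get(priority, "Best-Effort (Silver)")
```

Note that unassigned traffic falls through to Silver, which is the behavior the question tests.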

Reference:
CCDA 200-310 Official Cert Guide, Chapter 5, Wireless and Quality of Service (QoS), pp. 197-199
Cisco: Cisco Unified Wireless QoS

QUESTION 21
The network you administer contains the following network addresses:
10.0.4.0/24
10.0.5.0/24
10.0.6.0/24
10.0.7.0/24

You want to summarize these network addresses with a single summary address.

Which of the following addresses should you use?

A. 10.0.0.0/21
B. 10.0.4.0/22
C. 10.0.4.0/23
D. 10.0.4.0/24
E. 10.0.4.0/25
F. 10.0.4.0/26

Correct Answer: B
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
You should use the 10.0.4.0/22 address to summarize the network addresses 10.0.4.0/24, 10.0.5.0/24, 10.0.6.0/24, and 10.0.7.0/24. The /22 notation indicates that
a 22-bit subnet mask (255.255.252.0) is used, which can summarize two /23 networks, four /24 networks, eight /25 networks, and so on. The process of
summarizing multiple subnets with a single address is called supernetting.

You should not use the 10.0.0.0/21 address to summarize the network addresses. The /21 notation indicates that a 21-bit subnet mask (255.255.248.0) is used,
which can summarize two /22 networks, four /23 networks, eight /24 networks, and so on. Although the 10.0.0.0/21 address does include the four network
addresses on your network, it also includes the 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24 networks. Whenever possible, you should summarize
addresses to the smallest possible bit boundary.

You cannot use the 10.0.4.0/23 address to summarize the network addresses. The /23 notation indicates that a 23-bit subnet mask (255.255.254.0) is used, which
can summarize two /24 networks, four /25 networks, eight /26 networks, and so on. Therefore, the 10.0.4.0/23 address only summarizes the 10.0.4.0/24 and
10.0.5.0/24 networks. The 10.0.6.0/23 address would be required to summarize the remaining 10.0.6.0/24 and 10.0.7.0/24 networks.

You cannot use the 10.0.4.0/24 address to summarize the network addresses. The /24 notation indicates that a 24-bit subnet mask (255.255.255.0) is used, which
can summarize two /25 networks, four /26 networks, eight /27 networks, and so on. However, a 24-bit summary address cannot summarize multiple /24 networks.

You cannot use the 10.0.4.0/25 address to summarize the network addresses. A 25-bit mask is used to subnet a /24 network into two subnets; it cannot be used to
supernet multiple /24 networks.

You cannot use the 10.0.4.0/26 address to summarize the network addresses. A 26-bit mask is used to subnet a /24 network into four subnets; it cannot be used to
supernet multiple /24 networks.
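The arithmetic above can be checked with Python's standard ipaddress module, which collapses contiguous networks into the smallest covering supernets:

```python
import ipaddress

# The four contiguous /24 networks from the question.
networks = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4, 8)]

# collapse_addresses merges adjacent networks: the two /23 pairs
# (10.0.4.0/23 and 10.0.6.0/23) combine into a single /22.
summary = list(ipaddress.collapse_addresses(networks))
print(summary)  # [IPv4Network('10.0.4.0/22')]

# A /23 starting at 10.0.4.0 covers only the first two /24s,
# which is why 10.0.4.0/23 cannot be the answer.
assert not ipaddress.ip_network("10.0.6.0/24").subnet_of(
    ipaddress.ip_network("10.0.4.0/23"))
```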

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310
Cisco: IP Routing Frequently Asked Questions: Q. What does route summarization mean?
Cisco: IP Addressing and Subnetting for New Users

QUESTION 22
You want to implement a WAN link between two sites.

Which of the following WAN solutions would not offer a guaranteed level of service?

A. GRE tunnel through the Internet


B. ATM virtual circuit

C. Frame Relay virtual circuit
D. MPLS overlay VPN

Correct Answer: A
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
A Generic Routing Encapsulation (GRE) tunnel through the Internet would not offer a guaranteed level of service. GRE is a tunneling protocol designed to
encapsulate any Layer 3 protocol for transport through an IP network. Although a GRE tunnel can be used to connect to sites across a public network, such as the
Internet, GRE does not have any inherent Quality of Service (QoS) mechanisms that can guarantee a level of service to any of the packets that flow through the
tunnel. Because any traffic that flows through the Internet is delivered on a best-effort basis, WAN solutions that use the Internet, such as GRE tunnels, are better
suited as backup strategies for WAN links that can guarantee a level of service.

Asynchronous Transfer Mode (ATM) and Frame Relay virtual circuits can provide a guaranteed level of service. Because ATM and Frame Relay virtual circuits pass
through a network that has inherent QoS capabilities, each virtual circuit can guarantee a level of service to its endpoints. The service provider network is
responsible for ensuring that the service level agreement (SLA) for each circuit is maintained at all times.

Similarly, a Multiprotocol Label Switching (MPLS) overlay virtual private network (VPN) can provide a guaranteed level of service. MPLS overlay VPNs are provided
by a service provider and are established on an infrastructure that can ensure a level of service for all traffic that passes through the service provider network.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, WAN Backup over the Internet, pp. 263-264

QUESTION 23
Which of the following standard or standards natively include PortFast, UplinkFast, and BackboneFast?

A. 802.1s
B. 802.1w
C. 802.1D
D. 802.1D and 802.1s
E. 802.1D and 802.1w

Correct Answer: B
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
The 802.1w Rapid Spanning Tree Protocol (RSTP) standard natively includes PortFast, UplinkFast, and BackboneFast. PortFast enables a port to immediately
access the network by transitioning the port into the Spanning Tree Protocol (STP) forwarding state without passing through the listening and learning states.
Configuring BPDU filtering on a port that is also configured for PortFast causes the port to ignore any bridge protocol data units (BPDUs) it receives, effectively
disabling STP.

UplinkFast increases convergence speed for an access layer switch that detects a failure on its root port by immediately replacing the root port with a preselected alternate root port. BackboneFast increases convergence speed for switches that detect a failure on links that are not directly connected to the switch.

802.1D is the traditional STP implementation to prevent switching loops on a network. Traditional STP, which Cisco training and reference materials refer to simply
as 802.1D, is more formally known as the 802.1D-1998 standard. Although PortFast, UplinkFast, and BackboneFast can be used with 802.1D, it does not contain
those features natively. Traditional STP converges slowly, so the 802.1w RSTP standard was developed by the Institute of Electrical and Electronics Engineers
(IEEE) to address the slow transition of an 802.1D port to the forwarding state. RSTP is backward compatible with STP, but the convergence benefits provided by
RSTP are lost when RSTP interacts with STP devices. The features of 802.1w, including PortFast, UplinkFast, and BackboneFast, were integrated into the
802.1D-2004 standard, and the traditional STP algorithm was replaced with RSTP.

The 802.1s Multiple Spanning Tree (MST) standard is used to create multiple spanning tree instances on a network. Implementing MST on a switch also
implements RSTP. However, the 802.1s standard does not natively include PortFast, UplinkFast, and BackboneFast within the specification.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 3, Cisco STP Toolkit, pp. 103-105
Cisco: Understanding Rapid Spanning Tree Protocol (802.1w): Conclusion

QUESTION 24
Which of the following network virtualization techniques does Cisco recommend for any-to-any connectivity in large networks?

A. VRF-Lite
B. Multi-VRF
C. EVN
D. MPLS

Correct Answer: D
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Cisco recommends Multiprotocol Label Switching (MPLS) as a network virtualization technique for any-to-any connectivity in large networks. MPLS is typically
implemented in an end-to-end fashion at the network edge and requires the edge and core devices to be MPLS-capable. MPLS can support thousands of virtual
networks (VNETs) over a full-mesh topology to provide any-to-any connectivity without requiring excessive operational complexity or management resources.
Although MPLS is best suited for large networks, integrating MPLS into an existing design and infrastructure can be disruptive, particularly if MPLS-incapable
devices must be replaced with MPLS-capable devices at the network edge or in the core.

The Multi-virtual routing and forwarding (Multi-VRF) network virtualization technique, which Cisco also refers to as VRF-Lite, is best suited for small or medium
networks. Multi-VRF uses virtual routing and forwarding (VRF) instances to segregate a Layer 3 network. Multi-VRF is typically used to support one-to-one, end-to-end connections; however, Multipoint Generic Routing Encapsulation (mGRE) tunnels could be used to create any-to-any connectivity in small networks. Cisco
considers a full mesh of mGRE tunnels in larger networks impractical because of the increased operational complexity and management load. On Cisco platforms,
Multi-VRF network virtualization supports up to eight VNETs before operational complexity and management become problematic. The VNETs created by Multi-
VRF mirror the physical infrastructure upon which they are built, and most Cisco platforms support Multi-VRF; therefore, the general network design and overall
infrastructure do not require disruptive changes in order to support a Multi-VRF overlay topology.

Newer Cisco platforms support Easy Virtual Networking (EVN), which is a network virtualization technique that also uses VRFs to segregate Layer 3 networks. EVN supports
up to 32 VNETs before operational complexity and management become problematic. Cisco recommends using EVN instead of Multi-VRF in small and medium
networks. Although EVN is backward-compatible with Multi-VRF, implementing a homogeneous EVN topology would require replacing unsupported hardware with
EVN-capable devices. Replacing infrastructure is typically disruptive and may require additional modifications to the existing network design.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 4, VRF, p. 154
Cisco: Borderless Campus Network Virtualization-Path Isolation Design Fundamentals: Path Isolation

QUESTION 25
DRAG DROP
Drag the event action on the left to the IPS mode that supports it on the right. Use all event actions. Some boxes will not be filled.

Select and Place:

Correct Answer:

Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Promiscuous mode enables Cisco Intrusion Prevention System (IPS) to examine traffic on ports from multiple network segments without being directly connected to
those segments. Copies of traffic are forwarded to IPS for analysis instead of flowing through IPS directly. Therefore, promiscuous mode increases latency because
the amount of time IPS takes to determine whether a network attack is in progress can be greater in promiscuous mode than when IPS is operating in inline mode.
The greater latency means that an attack has a greater chance at success prior to detection.

IPS can use all of the following actions to mitigate a network attack in promiscuous mode:
Request block host: causes IPS to send a request to the Attack Response Controller (ARC) to block all communication from the attacking host for a given
period of time
Request block connection: causes IPS to send a request to the ARC to block the specific connection from the attacking host for a given period of time
Reset TCP connection: clears Transmission Control Protocol (TCP) resources so that normal TCP network activity can be established

IPS in promiscuous mode requires Remote Switched Port Analyzer (RSPAN). RSPAN enables the monitoring of traffic on a network by capturing and sending traffic
from a source port on one device to a destination port on a different device on a non-routed network. Inline mode enables IPS to examine traffic as it flows through
the IPS device. Therefore, the IPS device must be directly connected to the network segment that it is intended to protect. Any traffic that should be analyzed by IPS

must be to a destination that is separated from the source by the IPS device.

IPS can use all of the following actions to mitigate a network attack in inline mode:
Deny attacker inline: directly blocks all communication from the attacking host
Deny attacker service pair inline: directly blocks communication between the attacker and a specific port
Deny attacker victim pair inline: directly blocks communication that occurs on any port between the attacker and a specific host
Deny connection inline: directly blocks communication for a specific TCP session
Deny packet inline: directly blocks the transmission of a specific type of packet from an attacking host
Modify packet inline: allows IPS to change or remove the malicious contents of a packet

IPS in inline mode mitigates attacks for 60 minutes by default. IPS in promiscuous mode mitigates attacks for 30 minutes by default. However, the mitigation effect
time for both inline mode and promiscuous mode can be configured by an IPS administrator.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535
Cisco: Cisco IPS Mitigation Capabilities: Event Actions

QUESTION 26
Which of the following statements are correct regarding network design approaches? (Choose two.)

A. The top-down approach is recommended over the bottom-up approach.


B. The top-down approach is more time-consuming than the bottom-up approach.
C. The top-down approach can lead to costly redesigns.
D. The bottom-up approach focuses on applications and services.
E. The bottom-up approach provides a "big picture" overview.
F. The bottom-up approach incorporates organizational requirements.

Correct Answer: AB
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
The top-down approach to network design is recommended over the bottom-up approach, and the top-down approach is more time-consuming than the bottom-up
approach. The top-down design approach takes its name from the methodology of starting with the higher layers of the Open Systems Interconnection (OSI) model,
such as the Application, Presentation, and Session layers, and working downward toward the lower layers. The top-down design approach is more time-consuming
than the bottom-up design approach because the top-down approach requires a thorough analysis of the organization's requirements. Once the designer has
obtained a complete overview of the existing network and the organization's needs, in terms of applications and services, the designer can provide a design that
meets the organization's current requirements and that can adapt to the organization's projected future needs. Because the resulting design includes room for future

growth, costly redesigns are typically not necessary with the top-down approach to network design.

By contrast, the bottom-up approach can be much less time-consuming than the top-down design approach. The bottom-up design approach takes its name from
the methodology of starting with the lower layers of OSI model, such as the Physical, Data Link, Network, and Transport layers, and working upward toward the
higher layers. The bottom-up approach relies on previous experience rather than on a thorough analysis of organizational requirements or projected growth. In
addition, the bottom-up approach focuses on the devices and technologies that should be implemented in a design, instead of focusing on the applications and
services that will actually use the network. Because the bottom-up approach does not use a detailed analysis of an organization's requirements, the bottom-up
design approach can often lead to costly network redesigns. Cisco does not recommend the bottom-up design approach, because the design does not provide a
"big picture" overview of the current network or its future requirements.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25
Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach Comparison (Flash)

QUESTION 27
View the Exhibit.

You have been asked to use CDP to document the network shown in the diagram above. You are working from HostA, which is connected to the console port of

SwitchA. You connect to SwitchA and issue the show cdp neighbors and show cdp neighbors detail commands.

Which of the following statements are correct? (Choose two.)

A. The show cdp neighbors detail command will show all of the host IP addresses in use on HostA's LAN.
B. The show cdp neighbors command will show which port on SwitchB connects to SwitchA.
C. The show cdp neighbors command will show two devices connected to SwitchA.
D. The show cdp neighbors detail command will show information for all Cisco devices on the network.
E. The show cdp neighbors detail command will display all of RouterA's IP addresses.

Correct Answer: BC
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
The show cdp neighbors command will display the directly connected Cisco devices that are sending Cisco Discovery Protocol (CDP) updates; the directly
connected devices in this case are RouterA and SwitchB. The port ID of the sending device will be displayed by the show cdp neighbors command. Therefore, the
show cdp neighbors command will show which port on SwitchB and which interface on RouterA connect to SwitchA. CDP is used to collect information about
neighboring Cisco devices and is enabled by default. Because CDP operates at the Data Link layer, which is Layer 2 of the Open Systems Interconnection (OSI)
model, CDP is not dependent on any particular Layer 3 protocol addressing, such as IP addressing. Therefore, if CDP information is not being exchanged between
devices, you should check for Physical layer and Data Link layer connectivity problems. You can globally disable CDP by issuing the no cdp run command in global configuration mode. You can disable CDP on a per-interface basis by issuing the no cdp enable command in interface
configuration mode.

The show cdp neighbors detail command will not show information for all of the Cisco devices on the network. The only devices that will send CDP information are
the directly connected devices.
The show cdp neighbors detail command will not display all of RouterA's IP addresses. Updates sent from RouterA and received by SwitchA will include only the IP
address of the port that sent the update.

The show cdp neighbors detail command will not show all of the IP addresses of hosts on the LAN. Hosts do not send CDP information; only directly connected
Cisco devices send CDP updates.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629
Cisco: Cisco IOS Configuration Fundamentals Command Reference, Release 12.2: show cdp neighbors

QUESTION 28
Which of the following prefixes will an IPv6-enabled computer use to automatically configure an IPv6 address for itself?


A. 2000::/3
B. FC00::/7
C. FE80::/10
D. FF00::/8

Correct Answer: C
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
An IP version 6 (IPv6) enabled computer will use the prefix FE80::/10 to automatically configure an IPv6 address for itself. The IPv6 prefix FE80::/10 is used for
unicast link-local addresses. IPv6 addresses in the FE80::/10 range begin with the characters FE80 through FEBF. Unicast packets are used for one-to-one
communication. Link-local addresses are unique only on the local segment. Therefore, link-local addresses are not routable. Unicast link-local addresses are used
for neighbor discovery and for environments in which no router is present to provide a routable IPv6 prefix.

IPv6 was developed to address the lack of available address space with IPv4. An IPv6 address is a 128-bit (16-byte) address that is typically written as eight groups
of four hexadecimal characters, including numbers from 0 through 9 and letters from A through F. Each group of four characters is separated by colons. Leading
zeroes in each group can be dropped. A double colon can be used at the beginning, middle, or end of an IPv6 address in place of one or more contiguous four
character groups consisting of all zeroes. However, only one double colon can be used in an IPv6 address. Therefore, the following IPv6 addresses are equivalent:
FE80:0000:0000:070D:0000:50A0:0001:0024
FE80::070D:0000:50A0:0001:0024
FE80:0:0:70D:0:50A0:1:24
FE80::70D:0:50A0:1:24
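Python's standard ipaddress module can confirm that the four notations above name the same address, and that the address falls within the link-local prefix FE80::/10:

```python
import ipaddress

# The four equivalent notations from the example above.
forms = [
    "FE80:0000:0000:070D:0000:50A0:0001:0024",
    "FE80::070D:0000:50A0:0001:0024",
    "FE80:0:0:70D:0:50A0:1:24",
    "FE80::70D:0:50A0:1:24",
]

addresses = [ipaddress.IPv6Address(f) for f in forms]

# All four notations parse to the same canonical address...
assert len(set(addresses)) == 1
print(addresses[0])  # fe80::70d:0:50a0:1:24

# ...and that address is within the unicast link-local range.
assert addresses[0] in ipaddress.ip_network("FE80::/10")
```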

An IPv6-enabled computer will not use the prefix 2000::/3 to automatically configure an IPv6 address for itself. The IPv6 prefix 2000::/3 is used for global
aggregatable unicast addresses. IPv6 addresses in the 2000::/3 range begin with the characters 2000 through 3FFF. Global aggregatable unicast address prefixes
are distributed by the Internet Assigned Numbers Authority (IANA) and are globally routable over the Internet. Because there is an inherent hierarchy in the
aggregatable global address scheme, these addresses lend themselves to simple consolidation, which greatly reduces the complexity of Internet routing tables.

An IPv6-enabled computer will not use the prefix FC00::/7 to automatically configure an IPv6 address for itself. The IPv6 prefix FC00::/7 is used for unicast unique-
local addresses. IPv6 addresses in this range begin with the characters FC00 through FDFF. Unique-local addresses are not globally routable, but they are routable

within an organization.

An IPv6-enabled computer will not use the prefix FF00::/8 to automatically configure an IPv6 address for itself. The IPv6 prefix FF00::/8 is used for multicast
addresses, which are used for one-to-many communication. IPv6 addresses in the FF00::/8 range begin with the characters FF00 through FFFF. However, certain
address ranges are used to indicate the scope of the multicast address. The following IPv6 multicast scopes are defined:
FF01::/16 - node-local
FF02::/16 - link-local
FF05::/16 - site-local
FF08::/16 - organization-local
FF0E::/16 - global

Reference:
CCDA 200-310 Official Cert Guide, Chapter 9, Link-Local Addresses, p. 343
CCDA 200-310 Official Cert Guide, Chapter 9, SLAAC of Link-Local Address, p. 350
Cisco: IPv6: A Primer for Physical Security Professionals

QUESTION 29
Which of the following does NetFlow use to identify a traffic flow?

A. only Layer 2 information


B. only Layer 3 information
C. only Layer 4 information
D. Layer 2 and Layer 3 information
E. Layer 3 and Layer 4 information
F. Layer 4 through 7 information

Correct Answer: E
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
NetFlow uses Open Systems Interconnection (OSI) Layer 3 and Layer 4 information to identify a traffic flow. NetFlow is a Cisco IOS feature that can be used to
gather flow-based statistics, such as packet counts, byte counts, and protocol distribution. A device configured with NetFlow examines packets for select Layer 3
and Layer 4 attributes that uniquely identify each traffic flow. A traffic flow can be identified based on the unique combination of the following seven attributes:
Source IP address
Destination IP address
Source port number
Destination port number

Protocol value
Type of Service (ToS) value
Input interface

The data gathered by NetFlow is typically exported to management software. You can then analyze the data to facilitate network planning, customer billing, and
traffic engineering. For example, NetFlow can be used to obtain information about the types of applications generating traffic flows through a router.
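A minimal sketch of flow identification groups packets by the seven key fields listed above. The dictionary layout of a packet record here is a hypothetical stand-in for parsed header fields, not a NetFlow API:

```python
from collections import Counter

# The seven NetFlow key fields; protocol 6 = TCP, 17 = UDP.
KEY_FIELDS = ("src_ip", "dst_ip", "src_port", "dst_port",
              "protocol", "tos", "input_if")

def flow_key(packet):
    """Return the 7-tuple that uniquely identifies a packet's flow."""
    return tuple(packet[f] for f in KEY_FIELDS)

packets = [
    {"src_ip": "10.1.1.5", "dst_ip": "192.0.2.10", "src_port": 49152,
     "dst_port": 80, "protocol": 6, "tos": 0, "input_if": "Gi0/1"},
    {"src_ip": "10.1.1.5", "dst_ip": "192.0.2.10", "src_port": 49152,
     "dst_port": 80, "protocol": 6, "tos": 0, "input_if": "Gi0/1"},
    {"src_ip": "10.1.1.6", "dst_ip": "192.0.2.10", "src_port": 33000,
     "dst_port": 53, "protocol": 17, "tos": 0, "input_if": "Gi0/2"},
]

# The first two packets share every key field, so they belong to one
# flow; the DNS packet differs and forms a second flow.
flows = Counter(flow_key(p) for p in packets)
print(len(flows))  # 2
```

A change in any one of the seven fields, even just the input interface, produces a distinct flow.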

NetFlow does not use Layer 2 information, such as a packet's source Media Access Control (MAC) address, to identify a traffic flow. Although the input interface is considered when identifying a traffic flow, the MAC address of the interface is not.

Network-Based Application Recognition (NBAR), not NetFlow, uses Layer 4 through 7 information to classify application traffic. NBAR is a Quality of Service (QoS)
feature that enables a device to perform deep packet inspection for all packets that pass through an NBAR-enabled interface. With deep packet inspection, an
NBAR-enabled device can classify traffic based on the content of a Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP) packet, instead of just
the network header information. In addition, NBAR can provide statistical reporting relative to each recognized application.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 15, NetFlow, pp. 626-628
Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: Capturing Traffic Data

QUESTION 30
Which of the following is a Layer 2 high-availability feature?

A. NSF
B. UDLD
C. SPF
D. FHRP

Correct Answer: B
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
UniDirectional Link Detection (UDLD) is a Layer 2 high-availability (HA) feature. UDLD monitors a link to verify that both ends of the link are functioning. UDLD
operates by sending messages across the link. When a port receives a UDLD message, the port responds by sending an echo message to verify that the link is
bidirectional. Layer 2 HA features, such as UDLD, Spanning Tree Protocol (STP), and IEEE 802.3ad link aggregation, increase network resiliency and are often
integral components in redundant topology designs.

Shortest Path First (SPF), First-Hop Redundancy Protocol (FHRP), and nonstop forwarding (NSF) are Layer 3 HA features, not Layer 2 HA features. SPF uses an

efficient algorithm to determine the optimal Layer 3 path to a destination within a routing domain. FHRP provides gateway resiliency for hosts. NSF provides graceful
restart provisions for common routing protocols to ensure fast convergence and uninterrupted Layer 3 forwarding during failure events, such as supervisor module
failure and switchover.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 4, Virtualization Technologies, pp. 153-157
Cisco: Campus 3.0 Virtual Switching System Design Guide: VSS Architecture and Operation

QUESTION 31
Which of the following statements is true regarding VMs?

A. VMs running on a host computer must run the same version of an OS as the host computer.
B. Multiple VMs can be running simultaneously on a single host computer.
C. Installing virus protection on the host computer automatically protects any VMs running on that host computer.
D. All software is shared among the host computer and the VMs.

Correct Answer: B
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Multiple virtual machines (VMs) can be running simultaneously on a single host computer. A VM is an isolated environment running a separate operating system
(OS) while sharing hardware resources with a host machine's OS. For example, you can configure a Windows 7 VM that can run within Windows 8; both OSs can run at the same time if virtualization software, such as Microsoft Hyper-V, is used. The Windows 7 VM could then be used as a testing environment for patch or
application deployment.

Depending on a computer's hardware capabilities, multiple VMs can be installed on a single computer, which can help provide more efficient utilization of hardware
resources. For example, VMware ESXi Server provides a hypervisor that runs on bare metal, meaning without a host OS, and that can efficiently manage multiple
VMs on a single server. A VM can access the physical network through a network adapter shared by the host computer. Alternatively, a VM could access virtualized
networking devices on the host, such as routers or switches, to access network resources.

Before a VM is installed, it is important to ensure that the host hardware has enough CPU capacity and random access memory (RAM) to support the simultaneous use of multiple OSs, and that the client from which you access the VM has sufficient network bandwidth.

The VMs on a host computer can, but are not required to, run the same version of an OS as the host computer. For example, you can install Windows 8 on a VM
that is hosted on a Windows 8 computer. Alternatively, as in the example given previously, you can configure a Windows 7 VM that can run within Windows 8.

Installing virus protection on the host computer will not automatically protect any VMs running on that host computer. Securing the host computer does not secure all
virtual computers running on that host computer. You must manually manage the security of each VM installed on a host computer. For example, installing patches
and security software on the host computer will not also configure the patches and software to be installed on the VMs.

Although a VM shares the hardware resources of the host computer, the software remains separate. Software installed on the host is not accessible from within the
VM. For example, Microsoft Office might be installed on the host computer, but in order to access Microsoft Office from within a VM you must also install Microsoft
Office on the VM. Separate instances of software on the host computer and on each VM can help protect the host computer from potentially harmful changes made
within a VM. For example, if a VM user accidentally deletes a system file or installs malicious software, the host computer will not be affected. This applies to drivers
as well; if the network adapter driver is removed from the VM, the host computer and the other VMs will not be affected.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 4, Server Virtualization, p. 155

QUESTION 32
Which of the following are true of the access layer of a hierarchical design? (Choose two.)

A. It provides address summarization.


B. It aggregates LAN wiring closets.
C. It aggregates WAN connections.
D. It isolates the distribution and core layers.
E. It is also known as the backbone layer.
F. It performs Layer 2 switching.
G. It performs NAC for end users.

Correct Answer: FG
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation
Explanation:
The access layer typically performs Layer 2 switching and Network Admission Control (NAC) for end users. The access layer is the network hierarchical layer where
end-user devices connect to the network. Port security and Spanning Tree Protocol (STP) toolkit features like PortFast are typically implemented in the access
layer.

The distribution layer of a hierarchical design, not the access layer, provides address summarization, aggregates LAN wiring closets, and aggregates WAN
connections. The distribution layer is used to connect the devices at the access layer to those in the core layer. Therefore, the distribution layer isolates the access
layer from the core layer. In addition to these features, the distribution layer can also be used to provide policy-based routing, security filtering, redundancy, load
balancing, Quality of Service (QoS), virtual LAN (VLAN) segregation of departments, inter-VLAN routing, translation between types of network media, routing
protocol redistribution, and more.

The core layer of a hierarchical design, not the access layer, is also known as the backbone layer. The core layer is used to provide connectivity to devices
connected through the distribution layer. In addition, it is the layer that is typically connected to enterprise edge modules. Cisco recommends that the core layer
provide fast transport, high reliability, redundancy, fault tolerance, low latency, limited diameter, and QoS. However, the core layer should not include features that
could inhibit CPU performance. For example, packet manipulation that results from some security, QoS, classification, or inspection features can be a drain on
resources.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Access Layer, pp. 44-46
Cisco: High Availability Campus Network Design-Routed Access Layer using EIGRP or OSPF: Hierarchical Design

QUESTION 33
In which of the following modules of the Cisco enterprise architecture would you expect to find a DNS server? (Choose two.)

A. campus core
B. data center
C. building distribution
D. enterprise edge
E. building access

Correct Answer: BD
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
You would expect to find a Domain Name System (DNS) server in the data center or enterprise edge modules of the Cisco enterprise architecture. The enterprise
architecture model is a modular framework that is used for the design and implementation of large networks. The enterprise architecture model includes the
following modules: enterprise campus, enterprise edge, service provider (SP) edge, and remote modules that utilize resources that are located away from the main
enterprise campus.

The campus core layer, building distribution layer, and building access layer are all part of the enterprise campus module. These submodules of the enterprise
campus module rely on a resilient multilayer design to support the day-to-day operations of the enterprise. Also found within the enterprise campus module is the
data center submodule, which is also referred to as the server farm submodule. The data center submodule provides file and print services to the enterprise
campus. In addition, the data center submodule typically hosts internal DNS, email, Dynamic Host Configuration Protocol (DHCP), and database services.

The enterprise edge module represents the boundary between the enterprise campus module and the outside world. In addition, the enterprise edge module
aggregates voice, video, and data traffic to ensure a particular level of Quality of Service (QoS) between the enterprise campus and external users located in remote
submodules. Enterprise WAN, Internet connectivity, ecommerce servers, and remote access & virtual private network (VPN) are all submodules of the enterprise
edge module.

Enterprise data center, enterprise branch, and teleworkers are examples of remote submodules that are found within the enterprise architecture model. These
submodules represent enterprise resources that are located outside the main enterprise campus. These submodules typically connect to the enterprise campus
through the use of the SP edge and the enterprise edge modules. Because many Cisco routers commonly used at the edge of the network are capable of providing
DHCP and DNS services to the network edge, devices in the remote submodules do not need to rely on the DHCP and DNS servers located in the enterprise
campus.

The SP edge module consists of submodules that represent third-party network service providers. For example, most enterprise entities rely on Internet service
providers (ISPs) for Internet connectivity and on public switched telephone network (PSTN) providers for telephone service. In addition, the third-party infrastructure
found in the SP edge is often used to provide connectivity between the enterprise campus and remote resources.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, DNS, pp. 319-321

QUESTION 34
DRAG DROP
Select the subnet masks on the left, and place them over the number of host addresses that the subnet mask can support. Not all subnet masks will be used.

Select and Place:

Correct Answer:

Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
A subnet mask specifies how many bits belong to the network portion of a 32-bit IP address. The remaining bits in the IP address belong to the host portion of the IP
address. To determine how many host addresses are defined by a subnet mask, use the formula 2^n - 2, where n is the number of bits in the host portion of the
address.

A /19 subnet mask uses 13 bits for host addresses. Therefore, 2^13 - 2 equals 8,190 valid host addresses.
A /20 subnet mask uses 12 bits for host addresses. Therefore, 2^12 - 2 equals 4,094 valid host addresses.
A /22 subnet mask uses 10 bits for host addresses. Therefore, 2^10 - 2 equals 1,022 valid host addresses.
A /23 subnet mask uses nine bits for host addresses. Therefore, 2^9 - 2 equals 510 valid host addresses.
A /25 subnet mask uses seven bits for host addresses. Therefore, 2^7 - 2 equals 126 valid host addresses.
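The 2^n - 2 calculation above can be sketched in Python (the `usable_hosts` helper name is illustrative, not part of any Cisco tool):

```python
def usable_hosts(prefix_length):
    """Return the number of assignable host addresses for an IPv4 prefix.

    Uses the 2^n - 2 formula, where n is the number of host bits; the two
    excluded addresses are the network and broadcast addresses.
    """
    host_bits = 32 - prefix_length
    return 2 ** host_bits - 2

# The values from the list above:
for prefix in (19, 20, 22, 23, 25):
    print(f"/{prefix}: {usable_hosts(prefix)} hosts")
```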

Although it is important to learn the formula for calculating valid host addresses, the following list demonstrates the relationship between subnet masks and valid
host addresses:

/19 = 8,190 hosts
/20 = 4,094 hosts
/21 = 2,046 hosts
/22 = 1,022 hosts
/23 = 510 hosts
/24 = 254 hosts
/25 = 126 hosts

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310
Cisco: IP Addressing and Subnetting for New Users

QUESTION 35
Which of the following statements is true regarding NetFlow?

A. NetFlow can collect timestamps of traffic flowing between a particular source and destination.
B. Data collected by NetFlow cannot be exported.
C. Many configuration changes to existing network devices are required in order to accommodate NetFlow.
D. For audit purposes, NetFlow must run on every router in a network.

Correct Answer: A
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
NetFlow is a Cisco IOS feature that can collect timestamps of traffic flowing between a particular source and destination. NetFlow can be used to gather flow-based
statistics, such as packet counts, byte counts, and protocol distribution. A device configured with NetFlow examines packets for select Layer 3 and Layer 4 attributes
that uniquely identify each traffic flow. The data gathered by NetFlow is typically exported to management software. You can then analyze the data to facilitate
network planning, customer billing, and traffic engineering. A traffic flow is defined as a series of packets with the same source IP address, destination IP address,
protocol, and Layer 4 information. Although NetFlow does not use Layer 2 information, such as a source Media Access Control (MAC) address, to identify a traffic
flow, the input interface on a switch will be considered when identifying a traffic flow. Each NetFlow-enabled device gathers statistics independently of any other
device; NetFlow does not have to run on every router in a network in order to produce valuable data for an audit. In addition, NetFlow is transparent to the existing
network infrastructure and does not require any network configuration changes in order to function.
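As a rough illustration of how packets sharing those Layer 3 and Layer 4 attributes collapse into a single flow record, here is a Python sketch (the dictionary field names are assumptions for illustration, not NetFlow's actual record format):

```python
from collections import defaultdict

def flow_key(packet):
    """Build a flow key from the attributes that identify a NetFlow traffic flow.

    Packets sharing the same source/destination IP address, protocol,
    Layer 4 ports, and input interface belong to the same flow.
    """
    return (packet["src_ip"], packet["dst_ip"], packet["protocol"],
            packet["src_port"], packet["dst_port"], packet["input_if"])

def aggregate(packets):
    """Accumulate per-flow packet and byte counts, as a flow cache would."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        stats = flows[flow_key(p)]
        stats["packets"] += 1
        stats["bytes"] += p["length"]
    return flows
```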

Reference:
Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: NetFlow Overview

QUESTION 36
On a Cisco router, which of the following message types does the traceroute command use to map the path that a packet takes through a network?

A. ICMP Echo
B. ICMP TEM
C. LLDP TLV
D. CDP TLV

Correct Answer: B
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
On a Cisco router, the traceroute command uses Internet Control Message Protocol (ICMP) Time Exceeded Message (TEM) messages to map the path that a
packet takes through a network. The traceroute command works by sending a sequence of messages, usually User Datagram Protocol (UDP) packets, to a
destination address. The Time-to-Live (TTL) value in the IP header of each series of packets is incremented as the traceroute command discovers the IP address of
each router in the path to the destination address. The first series of packets, which have a TTL value of one, make it to the first hop router, where their TTL value is
decremented by one as part of the forwarding process. Because the new TTL value of each of these packets will be zero, the first hop router will discard the packets
and send an ICMP TEM to the source of each discarded packet. The traceroute command will record the IP address of the source of the ICMP TEM and will then
send a new series of messages with a higher TTL. The next series of messages is sent with a TTL value of two and arrives at the second hop before generating
ICMP TEMs and thus identifying the second hop. This process continues until the destination is reached and every hop in the path to the destination is identified. In
this manner, the traceroute command can be used to manually build a topology map of an existing network; however, more effective mechanisms, such as Link
Layer Discovery Protocol (LLDP) or Cisco Discovery Protocol (CDP), are typically used instead when available.
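The TTL-probing loop described above can be sketched as a simple Python simulation (hypothetical; the actual ICMP exchanges are abstracted away):

```python
def trace_path(hops, max_ttl=30):
    """Simulate the traceroute TTL-probing loop.

    `hops` is the ordered list of router addresses on the path to the
    destination. A probe sent with TTL t expires at hop t, and that hop's
    address is learned from the ICMP Time Exceeded Message it returns.
    """
    discovered = []
    for ttl in range(1, max_ttl + 1):
        if ttl > len(hops):
            break
        # The probe's TTL reaches zero at hops[ttl - 1], which replies with
        # an ICMP TEM sourced from its own address.
        discovered.append(hops[ttl - 1])
        if ttl == len(hops):
            break  # destination reached; stop incrementing the TTL
    return discovered
```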

Some network trace implementations similar to the IOS traceroute command send ICMP Echo messages or Transmission Control Protocol (TCP) synchronization
(SYN) packets by default. For example, the tracert command on Microsoft Windows platforms uses ICMP Echo messages by default, instead of ICMP TEMs, to
map the path a packet takes through a network. Some implementations offer configuration options to specify the message types used to map the network path of a

http://www.gratisexam.com/
series of packets. Being able to specify the message type is useful in environments where firewalls or other filtering mechanisms restrict the flow of certain types of
packets, such as ICMP Echo messages.

CDP is a Cisco-proprietary network discovery protocol that uses Type-Length-Value (TLV) fields to share data with neighboring Cisco devices. A TLV is a data
structure that defines a type of data, its maximum length, and a value. For example, the CDP Device-ID TLV contains a string of characters identifying the name
assigned to the device. Each CDP message contains a series of TLV fields, which collectively describe a Cisco device, its configuration, and its capabilities. CDP-
enabled devices listen for CDP packets and parse the TLVs to build a table with information about each neighboring Cisco device. The information in the CDP table
can be used by other processes on the device. For example, native virtual LAN (VLAN) mismatches are commonly identified based on the information from the CDP
table.

Likewise, LLDP uses TLV fields to share data with neighboring network devices. LLDP is an open-standard network discovery protocol specified as part of the
Institute of Electrical and Electronics Engineers (IEEE) 802.1AB standard. Because LLDP is designed to operate in a multivendor environment, it specifies a number
of mandatory TLVs that must be included at the beginning of each LLDP message. Any optional TLVs follow the mandatory TLVs, and an empty TLV specifies the
end of the series. Most Cisco platforms support both CDP and LLDP.

Reference:
Cisco: Understanding the Ping and Traceroute Commands

QUESTION 37
Which of the following is a hierarchical routing protocol that can summarize routes at border routers and by using redistribution?

A. RIPv1
B. RIPv2
C. OSPF
D. EIGRP

Correct Answer: C
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Open Shortest Path First (OSPF) is a hierarchical, link-state routing protocol that can summarize routes at border routers or by using redistribution summarization.

OSPF divides an autonomous system (AS) into areas. These areas can be used to limit routing updates to one portion of the network, thereby keeping routing
tables small and update traffic low. Only OSPF routers in the same hierarchical area form adjacencies. Hierarchical design provides for efficient performance and
scalability. Although OSPF is more difficult to configure, it converges more quickly than most other routing protocols.

Enhanced Interior Gateway Routing Protocol (EIGRP) is a hybrid routing protocol that combines the best features of distance-vector and link-state routing protocols.
Unlike OSPF, EIGRP supports automatic summarization and can summarize routes on any EIGRP interface. However, both OSPF and EIGRP converge faster than
other routing protocols and support manual configuration of summary routes.

Routing Information Protocol version 1 (RIPv1) and RIPv2 are not hierarchical routing protocols that divide an AS into areas. RIPv1 and RIPv2 are distance-vector
routing protocols that use hop count as a metric. By default, RIP sends out routing updates every 30 seconds, and the routing updates are propagated to all RIP
routers on the network.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, OSPFv2 Summary, p. 439
Cisco: Open Shortest Path First

QUESTION 38
View the Exhibit:

Refer to the exhibit. Which of the following statements are true regarding the deployment of the IPS in the exhibit? (Choose two.)

A. It increases response latency.
B. It increases the risk of successful attacks.
C. It can directly block all communication from an attacking host.
D. It can reset TCP connections.
E. It does not require RSPAN on switch ports.

Correct Answer: CE
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
When Cisco Intrusion Prevention System (IPS) is configured in inline mode, IPS can directly block all communication from an attacking host. In addition, an IPS in
inline mode does not require that Remote Switched Port Analyzer (RSPAN) be enabled on switch ports.

Inline mode enables IPS to examine traffic as it flows through the IPS device. Therefore, any traffic that should be analyzed by IPS must be to a destination that is
separated from the source by the IPS device. By contrast, promiscuous mode enables IPS to examine traffic on ports from multiple network segments without being
directly connected to those segments. Promiscuous mode, which is also referred to as monitor-only operation, enables an IPS to passively examine network traffic
without impacting the original flow of traffic. This passive connection enables the IPS to have the most visibility into the networks on the switch to which it is
connected. However, promiscuous mode operation increases latency and increases the risk of successful attacks.

IPS can use all of the following actions to mitigate a network attack in inline mode:
Deny attacker inline: directly blocks all communication from the attacking host
Deny attacker service pair inline: directly blocks communication between the attacker and a specific port
Deny attacker victim pair inline: directly blocks communication that occurs on any port between the attacker and a specific host
Deny connection inline: directly blocks communication for a specific Transmission Control Protocol (TCP) session
Deny packet inline: directly blocks the transmission of a specific type of packet from an attacking host
Modify packet inline: allows IPS to change or remove the malicious contents of a packet

IPS in promiscuous mode, not inline mode, requires RSPAN. RSPAN enables the monitoring of traffic on a network by capturing and sending traffic from a source
port on one device to a destination port on a different device on a non-routed network. Because copies of traffic from the RSPAN port are forwarded to a monitor-
only IPS for analysis instead of flowing through IPS directly, the amount of time IPS takes to determine whether a network attack is in progress can be greater in
promiscuous mode than when IPS is operating in inline mode. The increased response latency means that an attack has a greater chance at success prior to
detection.

IPS in promiscuous mode, not inline mode, can reset TCP connections. Promiscuous mode supports three actions to mitigate attacks: Request block host, Request
block connection, and Reset TCP connection. The Request block host action causes IPS to send a request to the Attack Response Controller (ARC) to block all
communication from the attacking host for a given period of time. The Request block connection action causes IPS to send a request to the ARC to block the
specific connection from the attacking host for a given period of time. The Reset TCP connection action clears TCP resources so that normal TCP network activity
can be established. However, resetting TCP connections is effective only for TCP-based attacks and against only some types of those attacks.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535
Cisco: Cisco IPS Mitigation Capabilities: Inline Mode Event Actions

QUESTION 39
Which of the following statements are correct regarding wireless signals in a VoWLAN? (Choose two.)

A. High data rate signals require higher SNRs than low data rate signals.
B. VoWLANs require lower SNRs than data-only WLANs.
C. Signals from adjacent cells on nonoverlapping channels should have an overlap of between 15 and 20 percent to ensure smooth roaming.
D. VoWLANs require lower signal strengths than data-only WLANs.
E. Increasing the strength of a signal cannot increase its SNR.

Correct Answer: AC
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
In a Voice over wireless LAN (VoWLAN), high data rate signals require higher signal-to-noise ratios (SNRs) than low data rate signals. In addition, signals from
adjacent cells on nonoverlapping channels should have an overlap between 15 and 20 percent to ensure smooth roaming. The sensitivity of an 802.11 radio
decreases as the data rate goes up. Thus the separation of valid 802.11 signals from background noise must be greater at higher data rates than at lower data
rates. Otherwise, the 802.11 radio will be unable to distinguish the valid signals from the surrounding noise. For example, an 802.11 radio might register a 1-Mbps
signal at -45 decibel milliwatts (dBm) with -96 dBm of noise. These values produce an SNR of 51 decibels (dB). However, if the data rate is increased to 11 Mbps,
the radio might register a signal of -63 dBm with -82 dBm of noise, thereby bringing the SNR to 19 dB. Because the sensitivity of the radio is diminished at the higher
data rate, the radio might not be able to distinguish parts of the signal from the surrounding noise, which might result in packet loss. Therefore, the optimal cell size
is determined by the configured data rate and the transmitter power of the access point (AP).
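Because dBm values are logarithmic, the SNR in dB is simply the arithmetic difference between signal and noise power. A minimal sketch of the calculations above:

```python
def snr_db(signal_dbm, noise_dbm):
    """SNR in dB is the signal power minus the noise power, both in dBm."""
    return signal_dbm - noise_dbm

# The examples from the text:
# At 1 Mbps:  -45 dBm signal with -96 dBm noise yields a 51-dB SNR.
# At 11 Mbps: -63 dBm signal with -82 dBm noise yields a 19-dB SNR.
```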

Packet loss can also be mitigated by maintaining an overlap between 15 and 20 percent on nonoverlapping channels for all adjacent cells in a VoWLAN. By
providing at least 15 percent overlap between adjacent cells, a wireless client has a greater chance of completing the roaming process without incurring too much
delay or packet loss. If the overlap is less than 15 percent, the client might drop its connection with one AP before it has completed associating with the next AP.
This can result in degraded voice quality and disconnected calls.

VoWLANs require higher signal strengths than data-only wireless LANs (WLANs). Data traffic can tolerate delayed or dropped packets because its associated
applications typically do not operate in real time. If a wireless client breaks its connection with an AP and packets are delayed or lost, the client can retransmit the
missing packets when it reconnects. By contrast, real-time data, such as voice traffic, is particularly sensitive to delay, variations in delay, and packet loss. If packets
are delayed too long or lost because a client breaks its connection with an AP, the quality of the client's voice stream is degraded. If there is enough delay or packet
loss, the call will be disconnected by the client device.

Likewise, VoWLANs require higher SNRs than data-only WLANs. A high SNR indicates that a device can easily distinguish valid wireless signals from the
surrounding noise. The greater the separation between signal and noise, the higher the likelihood that wireless clients will not experience packet loss due to signal
interference. Cisco recommends maintaining a minimum signal strength of -67 dBm and a minimum SNR of 25 dB throughout the coverage area of a VoWLAN to
help mitigate packet loss.

Increasing the strength of a signal can increase its SNR. By increasing the strength of a transmitted signal, the difference between the signal and any associated
noise can be increased at the receiving station. A wireless LAN controller (WLC) can be configured to adjust the signal strength of a lightweight AP (LAP) if it
registers a low SNR value from one of the LAP's associated devices.

Reference:
Cisco: Site Survey Guide: Deploying Cisco 7920 IP Phones: Getting started

QUESTION 40
Which of the following is a circuit-switched WAN technology that offers less than 2 Mbps of bandwidth?

A. ATM
B. Frame Relay
C. ISDN
D. SONET
E. SMDS
F. Metro Ethernet

Correct Answer: C
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Integrated Services Digital Network (ISDN) is a circuit-switched WAN technology that offers less than 2 Mbps of bandwidth. Circuit-switched WAN technologies rely
on dedicated physical paths between nodes in a network. For example, when RouterA needs to contact RouterB, a dedicated path is established between the
routers and then data is transmitted. While the circuit is established, RouterA cannot use the WAN link to transmit any data that is not destined for networks
accessible through RouterB. When RouterA no longer has data for RouterB, the circuit is torn down until it is needed again.

Because circuit-switched links rely on dedicated physical paths, they are considered leased WAN technologies. Other examples of leased WAN technologies are
time division multiplexing (TDM) and Synchronous Optical Network (SONET).

Metro Ethernet is a WAN technology that is commonly used to connect networks in the same metropolitan area. However, Metro Ethernet providers typically provide
up to 1,000 Mbps of bandwidth. A company that has multiple branch offices within the same city can use Metro Ethernet to connect the branch offices to the
corporate headquarters.

Packet-switched networks do not rely on dedicated physical paths between nodes in a network. In a packet-switched network, a node establishes a single physical
circuit to a service provider. Multiple virtual circuits can share this physical circuit, allowing a single device to send data to several destinations. Because packet-
switched links do not rely on dedicated physical paths, they are considered shared WAN links. Frame Relay, X.25, Multiprotocol Label Switching (MPLS), and
Switched Multimegabit Data Service (SMDS) are examples of packet-switched, shared WAN technologies.

Asynchronous Transfer Mode (ATM) is a shared WAN technology that transports its payload in a series of fixed-size, 53-byte cells. ATM has the unique ability to
transport different types of traffic, including IP packets, traditional circuit-switched voice, and video, while still maintaining a high quality of service for delay-sensitive
traffic such as voice and video services. Although ATM could be categorized as a packet-switched WAN technology, it is often listed in its own category as a cell-
switched WAN technology instead.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 6, ISDN, pp. 221-222
Cisco: Introduction to WAN Technologies: Circuit Switching
Cisco: Asynchronous Transfer Mode Switching: ATM Devices and the Network Environment

QUESTION 41
You administer a router that contains five routes to the same network: a static route, a RIPv2 route, an IGRP route, an OSPF route, and an internal EIGRP route.
The default ADs are used. The link to the static route has just failed.

Which route or routes will be used?

A. the RIPv2 route
B. the IGRP route
C. the OSPF route
D. the EIGRP route
E. both the RIPv2 route and the EIGRP route

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
The Enhanced Interior Gateway Routing Protocol (EIGRP) route is used when the link to the static route goes down. EIGRP is a Cisco-proprietary routing protocol.
When multiple routes to a network exist and each route uses a different routing protocol, a router prefers the routing protocol with the lowest administrative distance
(AD). The following list contains the most commonly used ADs:

Connected interface: 0
Static route: 1
External BGP: 20
Internal EIGRP: 90
IGRP: 100
OSPF: 110
IS-IS: 115
RIPv1 and RIPv2: 120
External EIGRP: 170
Internal BGP: 200

In this scenario, the static route has the lowest AD. Therefore, the static route is used instead of the other routes. When the static route fails, the EIGRP route is
preferred, because internal EIGRP has an AD of 90.

If the EIGRP route were to fail, the Interior Gateway Routing Protocol (IGRP) route would be preferred, because IGRP has an AD of 100. If the IGRP route were
also to fail, the Open Shortest Path First (OSPF) route would be preferred, because OSPF has an AD of 110. The Routing Information Protocol version 2 (RIPv2)
route would not be used unless all of the other links were to fail, because RIPv2 has an AD of 120. ADs for a routing protocol can be manually configured by issuing
the distance command in router configuration mode. For example, to change the AD of OSPF from 110 to 80, you should issue the following commands:

RouterA(config)#router ospf 1
RouterA(config-router)#distance 80
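The route-preference logic described above can be sketched in Python (the table lists common default ADs; `best_route` is an illustrative helper, not an IOS command):

```python
# Common default administrative distances (lower is preferred).
DEFAULT_AD = {
    "connected": 0,
    "static": 1,
    "ebgp": 20,
    "eigrp_internal": 90,
    "igrp": 100,
    "ospf": 110,
    "isis": 115,
    "rip": 120,
    "eigrp_external": 170,
    "ibgp": 200,
}

def best_route(available_sources):
    """Return the route source with the lowest administrative distance."""
    return min(available_sources, key=lambda src: DEFAULT_AD[src])

# With the static route's link down, the remaining candidates are compared:
candidates = ["rip", "igrp", "ospf", "eigrp_internal"]
```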

You can view the AD of the best route to a network by issuing the show ip route command. The AD is the first number inside the brackets in the output. For example,
the following router output shows an OSPF route with an AD of 80:

Router#show ip route
Gateway of last resort is 10.19.54.20 to network 10.140.0.0
E2 172.150.0.0 [80/5] via 10.19.54.6, 0:01:00, Ethernet2

The number 5 in the brackets above is the OSPF metric, which is based on cost. OSPF calculates cost based on the bandwidth of an interface: the higher the
bandwidth, the lower the cost. When two OSPF paths exist to the same destination, the router will choose the OSPF path with the lowest cost.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 10, Administrative Distance, pp. 386-387
Cisco: What Is Administrative Distance?

QUESTION 42
Which of the following statements is true regarding physical connections in the Cisco ACI architecture?

A. Spine nodes must be fully meshed.
B. Leaf nodes must be fully meshed.
C. Each leaf node must connect to each spine node.
D. Each APIC must connect to each leaf node.

Correct Answer: C
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
In the Cisco Application Centric Infrastructure (ACI), each leaf node must connect to each spine node. Cisco ACI is a data center technology that uses switches,
categorized as spine and leaf nodes, to dynamically implement network application policies in response to application-level requirements. Network application
policies are defined on a Cisco Application Policy Infrastructure Controller (APIC) and are implemented by the spine and leaf nodes.

The spine and leaf nodes create a scalable network fabric that is optimized for east-west data transfer, which in a data center is typically traffic between an
application server and its supporting data services, such as database or file servers. Each spine node requires a connection to each leaf node; however, spine
nodes do not interconnect, nor do leaf nodes interconnect. Despite its lack of fully meshed connections, this physical topology enables nonlocal traffic to pass from
any ingress leaf interface to any egress leaf interface through a single, dynamically selected spine node. By contrast, local traffic is passed directly from an ingress
interface on a leaf node to the appropriate egress interface on the same leaf node.

Because a spine node has a connection to every leaf node, the scalability of the fabric is limited by the number of ports on the spine node, not by the number of
ports on the leaf node. In addition, redundant connections between a spine and leaf pair are unnecessary because the nature of the topology ensures that each leaf
has multiple connections to the network fabric. Therefore, each spine node requires only a single connection to each leaf node.
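The cabling rule above can be sketched in Python (node names are hypothetical):

```python
def spine_leaf_links(spines, leaves):
    """Enumerate the fabric links in a spine-leaf topology.

    Each leaf node connects once to each spine node; spines do not
    interconnect, and neither do leaves.
    """
    return [(spine, leaf) for spine in spines for leaf in leaves]

spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]
links = spine_leaf_links(spines, leaves)
# Total links = number of spines x number of leaves
```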

Redundancy is also provided by the presence of multiple APICs, which are typically deployed as a cluster of three controllers. APICs are not directly involved in
forwarding traffic and are therefore not required to connect to every spine or leaf node. Instead, the APIC cluster is connected to one or more leaf nodes in much the
same manner that other endpoint groups (EPGs), such as application servers, are connected.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 4, ACI, p. 135
Cisco: Application Centric Infrastructure Overview: Implement a Robust Transport Network for Dynamic Workloads

QUESTION 43
Which of the following are not supported by GET VPN? (Choose two.)

A. centralized key management
B. dynamic NAT
C. voice traffic
D. static NAT
E. native multicast traffic

Correct Answer: BD
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Group Encrypted Transport (GET) virtual private network (VPN) supports neither static nor dynamic Network Address Translation (NAT). GET VPN is a Cisco-
proprietary technology that provides tunnel-less, end-to-end security for both unicast and multicast traffic. GET VPN uses IP Security (IPSec) tunnel mode with
address preservation to preserve the inner IP header of each encrypted packet; the IP source address and various IP header fields are unaffected by the encryption
process. Because NAT changes information in the IP header, such as the IP source address, NAT is not supported by GET VPN and must be performed either
before a packet is encrypted or after a packet is decrypted. Cisco recommends GET VPN for environments needing highly scalable, any-to-any encrypted
connectivity for unicast and multicast traffic, such as a large financial network using a Multiprotocol Label Switching (MPLS) WAN.

In a GET VPN, trusted group member routers receive security policy and authentication keys from a central key server. Although group member routers obtain
keying information from a central key server, the key server is not involved in the flow of traffic as in a hub-and-spoke design. Instead, group member routers can
use the keying information from the key server to dynamically form direct connections with one another for data transmission. This enables group member routers to
form security associations with sufficient speed to minimize transmission delay and to support the Quality of Service (QoS) levels necessary for voice traffic.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, GETVPN, pp. 258-259
Cisco: Cisco Group Encrypted Transport VPN

QUESTION 44
View the Exhibit.

Refer to the exhibit above. The Layer 3 switch on the left, DSW1, is the root bridge for all VLANs in the topology. Devices on VLAN 10 use DSW1 as a default
gateway. Devices on VLAN 20 use the Layer 3 switch on the right, DSW2, as a default gateway. A device that is operating in VLAN 20 and is connected to ASW3
transmits a packet that is destined beyond Router1.

What path will the packet most likely take through the network?

A. ASW3 > DSW2 > Router1
B. ASW3 > DSW1 > Router1
C. ASW3 > DSW2 > DSW1 > Router1
D. ASW3 > DSW1 > DSW2 > Router1

Correct Answer: D
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Most likely, the packet will travel from ASW3 to DSW1, to DSW2, and then to Router1. Because all of the virtual LANs (VLANs) use DSW1 as the root bridge in this
scenario, all traffic from the access layer switches, regardless of VLAN, flows first to DSW1. Traffic from VLAN 10 is therefore already optimized because VLAN 10
uses DSW1 as its default gateway. However, VLAN 20 uses DSW2 as its default gateway. Therefore, traffic from VLAN 20 will most likely flow first to DSW1 and
then across the PortChannel 1 EtherChannel interface to DSW2 for forwarding.

In this scenario, if you were to configure a separate spanning tree to be established for each VLAN, the location of the root switch could be optimized on a per-VLAN
basis. For example, configuring DSW2 as the preferred root bridge for devices that operate on VLAN 20 would cause VLAN 20 traffic from both ASW1 and ASW3
to flow directly to DSW2 for forwarding to Router1. VLAN 10 traffic would remain optimized to flow directly to DSW1 from ASW1, ASW2, or ASW3.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103
Cisco: InterSwitch Link and IEEE 802.1Q Frame Format: Background Theory
Cisco: Catalyst 3750X and 3560X Switch Software Configuration Guide, Release 12.2(55)SE: Configuring the Switch Priority of a VLAN

QUESTION 45
Which of the following address blocks is typically used for IPv4 link-local addressing?

A. 192.168.0.0/16
B. 172.16.0.0/12
C. 169.254.0.0/16
D. 10.0.0.0/8
E. 127.0.0.0/8

Correct Answer: C
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Of the available choices, only the 169.254.0.0/16 address block is typically used for IP version 4 (IPv4) link-local addressing. The IP addresses in the 169.254.0.0/16
address block, which includes the IP addresses from 169.254.0.0 through 169.254.255.255, are defined by Request for Comments (RFC) 3927. This address block
is reserved for the dynamic configuration of IPv4 link-local addresses. On Microsoft Windows computers, addresses in this range are known as Automatic
Private IP Addressing (APIPA) addresses.

Addresses in the 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8 ranges are private IP addresses that are defined by RFC 1918. The following are the private IP
address blocks in each class, as defined by RFC 1918:

Class A - 10.0.0.0 through 10.255.255.255, or 10.0.0.0/8
Class B - 172.16.0.0 through 172.31.255.255, or 172.16.0.0/12
Class C - 192.168.0.0 through 192.168.255.255, or 192.168.0.0/16

The 127.0.0.0/8 IP address block is a special-use IPv4 address block that is defined by the Internet Engineering Task Force (IETF) in RFC 1122 and in RFC 6890,
which obsoletes RFC 5735. The 127.0.0.1/32 IP address is typically used as a loopback address for devices on a network.
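These classifications can be verified with Python's standard-library ipaddress module, which encodes the RFC 3927, RFC 1918, and RFC 1122 ranges. The sample addresses below are arbitrary picks from each block:

```python
import ipaddress

# Arbitrary sample addresses, one from each block discussed above.
samples = ["169.254.10.20", "10.1.2.3", "172.16.0.1", "192.168.51.1", "127.0.0.1"]

for s in samples:
    ip = ipaddress.ip_address(s)
    # is_link_local covers 169.254.0.0/16 (RFC 3927); is_private covers the
    # RFC 1918 blocks (as well as link-local and loopback space);
    # is_loopback covers 127.0.0.0/8.
    print(f"{s:>15}  link-local={ip.is_link_local}  "
          f"private={ip.is_private}  loopback={ip.is_loopback}")
```

Only 169.254.10.20 reports link-local=True, and only 127.0.0.1 reports loopback=True.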

Reference:
IETF: RFC 3927: Dynamic Configuration of IPv4 Link-Local Addresses

QUESTION 46
Which of the following protocols can provide Application layer management information?


A. RMON
B. RMON2
C. SNMPv1
D. SNMPv2
E. SNMPv3

Correct Answer: B
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
Remote Monitoring version 2 (RMON2) can provide Open Systems Interconnection (OSI) Application layer management information. RMON2 builds on the
framework of Simple Network Management Protocol (SNMP) and extends the Management Information Base (MIB) to provide network flow statistics. The statistics
that RMON2 provides are divided into groups based on the type of information they contain. For example, RMON2 groups contain information about Network layer
address mappings, Application layer traffic statistics, and per-protocol traffic distribution. In addition, RMON2 provides a managed device with the ability to locally
store historical data that can then be used to analyze trends in network utilization and to determine whether a managed device requires optimization. By contrast,
the Cisco NetFlow feature can provide similar data for analysis; however, very little NetFlow data is typically stored locally. Instead, NetFlow data is typically
exported to a collector where it can be analyzed to determine whether a managed device requires optimization.

Remote Monitoring version 1, commonly referred to as RMON, provides Physical and Data Link layer management information. Like RMON2, RMON divides the
management data it provides into distinct groups; however, RMON's groups contain information about the physical network, such as Ethernet interface statistics,
host addresses based on Media Access Control (MAC) addresses, and Data Link layer traffic statistics. RMON information can also be maintained on the managed
device to provide historical data. Although RMON data is limited to only Physical and Data Link layer information, it can still be a valuable resource for determining
whether a managed device requires optimization.

Simple Network Management Protocol (SNMP) provides a framework for obtaining basic information about a managed device. Like RMON and RMON2, SNMP
uses the MIB to store information about a managed device; however, SNMP does not have the capability to locally store historical data. Therefore, SNMP requires a

network management station (NMS) to periodically poll a managed device to accumulate historical data that can then be used to determine whether the managed
device requires optimization. Three versions of SNMP currently exist: SNMP version 1 (SNMPv1), SNMPv2, and SNMPv3. SNMPv1 and SNMPv2 do not provide
authentication, encryption, or message integrity. Thus, access to management information is based on a simple password known as a community string; the
password is sent as plain text with each SNMP message. If an attacker intercepts a message, the attacker can view the password information. SNMPv3 improves
upon SNMPv1 and SNMPv2 by providing encryption, authentication, and message integrity to ensure that the messages are not viewed or tampered with during
transmission.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 15, RMON, pp. 624-626
IETF: RFC 2021: Remote Network Monitoring Management Information Base Version 2 using SMIv2: 2. Overview
IETF: RFC 3577: Introduction to the Remote Monitoring (RMON) Family of MIB Modules: 4. RMON Documents

QUESTION 47
Which of the following is not true regarding the MPLS WAN deployment model for branch connectivity?

A. It provides the highest SLA guarantees for QoS capabilities.
B. It provides the highest SLA guarantees for network availability.
C. It is the most expensive deployment model.
D. It supports only dual-router configurations.

Correct Answer: D
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
The Multiprotocol Label Switching (MPLS) WAN deployment model for branch connectivity supports both single-router and dual-router configurations. Cisco defines
three general deployment models for branch connectivity:
MPLS WAN
Hybrid WAN
Internet WAN

The MPLS WAN deployment model can use a single-router configuration with connections to multiple MPLS service providers or a dual-router configuration where
each router has a connection to one or more MPLS service providers. Service provider diversity ensures that an outage at the service provider level will not cause
an interruption of service at the branch. The MPLS WAN deployment model can provide service-level agreement (SLA) guarantees for Quality of Service (QoS) and
network availability through service-provider provisioning and routing protocol optimization. Although using multiple MPLS services provides increased network
resilience and bandwidth, it also increases the complexity and cost of the deployment when compared to other deployment models.

The Hybrid WAN deployment model can use a single or dual-router configuration and relies on an MPLS service provider for its primary WAN connection and on an

Internet-based virtual private network (VPN) connection as a backup circuit. Unlike the MPLS WAN deployment model, the Hybrid WAN deployment model cannot
ensure QoS capabilities for traffic that does not pass to the MPLS service provider. In the Hybrid WAN deployment model, low-priority traffic is often routed through
the lower cost Internet VPN circuit, which can reduce the bandwidth requirements for the MPLS circuit, further lowering the overall cost without sacrificing network
resilience.

The Internet WAN deployment model can use a single or dual-router configuration and relies on an Internet-based VPN solution for primary and backup circuits.
Internet service provider (ISP) diversity ensures that carrier level outages do not affect connectivity between the branch and the central site. Because the Internet
WAN deployment model uses the public Internet, its QoS capabilities are limited. However, the Internet WAN deployment model is the most cost effective of the
three models defined by Cisco.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, Branch Connectivity, p. 271

QUESTION 48
Which of the following queuing methods provides bandwidth and delay guarantees?

A. FIFO
B. LLQ
C. WFQ
D. CBWFQ

Correct Answer: B
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Low-latency queuing (LLQ) provides bandwidth and delay guarantees through the creation of one or more strict-priority queues that can be used specifically for
delay-sensitive traffic, such as voice and video traffic. In addition, LLQ supports the creation of up to 64 user-defined traffic classes. Each strict-priority queue can
use as much bandwidth as possible but can only use its guaranteed minimum bandwidth when other queues have traffic to send, thereby avoiding bandwidth
starvation for the user-defined queues. Cisco recommends limiting the strict-priority queues to a total of 33 percent of the link capacity.

Class-based weighted fair queuing (CBWFQ) provides bandwidth guarantees, so it can be used for voice, video, and mission-critical traffic. However, CBWFQ does
not provide the delay guarantees provided by LLQ, because CBWFQ does not provide support for strict-priority queues. CBWFQ improves upon weighted fair
queuing (WFQ) by enabling the creation of up to 64 custom traffic classes, each with a guaranteed minimum bandwidth.

Although WFQ can be used for voice, video, and mission-critical traffic, it does not provide the bandwidth or delay guarantees provided by LLQ, because WFQ does
not support the creation of strict-priority queues. Traffic flows are identified by WFQ based on source and destination IP address, port number, protocol number, and
Type of Service (ToS). Although WFQ is easy to configure, it is not supported on high-speed links. WFQ is used by default on Cisco routers for serial interfaces at

2.048 Mbps or lower.

First-in-first-out (FIFO) queuing does not provide traffic guarantees of any sort. FIFO queuing requires no configuration, because all packets are arranged into a
single queue. As the name implies, the first packet received is the first packet transmitted, without regard for packet type, protocol, or priority. Therefore, FIFO
queuing is not appropriate for voice, video, or mission-critical traffic. By default, Cisco uses FIFO queuing for interfaces faster than 2.048 Mbps.
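The behavioral difference between FIFO and a strict-priority scheduler such as the one LLQ adds can be sketched in a few lines of Python. This is a simplified model, not Cisco's implementation; the class names and packets are hypothetical:

```python
from collections import deque

def fifo_dequeue(packets):
    """FIFO: transmit strictly in arrival order, regardless of traffic type."""
    q = deque(packets)
    return [q.popleft() for _ in range(len(q))]

def priority_dequeue(packets, priority_class="voice"):
    """Strict priority: always drain the priority queue before the default queue."""
    pq, dq = deque(), deque()
    for p in packets:
        (pq if p[0] == priority_class else dq).append(p)
    return list(pq) + list(dq)

arrivals = [("data", 1), ("voice", 2), ("data", 3), ("voice", 4)]
print(fifo_dequeue(arrivals))      # arrival order preserved
print(priority_dequeue(arrivals))  # voice packets transmitted first
```

The model omits the policing that LLQ applies to the strict-priority queue during congestion, which is what prevents starvation of the other queues.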

Reference:
CCDA 200-310 Official Cert Guide, Chapter 6, Low-Latency Queuing, p. 235
Cisco: Enterprise QoS Solution Reference Network Design Guide: Queuing and Dropping Principles
Cisco: Signalling Overview: RSVP Support for Low Latency Queuing

QUESTION 49
DRAG DROP
Select the processes from the left, and place them in the appropriate corresponding Cisco PBM Design Lifecycle phase column on the right. All processes will be
used.

Select and Place:

Correct Answer:

Section: Design Objectives Explanation
Explanation

Explanation/Reference:
Section: Design Objectives Explanation

Explanation:
The Cisco Plan, Build, Manage (PBM) Design Lifecycle is a newer methodology designed to streamline the concepts from Cisco's older design philosophy: the
Prepare, Plan, Design, Implement, Operate, and Optimize (PPDIOO) Design Lifecycle. As the name implies, the PBM Design Lifecycle is divided into three distinct
phases: Plan, Build, and Manage.

The Plan phase of the PBM Design Lifecycle consists of the following three processes:
Strategy and analysis
Assessment
Design

The purpose of the strategy and analysis process is to generate proposed improvements to an existing network infrastructure with the overall goal of increasing an
organization's return on investment (ROI) from the network and its support staff. The assessment process then examines the proposed improvements from the
strategy and analysis process and determines whether the improvements comply with organizational goals and industry best practices. In addition, the assessment

process identifies potential deficiencies that infrastructure changes might cause in operational and support facilities. Finally, the design process produces a network
design that meets current organizational objectives while maintaining resiliency and scalability.

The Build phase of the PBM Design Lifecycle consists of the following three processes:
Validation
Deployment
Migration

The purpose of the validation process is to implement the infrastructure changes outlined in the design process of the Plan phase and to verify that the
implementation meets the organizational needs as specified by the network design. The validation process implements the network design in a controlled
environment such as in a lab or staging environment. Once the network design has been validated, the purpose of the deployment process is to implement the
network design in a full-scale production environment. Finally, the purpose of the migration process is to incrementally transition users, devices, and services to the
new infrastructure as necessary.

The Manage phase of the PBM Design Lifecycle consists of the following four processes:
Product support
Solution support
Optimization
Operations management

The product support process addresses support for specific hardware, software, or network products. Cisco Smart Net is an example of a component of the product
support process. By contrast, solution support is focused on the solutions that hardware, software, and network products provide for an organization. Cisco Solution
Support is the primary component of the solution support process. Cisco Solution Support serves as the primary point of contact for Cisco solutions, leverages
solution-focused expertise, coordinates between multiple vendors for complex solutions, and manages each case from inception to resolution. The optimization
process is concerned with improving the performance, availability, and resiliency of a network implementation. It also addresses foreseeable changes and
upgrades, which reduces operating costs, mitigates risk, and improves ROI. The operations management process addresses the ongoing management of the
network infrastructure. It includes managed solutions for collaboration, data center, security, and general network services.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 1, Cisco Design Lifecycle: Plan, Build, Manage, pp. 9-12
Cisco: Services: Portfolio

QUESTION 50
In which of the following situations would eBGP be the most appropriate routing protocol?

A. when the router has a single link to a router within the same AS
B. when the router has redundant links to a router within the same AS
C. when the router has a single link to a router within a different AS
D. when the router has redundant links to a router within a different AS

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
External Border Gateway Protocol (eBGP) would be the most appropriate routing protocol for a router that has redundant links to a router within a different
autonomous system (AS). An AS is a collection of networks that are managed by a single organization. Routing protocols that dynamically share routing
information within an AS are called interior gateway protocols (IGPs), and routing protocols that dynamically share routing information between multiple ASes are
called exterior gateway protocols (EGPs). Border Gateway Protocol (BGP) routers within the same AS communicate by using internal BGP (iBGP), and BGP routers
in different ASes communicate by using eBGP. BGP is typically used to exchange routing information between ASes, between a company and an Internet service
provider (ISP), or between ISPs.

Static routing, not BGP, would be the most appropriate routing method for a router that has a single link to a router within a different AS. Because BGP can be
complicated to configure and can use large amounts of processor and memory resources, static routing is recommended if dynamic routing information does not
need to be exchanged between routers that reside in different ASes. For example, if you connect a router to the Internet through a single ISP, it is not necessary for
the router to run BGP, because the router will use a single, static default route to the ISP for all traffic that is not destined to the internal network.
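In that single-ISP case, a single static default route suffices (IOS-style syntax; the next-hop address is hypothetical):

```
! Forward all traffic not destined to the internal network to the ISP.
ip route 0.0.0.0 0.0.0.0 203.0.113.1
```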

An IGP would be the most appropriate routing protocol for a router that has a single link or redundant links to a router within the same AS. Enhanced Interior
Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), and Routing Information Protocol (RIP) are examples of IGPs.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, BGP Neighbors, pp. 444-446
Cisco: Sample Configuration for iBGP and eBGP With or Without a Loopback Address: Introduction

QUESTION 51
In which of the following layer or layers should you implement QoS?

A. in only the core layer
B. in only the distribution layer
C. in only the access layer
D. in only the core and distribution layers
E. in only the access and distribution layers
F. in the core, distribution, and access layers

Correct Answer: F
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
You should implement Quality of Service (QoS) in the core, distribution, and access layers. A network can become congested due to the aggregation of multiple
links or a drop in bandwidth from one link to another. When many packets are sent on a congested network, a delay in transmission time can occur. Lack of
bandwidth, end-to-end delay, jitter, and packet loss can be mitigated by implementing QoS. QoS facilitates the optimization of network bandwidth by prioritizing
network traffic based on its type. Prioritizing packets enables time-sensitive traffic, such as voice traffic, to be sent before other packets. Packets are queued based
on traffic type, and packets with a higher priority are sent before packets with a lower priority.

Because the access layer provides direct connectivity to network endpoints, QoS classification and marking are typically performed in the access layer. Cisco
recommends classifying and marking packets as close to the source of traffic as possible and using hardware-based QoS functions whenever possible. Although
classification and marking are typically performed in the access layer, QoS mechanisms must be implemented in each of the higher layers for QoS to be effective.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 3, Campus LAN QoS Considerations, pp. 111-112
Cisco: Campus Network for High Availability Design Guide: General Design Considerations

QUESTION 52
DRAG DROP
Select the attributes from the left, and place them under the corresponding Layer 2 access design on the right. Attributes can be selected more than once, and
some attributes might not be used.

Select and Place:

Correct Answer:

Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Loop-free inverted U designs support all service module installations, have all uplinks active, and support virtual LAN (VLAN) extensions. A service module is a
piece of hardware that extends the functionality of a Cisco device; for example, the Secure Sockets Layer (SSL) Service Module for Catalyst 6500 series switches
and Cisco 7600 series routers performs the majority of the CPU-intensive SSL processing so that the switch's processor or router's processor is not burdened by
large numbers of SSL connections. Loop-free inverted U designs offer redundancy at the aggregation layer, not the access layer; therefore, traffic will black-hole
upon failure of an access switch uplink. All uplinks are active with no looping, thus there is no Spanning Tree Protocol (STP) blocking by default. However, STP is
still essential so that redundant paths that might be created by any inadvertent errors in cabling or configuration are blocked.

Loop-free U designs do not support VLAN extensions, have all uplinks active, and support all service module implementations. Loop-free U designs offer a
redundant link between access layer switches as well as a redundant link at the aggregation layer. Because of the redundant path in both layers, extending a VLAN
beyond an individual access layer pair would create a loop. Like loop-free inverted U designs, loop-free U designs also run STP and have issues with traffic being
black-holed upon failure of an access switch uplink.

Flex Link designs have a single active uplink, support VLAN extensions and all service modules, and disable STP by default. There are no loops in a Flex Link
design, and STP is disabled when a device is configured to participate in a Flex Link. Interface uplinks in this topology are configured in active/standby pairs, and
each device can only belong to a single Flex Link pair. In the event of an uplink failure, the standby link becomes active and takes over, thereby offering redundancy
when an access layer uplink fails. Possible disadvantages of the Flex Link design include its increased convergence time over other designs and its inability to run
STP in order to block redundant paths that might be created by inadvertent errors in cabling or configuration.

Reference:
Cisco: Data Center Access Layer Design

QUESTION 53
Which of the following are most likely to be provided by a collapsed core? (Choose four.)

A. Layer 2 aggregation
B. high-speed physical and logical paths
C. intelligent network services
D. end user, group, and endpoint isolation
E. routing and network access policies

Correct Answer: ABCE


Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Layer 2 aggregation, high-speed physical and logical paths, intelligent network services, and routing and network access policies are typically provided by the core
and distribution layers. A collapsed core is a three-tier hierarchical design in which the core and distribution layers have been combined. The hierarchical model
divides the network into three distinct components:
Core layer
Distribution layer
Access layer

The core layer typically provides the fastest switching path in the network. As the network backbone, the core layer is primarily associated with low latency and high
reliability. The functionality of the core layer can be collapsed into the distribution layer if the distribution layer infrastructure is sufficient to meet the design
requirements. It is Cisco best practice to ensure that a collapsed core design can meet resource utilization requirements for the network.

The distribution layer serves as an aggregation point for access layer network links. Because the distribution layer is the intermediary between the access layer and
the core layer, the distribution layer is the ideal place to enforce security policies, to provide Quality of Service (QoS), and to perform tasks that involve packet
manipulation, such as routing. Summarization and next-hop redundancy are also performed in the distribution layer.

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that prevents hosts from accessing the network if they do not comply with
organizational requirements, such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically discovering and inventorying devices
attached to the LAN. The access layer serves as a media termination point for endpoints, such as servers and hosts. Because access layer devices provide access
to the network, the access layer is the ideal place to perform user authentication.

End user, group, and endpoint isolation is not typically required of a collapsed core layer in a three-tier hierarchical network design. That function is typically provided
by the devices in the access layer.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Collapsed Core Design, p. 49
Cisco: Small Enterprise Design Profile Reference Guide: Collapsed Core Network Design

QUESTION 54
Which of the following are recommended campus network design practices? (Choose two.)

A. use a redundant triangle topology
B. use a redundant square topology
C. avoid equal-cost links between redundant devices
D. summarize routes from the distribution layer to the core layer
E. create routing protocol peer relationships on all links

Correct Answer: AD
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
When designing a campus network, Cisco recommends that you use a redundant triangle topology and summarize routes from the distribution layer to the core
layer. In a redundant triangle topology, each core layer device has direct paths to redundant distribution layer devices, as shown in the diagram below:

This topology ensures that a link or device failure in the distribution layer can be detected immediately in hardware. Otherwise, a core layer device could detect only
link or device failures through a software-based mechanism such as expired routing protocol timers. Additionally, the use of equal-cost redundant links enables a
core layer device to enter both paths into its routing table. Because both equal-cost paths are active in the routing table, the core layer device can perform load
balancing between the paths when both paths are up. When one of the equal-cost redundant links fails, the routing protocol does not need to reconverge, because
the remaining redundant link is still active in the routing table. Thus traffic flows can be immediately rerouted around the failed link or device.

You should summarize routes from the distribution layer to the core layer. With route summarization, contiguous network addresses are advertised as a single
network. This process enables the distribution layer devices to limit the number of routing advertisements that are sent to the core layer devices. Because fewer
advertisements are sent, the routing tables of core layer devices are kept small and access layer topology changes are not advertised into the core layer.
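The effect of summarization can be sketched with Python's ipaddress module (the subnets are hypothetical): four contiguous /24 access-layer networks collapse into a single /22 advertisement toward the core.

```python
import ipaddress

# Four contiguous access-layer subnets (hypothetical addresses).
access_subnets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# collapse_addresses merges adjacent networks into the smallest covering set,
# which models what a distribution switch advertises toward the core.
summary = list(ipaddress.collapse_addresses(access_subnets))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```
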

Cisco does not recommend that you use a redundant square topology. In a redundant square topology, not every core layer device has redundant direct paths to
distribution layer devices, as shown below:

Because a redundant square topology does not provide a core layer device with redundant direct paths to the distribution layer, the device will enter only the path
with the lowest cost into its routing table. If the lowest cost path fails, the routing protocol must converge in order to select an alternate path from the remaining
available paths. No traffic can be forwarded around the failed link or device until the routing protocol converges.

You should create routing protocol peer relationships on only the transit links of Layer 3 devices. A transit link is a link that directly connects two or more Layer 3
devices, such as a multilayer switch or a router. By default, a Layer 3 device sends routing protocol updates out of every Layer 3 interface that participates in the
routing protocol. These routing updates can cause unnecessary network overhead on devices that directly connect to a large number of networks, such as
distribution layer switches. Therefore, Cisco recommends filtering routing protocol updates from interfaces that are not directly connected to Layer 3 devices.

Reference:
Cisco: Campus Network for High Availability Design Guide: Using Triangle Topologies

QUESTION 55
The IP address 169.254.173.233 is an example of which of the following types of IP addresses?

A. a Class A address
B. a public address
C. a DHCP address
D. an APIPA address

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:

Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
The IP address 169.254.173.233 is an example of an Automatic Private IP Addressing (APIPA) address. On networks that utilize IP, each computer requires a
unique IP address in order to access network resources. If an APIPA-capable computer, which must be running Windows 2000 or later, is configured to use
Dynamic Host Configuration Protocol (DHCP) and is unable to obtain an IP address from a DHCP server, it will assign itself an APIPA address. An APIPA IP
address is in the range of 169.254.0.0 to 169.254.255.255.

The computer with this address will most likely not be able to access other computers on the network unless those computers are also using APIPA addresses. A
computer that has an APIPA address continually checks the network for a DHCP server. When a DHCP server becomes available, the computer releases its APIPA
address and leases an IP address from the DHCP server.

IP version 4 (IPv4) addresses are 32-bit (four-byte) addresses typically written in dotted-decimal format, where each byte is written as a decimal value from 0 to 255
and separated by dots. All IPv4 addresses fall into one of several classes. Class A IP addresses range from 1.0.0.0 through 126.255.255.255, Class B IP addresses
range from 128.0.0.0 through 191.255.255.255, and Class C addresses range from 192.0.0.0 through 223.255.255.255. Two other classes of IP addresses exist:
Class D and Class E. Class D addresses are reserved for multicast use, and Class E addresses are reserved for experimental use.

Neither Class D addresses nor Class E addresses can be used on the Internet. The table below shows the classes of IPv4 addresses and their ranges:

Class A: 1.0.0.0 through 126.255.255.255
Class B: 128.0.0.0 through 191.255.255.255
Class C: 192.0.0.0 through 223.255.255.255
Class D: 224.0.0.0 through 239.255.255.255 (multicast)
Class E: 240.0.0.0 through 255.255.255.255 (experimental)

IPv4 addresses can be either public or private. A public IP address is an address that has been assigned by the Internet Assigned Numbers Authority (IANA) for use
on the Internet. IANA has also designated several ranges of IPv4 addresses for use on internal private networks that will not directly connect to the Internet.

The table below shows the IPv4 addresses that IANA designated for private use:

10.0.0.0 through 10.255.255.255 (10.0.0.0/8)
172.16.0.0 through 172.31.255.255 (172.16.0.0/12)
192.168.0.0 through 192.168.255.255 (192.168.0.0/16)
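As a quick supplementary check (not part of the original exam item), Python's ipaddress module can report whether an address belongs to a private range. Note that is_private covers the RFC 1918 blocks as well as a few other reserved ranges, such as link-local:

```python
import ipaddress

# is_private is True for RFC 1918 addresses (and other reserved ranges).
for addr in ["10.1.2.3", "172.16.0.1", "192.168.51.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three are RFC 1918 private addresses; 8.8.8.8 is public.
```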

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Private Addresses, pp. 299-300
CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

QUESTION 56
View the Exhibit.

Refer to the exhibit. Which of the following statements are true about the deployment of the IPS in the exhibit? (Choose two.)


A. It increases response latency.
B. It decreases the risk of successful attacks.
C. It can directly block all communication from an attacking host.
D. It can reset TCP connections.
E. It does not require RSPAN on switch ports.

Correct Answer: AD
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
When Cisco Intrusion Prevention System (IPS) is configured in promiscuous mode, IPS response latency is increased, thereby increasing the risk of a successful
attack. In addition, IPS in promiscuous mode supports the Reset TCP connection action, which mitigates Transmission Control Protocol (TCP) attacks by resetting
TCP connections.

Promiscuous mode, which is also referred to as monitor-only operation, enables an IPS to passively examine network traffic without impacting the original flow of
traffic. This passive connection enables the IPS to have the most visibility into the networks on the switch to which it is connected. However, promiscuous mode
operation increases response latency and increases the risk of successful attacks because copies of traffic are forwarded to IPS for analysis instead of flowing
through IPS directly, thereby increasing the amount of time IPS takes to determine whether a network attack is in progress. This increased response latency means
that an attack has a greater chance at success prior to detection than it would if the IPS were deployed inline with network traffic.

Remote Switched Port Analyzer (RSPAN) must be enabled on switch ports so that IPS can analyze the traffic on those ports. RSPAN enables the monitoring of
traffic on a network by capturing and sending traffic from a source port on one device to a destination port on a different device on a nonrouted network.

IPS in promiscuous mode supports three actions to mitigate attacks: Request block host, Request block connection, and Reset TCP connection. The Request block
host action causes IPS to send a request to the Attack Response Controller (ARC) to block all communication from the attacking host for a given period of time. The
Request block connection action causes IPS to send a request to the ARC to block the specific connection from the attacking host for a given period of time. The
Reset TCP connection action clears TCP resources so that normal TCP network activity can be established. However, resetting TCP connections is effective only
for TCP-based attacks and against only some types of those attacks.

IPS in promiscuous mode does not directly block all communication from an attacking host. In promiscuous mode, IPS can send a request to block the host to the
ARC but does not directly block the host. One advantage of sending block requests to the ARC is that attacking hosts can be blocked from multiple locations within
the network. IPS can directly deny all communication from an attacking host when operating in inline mode by using the Deny attacker inline action.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535
Cisco: Cisco IPS Mitigation Capabilities: Promiscuous Mode Event Actions

QUESTION 57
Which of the following is the QoS model that is primarily used on the Internet?

A. best-effort
B. IntServ
C. DiffServ
D. AutoQoS

Correct Answer: A
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:

Section: Enterprise Network Design Explanation

Explanation:
The best-effort model is the Quality of Service (QoS) model that is primarily used on the Internet. No QoS mechanisms are used when the best-effort model is
implemented; all packets are treated with equal priority. The best-effort model is very scalable and easy to implement. However, since bandwidth is not guaranteed
for any packet types the best-effort model can be a key limitation when considering an Internet circuit as a backup connection for an enterprise wide area network
(WAN).

The Integrated Services (IntServ) model is not the QoS model primarily used on the Internet. IntServ, which was the first QoS model, provides end-to-end reliability
guarantees for bandwidth, delay, and packet loss. However, IntServ is not very scalable, since its signaling overhead can consume a lot of bandwidth. IntServ uses
Resource Reservation Protocol (RSVP) as the signaling protocol.

The Differentiated Services (DiffServ) model is also not the QoS model primarily used on the Internet. DiffServ does not provide end-to-end reliability guarantees.
Instead, it provides per-hop QoS mechanisms. Because end-to-end signaling is not required, bandwidth is not consumed by signaling overhead; therefore, DiffServ
is more scalable than IntServ. However, the QoS mechanisms employed by DiffServ must be configured consistently at each hop.

AutoQoS is not a QoS model. AutoQoS automates the configuration of QoS on Cisco devices, enabling consistent configurations throughout a large network.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, WAN Backup over the Internet, pp. 263-264
Cisco: QoS Fact or Fiction

QUESTION 58
Which of the following protocols can IPSec use to provide the integrity component of the CIA triad? (Choose two.)

A. GRE
B. AH
C. AES
D. ESP
E. DES

Correct Answer: BD
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
IP Security (IPSec) can use either Authentication Header (AH) or Encapsulating Security Payload (ESP) to provide the integrity component of the confidentiality,
integrity, and availability (CIA) triad. The integrity component of the CIA triad ensures that data is not modified in transit by unauthorized parties. AH and ESP are
integral parts of the IPSec protocol suite and can be used to ensure the integrity of a packet. Data integrity is provided by using checksums on each end of the
connection. If the data generates the same checksum value on each end of the connection, the data was not modified in transit. In addition, AH and ESP can
authenticate the origin of transmitted data. Data authentication is provided through various methods, including user name/password combinations, preshared keys
(PSKs), digital certificates, and one-time passwords (OTPs). Although AH and ESP perform similar functions, ESP provides additional security by encrypting the
contents of the packet. AH does not encrypt the contents of the packet.

In addition to data authentication and data integrity, IPSec can provide confidentiality, which is another component of the CIA triad. IPSec uses encryption protocols,
such as Advanced Encryption Standard (AES) or Data Encryption Standard (DES), to provide data confidentiality. Because the data is encrypted, an attacker cannot
read the data if he or she intercepts the data before it reaches the destination. IPSec does not use either AES or DES for data authentication or data integrity.

Generic Routing Encapsulation (GRE) is a protocol designed to tunnel any Open Systems Interconnection (OSI) Layer 3 protocol through an IP transport network.
Because the focus of GRE is to transport many different protocols, it has very limited security features. By contrast, IPSec has strong data confidentiality and data
integrity features, but it can transport only IP traffic. GRE over IPSec combines the best features of both protocols to securely transport any protocol over an IP
network. However, GRE itself does not provide data integrity or data authentication.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Managed VPN: IPsec, pp. 255-259
IETF: RFC 4301: Security Architecture for the Internet Protocol: 3.2. How IPsec Works

QUESTION 59
How many valid host IP addresses are available on a /21 subnet?

A. 32,766
B. 4,094
C. 2,046
D. 510

Correct Answer: C
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
A /21 subnet contains 2,046 valid host addresses. A subnet mask specifies how many bits belong to the network portion of a 32-bit IP address. The remaining bits in
the IP address belong to the host portion of the IP address. To determine how many host addresses are defined by a subnet mask, use the formula 2^n - 2, where n is
the number of bits in the host portion of the address. You must subtract 2 from the number of available hosts, because the first address is the subnetwork address
and the last address is the broadcast address.
To determine the number of bits in the host portion of the address, you should convert /21 to dotted-decimal notation. To convert /21 from Classless Inter-Domain
Routing (CIDR) notation to dotted-decimal notation, begin at the left and set the first 21 bits to a value of 1. These bits identify the network portion of the IP address.

The remaining 11 bits will be set to 0. These are the host bits.

/21 = 11111111.11111111.11111000.00000000

A /21 subnet mask leaves 11 host bits. Applying the 2^n - 2 formula, where n = 11, yields 2,048 - 2 = 2,046. Therefore, 2,046 hosts are available for each
subnetwork when a subnet mask of /21 is applied.
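The same calculation can be sketched with Python's ipaddress module (an illustration, not part of the original exam item):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/21")

# Usable hosts = 2^host_bits - 2 (network and broadcast addresses excluded).
host_bits = 32 - net.prefixlen
print(2 ** host_bits - 2)      # 2046

# The library can confirm the result: total addresses minus network/broadcast.
print(net.num_addresses - 2)   # 2046
```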

Although it is important to learn the formula for calculating valid host addresses, the following list demonstrates the relationship between common subnet masks and
valid host addresses:

/30 = 2 hosts
/29 = 6 hosts
/28 = 14 hosts
/27 = 30 hosts
/26 = 62 hosts
/25 = 126 hosts
/24 = 254 hosts
/23 = 510 hosts
/22 = 1,022 hosts
/21 = 2,046 hosts
/20 = 4,094 hosts

Subnetting a contiguous address range in structured, hierarchical fashion enables routers to maintain smaller routing tables and eases administrative burden when
troubleshooting. Conversely, a discontiguous IP version 4 (IPv4) addressing scheme can cause routing tables to bloat because the subnets cannot be summarized.
Summarization minimizes the size of routing tables and advertisements and reduces a router's processor and memory requirements.
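As a brief illustration of summarization (not from the original exam item), contiguous subnets can be collapsed into a single summary route; Python's ipaddress module performs the same aggregation a router's route summarization would:

```python
import ipaddress

# Four contiguous /24 subnets collapse into a single /22 summary route.
subnets = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)   # [IPv4Network('10.0.0.0/22')]
```

A discontiguous scheme would not collapse this way, which is why it bloats routing tables.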

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp. 311-312
Cisco: IP Addressing and Subnetting for New Users

QUESTION 60
Which of the following is true regarding the Hybrid WAN deployment model for branch connectivity?

A. It can provide QoS capabilities for essential traffic.
B. It sacrifices network availability to reduce costs.
C. It is the least expensive deployment model.
D. It supports only single-router configurations.

Correct Answer: A
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
The Hybrid WAN deployment model for branch connectivity can provide Quality of Service (QoS) capabilities for essential traffic. Cisco defines three general
deployment models for branch connectivity:
MPLS WAN
Hybrid WAN
Internet WAN

The Multiprotocol Label Switching (MPLS) WAN deployment model can use a single-router configuration with connections to multiple MPLS service providers or a
dual-router configuration where each router has a connection to one or more MPLS service providers. Service provider diversity ensures that an outage at the
service provider level will not cause an interruption of service at the branch. The MPLS WAN deployment model can provide service-level agreement (SLA)
guarantees for QoS and network availability through service-provider provisioning and routing protocol optimization. Although using multiple MPLS services provides
increased network resilience and bandwidth, it also increases the complexity and cost of the deployment when compared to other deployment models.

The Hybrid WAN deployment model can use a single or dual-router configuration and relies on an MPLS service provider for its primary WAN connection and on an
Internet-based virtual private network (VPN) connection as a backup circuit. Unlike the MPLS WAN deployment model, the Hybrid WAN deployment model cannot
ensure QoS capabilities for traffic that passes over the backup circuit. Because of this QoS limitation, low-priority traffic is often routed through the lower cost
Internet VPN circuit whereas high-priority traffic is routed through the MPLS circuit. This can reduce the bandwidth requirements of the MPLS circuit and lower the
overall cost of the deployment without sacrificing network resilience or QoS capabilities for essential traffic.

The Internet WAN deployment model can use a single or dual-router configuration and relies on an Internet-based VPN solution for primary and backup circuits.
Internet service provider (ISP) diversity ensures that carrier level outages do not affect connectivity between the branch and the central site. Because the Internet
WAN deployment model uses the public Internet, its QoS capabilities are limited. However, the Internet WAN deployment model is the most cost effective of the
three models defined by Cisco.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, Branch Connectivity, p. 271

QUESTION 61
View the Exhibit.

Refer to the exhibit. Which of the following traffic flows will the IPS be unable to monitor? (Choose two.)

A. traffic from the DMZ to the Internet
B. traffic from the DMZ to the LAN
C. traffic from the Internet to the DMZ
D. traffic from the Internet to the LAN
E. traffic from the LAN to the DMZ
F. traffic from the LAN to the Internet

Correct Answer: BE
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
The Intrusion Prevention System (IPS) in this scenario will be unable to monitor traffic flows from the demilitarized zone (DMZ) to the LAN and from the LAN to the
DMZ. An IPS provides real-time monitoring of malicious traffic and can prevent malicious traffic from infiltrating the network. An IPS functions similarly to a Layer 2
bridge; a packet entering an interface on the IPS is directed to the appropriate outbound interface without regard to the packet's Layer 3 information. Instead, the
IPS uses interface or virtual LAN (VLAN) pairs to determine where to send the packet. This enables an IPS to be inserted into an existing network topology without
requiring any disruptive addressing changes. Because traffic flows through an IPS, an IPS can detect malicious traffic as it enters the IPS device and can prevent
the malicious traffic from infiltrating the network.

In this scenario, the IPS is deployed inline between the firewall and the edge router. Because traffic flows between the LAN and DMZ do not pass through the
firewall, the IPS will be unable to monitor them. However, the IPS will be able to monitor traffic flows between the LAN and the Internet and between the DMZ and
the Internet. In addition, because the IPS is deployed on the outside of the firewall, it will have visibility into traffic flows that will ultimately be dropped by the firewall.
This insight can be useful during an active attack; however, it comes at the cost of additional resource utilization since the IPS will be processing more traffic than
will ultimately be passing through the firewall.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

QUESTION 62
When using the bottom-up design approach, which layer of the OSI model is used as a starting point?

A. Application layer
B. Session layer
C. Network layer
D. Data Link layer
E. Physical layer

Correct Answer: E
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
The Physical layer of the Open Systems Interconnection (OSI) model is used as a starting point when using the bottom-up design approach. The bottom-up design
approach takes its name from the methodology of starting with the lower layers of the OSI model, such as the Physical, Data Link, Network, and Transport layers,
and working upward toward the higher layers. The bottom-up approach focuses on the devices and technologies that should be implemented in a design, instead of
focusing on the applications and services that will be used on the network. In addition, the bottom-up approach relies on previous experience rather than on a
thorough analysis of organizational requirements or projected growth. Because the bottom-up approach does not use a detailed analysis of an organization's
requirements, the bottom-up approach can be much less time-consuming than the top-down design approach. However, the bottom-up design approach can often
lead to network redesigns because the design does not provide a "big picture" overview of the current network or its future requirements.

By contrast, the top-down design approach takes its name from the methodology of starting with the higher layers of the OSI model, such as the Application,
Presentation, and Session layers, and working downward toward the lower layers. The top-down design approach requires a thorough analysis of the organization's
requirements. As a result, the top-down design approach is a more time-consuming process than the bottom-up design approach. With the top-down approach, the
designer obtains a complete overview of the existing network and the organization's needs. With this "big picture" overview, the designer can then focus on the
applications and services that meet the organization's current requirements. By focusing on the applications and services required in the design, the designer can
work in a modular fashion that will ultimately facilitate the implementation of the actual design. In addition, the flexibility of the resulting design is typically much
improved over that of the bottom-up approach because the designer can account for the organization's projected needs.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25
Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach Comparison (Flash)

QUESTION 63
You issue the following commands on RouterA:

ip route 10.0.0.0 255.255.255.224 192.168.1.1
ip route 10.0.0.0 255.255.255.0 192.168.1.2
ip route 10.0.0.0 255.255.0.0 192.168.1.3
ip route 0.0.0.0 0.0.0.0 192.168.1.4
RouterA receives a packet destined for 10.0.0.24.

To which next-hop IP address will RouterA forward the packet?

A. 10.0.0.4
B. 192.168.1.1
C. 192.168.1.2
D. 192.168.1.3
E. 192.168.1.4

Correct Answer: B
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
RouterA will forward the packet to the next-hop IP address of 192.168.1.1. When a packet is sent to a router, the router checks the routing table to see if the next-
hop address for the destination network is known. The routing table can be filled dynamically by a routing protocol, or you can configure the routing table manually
by issuing the ip route command to add static routes. The ip route command uses the syntax ip route net-address mask next-hop, where net-address is the network
address of the destination network, mask is the subnet mask of the destination network, and next-hop is the IP address of a neighboring router that can reach the
destination network.

A default route is used to send packets that are destined for a location that is not listed elsewhere in the routing table. For example, the ip route 0.0.0.0 0.0.0.0
192.168.1.4 command specifies that packets destined for addresses not otherwise specified in the routing table are sent to the default next-hop address of
192.168.1.4. A net-address and mask combination of 0.0.0.0 0.0.0.0 specifies any packet destined for any network.

If multiple static routes to a destination are known, the most specific route is used. Therefore, the following rules apply on RouterA:
Packets sent to the 10.0.0.0 255.255.255.224 network are forwarded to the next-hop address of 192.168.1.1. This includes destination addresses from 10.0.0.0
through 10.0.0.31.
Packets sent to the 10.0.0.0 255.255.255.0 network, except those sent to the 10.0.0.0 255.255.255.224 network, are forwarded to the next-hop address of
192.168.1.2. This includes destination addresses from 10.0.0.32 through 10.0.0.255.
Packets sent to the 10.0.0.0 255.255.0.0 network, except those sent to the 10.0.0.0 255.255.255.0 network, are forwarded to the next-hop address of
192.168.1.3. This includes destination addresses from 10.0.1.0 through 10.0.255.255.
Packets sent to any destination not listed in the routing table are forwarded to the default static route next-hop address of 192.168.1.4.

Because the most specific route to 10.0.0.24 is the route toward the 10.0.0.0 255.255.255.224 network, RouterA will forward a packet destined for 10.0.0.24 to the
next-hop address of 192.168.1.1.
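The longest-prefix-match rule described above can be sketched in a few lines of Python (an illustration of the lookup logic, not router code; the route table mirrors the scenario's static routes):

```python
import ipaddress

# Static routes from the scenario: (destination prefix, next hop).
routes = [
    ("10.0.0.0/27", "192.168.1.1"),   # mask 255.255.255.224
    ("10.0.0.0/24", "192.168.1.2"),
    ("10.0.0.0/16", "192.168.1.3"),
    ("0.0.0.0/0",   "192.168.1.4"),   # default route
]

def lookup(dest):
    """Return the next hop for dest using longest-prefix match."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(prefix), nh)
               for prefix, nh in routes
               if addr in ipaddress.ip_network(prefix)]
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.0.24"))    # 192.168.1.1
print(lookup("10.0.0.200"))   # 192.168.1.2
print(lookup("10.0.5.1"))     # 192.168.1.3
print(lookup("172.16.0.1"))   # 192.168.1.4 (default route)
```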

Reference:
Cisco: IP Routing Protocol-Independent Commands: ip route
Cisco: Specifying a Next Hop IP Address for Static Routes

QUESTION 64
In a Layer 3 hierarchical design, which enterprise campus module layer or layers exclusively use Layer 3 switching?

A. only the campus core layer
B. the distribution and campus core layers
C. only the distribution layer
D. the distribution and access layers
E. only the access layer

Correct Answer: B
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
In a Layer 3 hierarchical design, the distribution and campus core layers of the enterprise campus module use Layer 3 switching exclusively. Thus a Layer 3
switching design relies on First Hop Redundancy Protocols (FHRPs) for high availability. In addition, a Layer 3 switching design typically uses route filtering on links
that face the access layer of the design.

In a Layer 2, or switched, hierarchical design, only the access layer of the enterprise campus module uses Layer 2 switching exclusively. The access layer of the
enterprise campus module provides end users with physical access to the network. In addition to using Virtual Switching System (VSS) in place of FHRPs for
redundancy, a Layer 2 switching design requires that inter-VLAN traffic be routed in the distribution layer of the hierarchy. Also, Spanning Tree Protocol (STP) in the
access layer will prevent more than one connection between an access layer switch and the distribution layer from becoming active at a given time.

The distribution layer of the enterprise campus module provides link aggregation between layers. Because the distribution layer is the intermediary between the
access layer and the campus core layer, the distribution layer is the ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS),
and perform tasks that involve packet manipulation, such as routing. In a switched hierarchical design, the switches in the distribution layer use Layer 2 switching on
ports connected to the access layer and Layer 3 switching on ports connected to the campus core layer.

The campus core layer of the enterprise campus module provides fast transport services between the modules of the enterprise architecture module, such as the
enterprise edge and the intranet data center. Because the campus core layer acts as the network's backbone, it is essential that every distribution layer device have
multiple paths to the campus core layer. Multiple paths between the campus core and distribution layer devices ensure that network connectivity is maintained if a
link or device fails in either layer. In a switched hierarchical design, the campus core layer switches use Layer 3 switching exclusively.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Hierarchical Model Examples, pp. 46-48
Cisco: Cisco SAFE Reference Guide: Enterprise Campus

QUESTION 65
Which of the following noise values would provide the weakest connection between an AP and a wireless client with an RSSI of -67 dBm?

A. -19 dBm
B. -38 dBm
C. -67 dBm
D. -83 dBm
E. -91 dBm

Correct Answer: A
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
A noise value of -19 decibel milliwatts (dBm) would provide the weakest connection between an AP and a wireless client with a Received Signal Strength Indicator
(RSSI) of -67 dBm. In a Voice over wireless LAN (VoWLAN), signal strength is measured in dBm, which is a measure of power relative to 1 milliwatt (mW). Zero
dBm corresponds to exactly 1 mW, and negative values indicate power levels below 1 mW. For example, -19 dBm is a stronger signal than -38 dBm. The
signal-to-noise ratio (SNR) describes the separation between a valid radio signal and any ambient noise. A wireless client with an RSSI of -67 dBm would require a
noise value less than -67 dBm in order to separate signal from noise. A high SNR indicates that a device can easily distinguish valid signals from the surrounding
noise. The greater the separation between signal and noise, the higher the likelihood that the wireless client will not experience packet loss due to signal
interference.

Conversely, a lower SNR increases the likelihood that the wireless client will be unable to discern the signal. If the SNR is too low, the wireless client might not be
able to distinguish some parts of the signal from the surrounding noise, which might result in packet loss. In this case, an RSSI of -67 dBm with a noise value of -19
dBm produces an SNR of -48 decibels (dB). A negative SNR value indicates that the strength of the noise is greater than the strength of the received signal,
resulting in 100 percent packet loss. Cisco recommends maintaining a minimum signal strength of -67 dBm and a minimum SNR of 25 dB throughout the coverage
area of a VoWLAN to help mitigate packet loss.
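The arithmetic above is simple enough to sketch directly (an illustration, not part of the original exam item): SNR in dB is the received signal strength minus the noise floor, both in dBm, and dBm converts to milliwatts as 10^(dBm/10):

```python
def snr_db(rssi_dbm, noise_dbm):
    """SNR (dB) = received signal strength (dBm) - noise floor (dBm)."""
    return rssi_dbm - noise_dbm

def dbm_to_mw(dbm):
    """Convert dBm to milliwatts: 0 dBm is exactly 1 mW."""
    return 10 ** (dbm / 10)

print(snr_db(-67, -19))   # -48: noise is stronger than the signal, unusable
print(snr_db(-67, -96))   # 29: meets Cisco's recommended 25 dB minimum
print(dbm_to_mw(0))       # 1.0 (mW)
```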

The sensitivity of an 802.11 radio decreases as the data rate goes up. Thus the separation of valid 802.11 signals from background noise must be greater at higher
data rates than at lower data rates. Otherwise, the 802.11 radio will be unable to distinguish the valid signals from the surrounding noise. For example, an 802.11
radio might register a 1-Mbps signal at -45 dBm with -96 dBm of noise. These values produce an SNR of 51 dB. However, if the data rate is increased to 11 Mbps,
the radio might register a signal of -63 dBm with -82 dBm of noise, thereby bringing the SNR to 19 dB. Because the sensitivity of the radio is diminished at the higher
data rate, the radio might not be able to distinguish parts of the signal from the surrounding noise, which might result in packet loss. Therefore, the optimal cell size
is determined by the configured data rate and the transmitter power of the access point (AP).

Noise values of -38 dBm, -67 dBm, -83 dBm, and -91 dBm would not provide a connection weaker than the noise value of -19 dBm. Each of these values produces
an SNR higher than the SNR obtained with a noise value of -19 dBm. Because the strength of a connection can be determined by its SNR, a noise value that
produces the lowest SNR would provide the weakest connection.

Reference:
Cisco: Site Survey Guide: Deploying Cisco 7920 IP Phones: Getting started

QUESTION 66
Which of the following operating modes enables a WLC to manage a remote LAP from a central location?

A. local
B. H-REAP
C. monitor
D. rogue detector
E. sniffer

Correct Answer: B
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Hybrid remote edge access point (H-REAP) mode is an operating mode that enables a wireless LAN controller (WLC) to manage a remote lightweight access point
(LAP) from a central location. After adding a LAP to a WLC, you can configure the mode of the LAP depending on your needs and the capability of the LAP. H-
REAP mode, which is also known as FlexConnect, enables administrators to deploy a LAP in a remote location without also needing to deploy a WLC to the
location. A LAP operating in H-REAP mode can connect over a WAN link to a WLC that is located in a different location. This enables administrators to manage the
LAP from a central location without having to deploy WLCs to each remote office. Furthermore, LAPs operating in H-REAP mode can provide client connectivity
even if the connection to the remote WLC is lost. That is, a LAP operating in H-REAP mode can authenticate clients locally even if the AP cannot reach the WLC.

Local mode is the default mode of operation for a LAP and does not enable a WLC to manage a remote LAP from a central location. A LAP operating in local mode
uses timers that are tuned for WLCs on a LAN. Therefore, connectivity to a remote WLC can fail if the round-trip time between the LAP and the WLC is too high,
such as on a WAN link. Cisco recommends using H-REAP to centrally manage remote LAPs across a WAN link.

Rogue detector mode is a LAP operating mode used to detect unauthorized clients on a wired network. You can configure an AP to operate in rogue detector mode
to configure the AP to scan traffic on the wired connection in search of unauthorized APs and unauthorized clients on the wired network.

Sniffer mode is a LAP operating mode used to capture network traffic, which is then forwarded to a designated host for analysis. The host must be running network
analyzer software, such as AiroPeek, to decode the packets sent from the LAP. A LAP operating in sniffer mode does not process normal client data and essentially
becomes a dedicated wireless packet sniffer.

Monitor mode is a LAP operating mode used to provide data for location-based services. A LAP operating in monitor mode functions as a dedicated sensor, which
continuously scans all configured channels and provides data that can be used by location-based services and intrusion detection systems.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 5, AP Modes, pp. 180-181
CCDA 200-310 Official Cert Guide, Chapter 5, Hybrid REAP, p. 200

QUESTION 67
At which of the following layers of the OSI model does CDP operate?

A. Application layer
B. Transport layer
C. Network Layer
D. Data Link layer
E. Physical Layer

Correct Answer: D
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
Cisco Discovery Protocol (CDP) operates at the Data Link layer, or Layer 2, of the Open Systems
Interconnection (OSI) model. CDP is a proprietary protocol used by Cisco devices to detect neighboring Cisco devices. For example, Cisco switches use CDP to
determine whether a directly connected Voice over IP (VoIP) phone is manufactured by Cisco or by a third party. CDP packets are sent from a CDP-enabled
device to a multicast address. Each directly connected CDP-enabled device receives the advertisement and uses that information to build a CDP table. The CDP table
contains a significant amount of information, including the following:

The device ID of the neighboring device
The capabilities of the neighboring device
The product number of the neighboring device
The holdtime
The local interface
The remote interface

Although CDP does not operate at the Physical layer, or Layer 1, it relies on a fully operational Physical layer. CDP packets are encapsulated by the CDP process
on a Cisco device and then passed to the Physical layer for transmission onto the physical medium, typically as electrical or optical pulses that represent the bits
of data. If CDP information is not being exchanged between directly connected devices, you should first check for Physical layer connectivity issues before moving
on to troubleshoot potential Data Link layer connectivity issues.

CDP does not operate at any OSI layer above the Data Link layer, such as the Network layer (Layer 3), Transport layer (Layer 4), or Application layer (Layer 7). One
of the strengths of CDP is that its operation is network protocol agnostic, meaning that CDP is not dependent on any particular Network layer protocol addressing scheme, such as IP addressing. For example, two directly connected devices with misconfigured IP addressing can still exchange CDP information.
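As an illustration, the following IOS commands (a sketch; the interface name is hypothetical, and CDP is enabled globally by default on most Cisco devices) enable CDP and display the CDP table:

```
Router(config)# cdp run
Router(config)# interface GigabitEthernet0/1
Router(config-if)# cdp enable
! Display device IDs, capabilities, platform, holdtime, and local/remote interfaces
Router# show cdp neighbors detail
```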

Reference:
CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629
Cisco: Configuring Cisco Discovery Protocol

QUESTION 68
Which of the following address and subnet mask combinations summarizes the smallest network?


A. 172.16.1.0/8
B. 172.16.2.0/16
C. 172.20.3.0/21
D. 172.31.148.0/24

Correct Answer: D
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Of the available choices, the 172.31.148.0/24 network address and subnet mask combination summarizes the smallest network. The /24 notation indicates that a 24-bit subnet mask (255.255.255.0) is used. A 24-bit mask yields a 256-address block (254 assignable hosts), the size of a classful Class C network. In this scenario, the 172.31.148.0/24 network is a 256-address subnet of the Class B 172.31.0.0 network. The subnet's network address is 172.31.148.0, and its broadcast address is 172.31.148.255. In its classful form, this network would be represented by a /16 subnet mask (255.255.0.0) and contain a total of 65,534 assignable hosts. The Class B network address would be 172.31.0.0, and the broadcast address would be 172.31.255.255.

The 172.20.3.0/21 network address and subnet mask combination does not summarize the smallest network. The /21 notation indicates that a 21-bit subnet mask
(255.255.248.0) is used, which can summarize two /22 networks, four /23 networks, eight /24 networks, and so on. In this scenario, the 172.20.3.0/21 subnet results
in a subnet that can support 2,046 hosts. This subnet's network address is 172.20.0.0. Its broadcast address is 172.20.7.255. The next subnet of addresses from
the Class B range would thus have a network address of 172.20.8.0. If this subnet were also a /21, it would have a broadcast address of 172.20.15.255.

The 172.16.2.0/16 network address and subnet mask combination does not summarize the smallest network. The /16 notation indicates that a 16-bit subnet mask
(255.255.0.0) is used. This subnet mask in fact encompasses the entire classful 172.16.0.0 network range. Therefore, the network address of the 172.16.2.0/16
network is 172.16.0.0. Its broadcast address is 172.16.255.255, for a total of 65,534 hosts.

The 172.16.1.0/8 network address and subnet mask combination does not summarize the smallest network. The /8 notation indicates that an eight-bit subnet mask
(255.0.0.0) is used. This subnet mask encompasses a range of 16,777,214 IP addresses. For example, the Class A 10.0.0.0 network is a /8 range of IP addresses.
In this scenario, the 172.16.1.0/8 network would include every address in the range from 172.0.0.0 through 172.255.255.255.
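The subnet arithmetic above can be verified with Python's standard `ipaddress` module (a quick check, not part of the exam content):

```python
import ipaddress

# Verify the /24 discussed above: network and broadcast addresses.
net24 = ipaddress.ip_network("172.31.148.0/24")
print(net24.network_address, net24.broadcast_address)  # 172.31.148.0 172.31.148.255

# 172.20.3.0/21 has host bits set; strict=False normalizes it to its subnet.
net21 = ipaddress.ip_network("172.20.3.0/21", strict=False)
print(net21.network_address, net21.broadcast_address)  # 172.20.0.0 172.20.7.255
print(net21.num_addresses - 2)                         # 2046 assignable hosts
```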

Reference:
CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310
Cisco: IP Routing Frequently Asked Questions: Q. What does route summarization mean?
Cisco: IP Addressing and Subnetting for New Users

QUESTION 69
DRAG DROP
From the left, select the characteristics that apply to a large branch office, and drag them to the right.

Select and Place:

Correct Answer:

Section: Enterprise Network Design Explanation


Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Cisco defines a large branch office as an office that contains between 100 and 200 users and that implements a three-tier design. Such an office typically uses Rapid Per-VLAN Spanning Tree Plus (RPVST+) and external access switches, includes a distribution layer, and uses both redundant links and redundant devices. A three-tier design separates LAN and WAN termination into multiple devices. In addition, a three-tier design separates services, such as firewall functionality and intrusion detection; a large branch office typically uses at least one dedicated device for each network service. Whereas small and medium branch offices consist of only an edge layer and an access layer, the large branch office also includes a distribution layer.
RPVST+ is an advanced spanning tree algorithm that can prevent loops on a switch that handles multiple virtual LANs (VLANs). RPVST+ is typically supported only
on external switches and advanced routing platforms. External access switches provide high-density LAN connectivity to individual hosts. External access switches
typically aggregate their links on distribution layer switches.

Cisco defines a medium branch office as an office that contains between 50 and 100 users and that implements a two-tier design. A dual-tier design separates LAN
and WAN termination into multiple devices. A medium branch office typically uses two Integrated Services Routers (ISRs), such as the ISR G2, with one ISR
serving as a connection to the headquarters location and the second serving as a connection to the Internet. In addition, the two ISRs are typically connected by at
least one external switch that also serves as an access layer switch for the branch users.

Cisco defines a small branch office as an office that contains up to 50 users and that implements a one-tier design. A single-tier design combines LAN and WAN termination in a single ISR. A redundant link to the access layer can be created if the ISR connects by using an EtherChannel topology rather than a single trunk link, which offers no link redundancy. Because a small branch office uses a single ISR to provide LAN and WAN services, an external access switch, such as the Cisco 2960, is not necessary. In addition, RPVST+ is not supported on most ISR platforms. Similar to a medium branch office, a small branch office contains no Layer 2 loops in its topology.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Branch Profiles, pp. 275-279
Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Large Office Design (PDF)
Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Branch LAN Design Options (PDF)

QUESTION 70
Which of the following is a Cisco-proprietary link-bundling protocol?

A. HSRP
B. LACP
C. PAgP
D. VRRP

Correct Answer: C
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Port Aggregation Protocol (PAgP) is a Cisco-proprietary link-bundling protocol. Configuring multiple physical ports into a bundle, which is also known as a port group
or an EtherChannel group, enables a switch to use the multiple physical ports as a single connection between a switch and another device. Because bundled links
function as a single logical port, Spanning Tree Protocol (STP) is automatically disabled on the physical ports in the bundle; however, spanning tree must be
running on the associated port channel virtual interface to prevent bridging loops.
Typically, a link bundle is configured for high-bandwidth transmissions between switches and servers. When a link bundle is configured, traffic is load balanced
across all links in the port group, which provides fault tolerance. If a link in the port group goes down, that link's traffic load is redistributed across the remaining
links.

PAgP cannot be used to create an EtherChannel on non-Cisco switches. In addition, PAgP cannot be used to create an EtherChannel link between a Cisco switch
and a non-Cisco switch, because the EtherChannel protocol must match on each side of the EtherChannel link.

Link Aggregation Control Protocol (LACP) is a link-bundling protocol that is defined in the Institute of Electrical and Electronics Engineers (IEEE) 802.3ad standard,
not by Cisco. Because LACP is a standards-based protocol, it can be used between Cisco and non-Cisco switches.

Both PAgP and LACP work by dynamically grouping physical interfaces into a single logical link. However, LACP is newer than PAgP and offers somewhat different
functionality. Like PAgP, LACP identifies neighboring ports and their group capabilities; however, LACP goes further by assigning roles to the link bundle's
endpoints. LACP enables a switch to determine which ports are actively participating in the bundle at any given time and to make operational decisions based on
those determinations.
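As an illustration (interface and group numbers are hypothetical), an EtherChannel bundle can be negotiated with either protocol on a Cisco switch:

```
Switch(config)# interface range GigabitEthernet1/0/1 - 2
Switch(config-if-range)# channel-group 1 mode desirable   ! PAgP; actively negotiates
! For a standards-based bundle, such as to a non-Cisco switch, use LACP instead:
! Switch(config-if-range)# channel-group 1 mode active
```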

Neither Hot Standby Router Protocol (HSRP) nor Virtual Router Redundancy Protocol (VRRP) is a link-bundling protocol. HSRP is a Cisco-proprietary first-hop
redundancy protocol (FHRP). VRRP is an Internet Engineering Task Force (IETF) standard FHRP. Both HSRP and VRRP can be used to configure failover in case
a primary default gateway goes down.

Reference:
Cisco: IEEE 802.3ad Link Bundling: Benefits of IEEE 802.3ad Link Bundling

QUESTION 71
You want to load share traffic from two VLANs across two FHRP-capable default gateways.

Which technologies are you most likely to configure? (Choose two.)

A. HSRP
B. floating static routes
C. RPVST+
D. RSTP
E. STP

Correct Answer: AC
Section: Enterprise Network Design Explanation

Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
Most likely, you will configure Hot Standby Router Protocol (HSRP) and Rapid Per-VLAN Spanning Tree Plus (RPVST+) if you want to load share traffic from two
virtual LANs (VLANs) across two First Hop Redundancy Protocol (FHRP)-enabled gateways. HSRP is an FHRP that provides redundancy by enabling the automatic
configuration of active and standby routers. RPVST+ is the Rapid Spanning Tree Protocol (RSTP) implementation of Per VLAN Spanning Tree Plus (PVST+), which
enables the configuration of a separate Spanning Tree Protocol (STP) instance per VLAN configuration. This means that each VLAN in an organization can be
configured to use a different switch as its root.

Although HSRP does not support load balancing, you can configure an HSRP load sharing scenario by assigning different HSRP routers as root bridges for different
VLANs. Next, you could configure a separate HSRP group for each VLAN. Finally, configure each VLAN's root bridge as the active HSRP router for that VLAN's
HSRP group. Using this configuration, each VLAN in an organization will by default send traffic over a different default gateway. The VLANs will only share a default
gateway if one of the HSRP routers goes down.
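A minimal sketch of such a configuration on one of the two gateways (addresses, VLAN numbers, and group numbers are hypothetical; the peer router would mirror this configuration with the priorities swapped):

```
Router(config)# interface Vlan10
Router(config-if)# ip address 10.1.10.2 255.255.255.0
Router(config-if)# standby 10 ip 10.1.10.1
Router(config-if)# standby 10 priority 110   ! active gateway for VLAN 10
Router(config-if)# standby 10 preempt
Router(config)# interface Vlan20
Router(config-if)# ip address 10.1.20.2 255.255.255.0
Router(config-if)# standby 20 ip 10.1.20.1
Router(config-if)# standby 20 priority 90    ! standby gateway for VLAN 20
Router(config-if)# standby 20 preempt
```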

You would be more likely to use HSRP than floating static routes in this scenario. Floating static routes are manually configured paths. Typically, one path is
assigned a higher administrative distance (AD) so that it is not inserted into the routing table unless the first path becomes unavailable. Therefore, floating static
routes would not make sense in a scenario in which an FHRP can be used to provide both redundancy and availability.

You would be more likely to use RPVST+ than STP or RSTP in this scenario. Neither STP nor RSTP supports separate spanning-tree instances per VLAN.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103
Cisco: Inter-Switch Link and IEEE 802.1Q Frame Format: Background Theory
Cisco: Catalyst 3750X and 3560X Switch Software Configuration Guide, Release 12.2(55)SE: Configuring the Switch Priority of a VLAN

QUESTION 72
Which of the following is a network architecture principle that is used to facilitate troubleshooting in large, scalable networks?

A. modularity
B. hierarchy
C. top-down
D. bottom-up

Correct Answer: A
Section: Design Objectives Explanation
Explanation

Explanation/Reference:

Section: Design Objectives Explanation

Explanation:
Of the available choices, the modularity network architecture principle is most likely to facilitate troubleshooting in large, scalable networks. The modularity and
hierarchy principles are complementary components of network architecture. The modularity principle is used to implement an amount of isolation among network
components. This ensures that changes to any given component have little to no effect on the rest of the network. Modularity also simplifies the troubleshooting
process by limiting the task of isolating the problem to the affected module.
The modularity principle typically consists of two building blocks: the access-distribution block and the services block. The access-distribution block contains the
bottom two layers of a three-tier hierarchical network design. The services block, which is a newer building block, typically contains services like routing policies,
wireless access, tunnel termination, and Cisco Unified Communications services.

The hierarchy principle is the structured manner in which both the physical and logical functions of the network are arranged. A typical hierarchical network consists
of three layers: the core layer, the distribution layer, and the access layer. The modules between these layers are connected to each other in a fashion that facilitates
high availability. However, each layer is responsible for specific network functions that are independent from the other layers.

The core layer provides fast transport services between buildings and the data center. The distribution layer provides link aggregation between layers. Because the
distribution layer is the intermediary between the access layer and the campus core layer, the distribution layer is the ideal place to enforce security policies, provide
load balancing, provide Quality of Service (QoS), and perform tasks that involve packet manipulation, such as routing. The access layer, which typically comprises
Open Systems Interconnection (OSI) Layer 2 switches, serves as a media termination point for devices, such as servers and workstations. Because access layer
devices provide access to the network, the access layer is the ideal place to perform user authentication and to institute port security. High availability, broadcast
suppression, and rate limiting are also characteristics of access layer devices.

Top-down and bottom-up are both network design models, not network architecture principles. The top-down network design approach is typically used to ensure
that the eventual network build will properly support the needs of the network's use cases. For example, a dedicated customer service call center might first evaluate
communications and knowledgebase requirements prior to designing and building out the call center's network infrastructure. In other words, a top-down design
approach typically begins at the Application layer, or Layer 7, of the OSI reference model and works down the model to the Physical layer, or Layer 1.

In contrast to the top-down approach, the bottom-up approach begins at the bottom of the OSI reference model. Decisions about network infrastructure are made
first, and application requirements are considered last. This approach to network design can often lead to frequent network redesigns to account for requirements
that have not been met by the initial infrastructure.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Cisco Enterprise Architecture Model, pp. 49-50
Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Modularity

QUESTION 73
Which of the following WMM access categories maps to the WLC Gold QoS profile?

A. Voice
B. Video
C. Background
D. Best-Effort

Correct Answer: B
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
The Video Wi-Fi Multimedia (WMM) access category maps to the wireless LAN controller (WLC) Gold profile. WMM is a subset of the 802.11e wireless standard, which adds Quality of Service (QoS) features to the existing wireless standards. WMM was initially created by the Wi-Fi Alliance while the 802.11e proposal was awaiting approval by the Institute of Electrical and Electronics Engineers (IEEE).

The 802.11e standard defines eight priority levels for traffic, numbered from 0 through 7. WMM reduces the eight 802.11e priority levels into four access categories,
which are Voice (Platinum), Video (Gold), Best-Effort (Silver), and Background (Bronze). On WMM-enabled networks, these categories are used by WLCs to
prioritize traffic. Packets tagged as Voice (Platinum) packets are typically given priority over packets tagged with lower-level priorities. Packets that have not been
assigned to a category are treated as though they had been assigned to the Best-Effort (Silver) category.

When a lightweight access point (LAP) receives a frame with an 802.11e priority value from a WMM-enabled client, the LAP ensures that the 802.11e priority value
is within the acceptable limits provided by the QoS policy assigned to the wireless client. After the LAP polices the 802.11e priority value, it maps the 802.11e priority
value to the corresponding Differentiated Services Code Point (DSCP) value and forwards the frame to the WLC. The WLC will then
forward the frame with its DSCP value to the wired network.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 5, Wireless and Quality of Service (QoS), pp. 197-199
Cisco: Enterprise Mobility 4.1 Design Guide: Cisco Unified Wireless QoS

QUESTION 74
Which of the following OSPF areas does not accept Type 3, 4, and 5 summary LSAs?

A. stub area
B. ordinary area
C. backbone area
D. not-so-stubby area
E. totally stubby area

Correct Answer: E
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:

Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
An Open Shortest Path First (OSPF) totally stubby area does not accept Type 3, 4, and 5 summary link-state advertisements (LSAs), which advertise routes outside
the area. These LSAs are replaced by a default route at the area border router (ABR). As a result, routing tables are kept small within the totally stubby area. To
create a totally stubby area, you should issue the area area-id stub no-summary command in router configuration mode.

The backbone area, Area 0, accepts all LSAs. All OSPF areas must directly connect to the backbone area or must traverse a virtual link to the backbone area. To place a router's interfaces in the backbone area, you should issue the network ip-address wildcard-mask area 0 command in router configuration mode.

An ordinary area, which is also called a standard area, accepts all LSAs. Every router in an ordinary area contains the same OSPF link-state database. To configure an ordinary area, you should issue the network ip-address wildcard-mask area area-id command in router configuration mode.

A stub area does not accept Type 5 LSAs, which advertise external routes. Routers inside the stub area will send all packets destined for another area to
the ABR. To configure a stub area, you should issue the area area-id stub command in router configuration mode.

A not-so-stubby area (NSSA) is basically a stub area that contains one or more autonomous system boundary routers (ASBRs). Like stub areas, NSSAs do not
accept Type 5 LSAs. External routes from the ASBR are advertised as Type 7 LSAs and flooded through the NSSA to the ABR, where they are converted back to
Type 5 LSAs. To configure an NSSA, you should issue the area area-id nssa command in router configuration mode. To configure a totally stubby NSSA, which
does not accept summary routes, you should issue the area area-id nssa no-summary command in router configuration mode.
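The area commands described above can be consolidated into a brief sketch (the process ID and area numbers are hypothetical; stub-related commands must be configured on every router in the affected area, and no-summary only on the ABR):

```
Router(config)# router ospf 1
Router(config-router)# area 10 stub              ! stub: blocks Type 5 LSAs
Router(config-router)# area 20 stub no-summary   ! totally stubby: also blocks Type 3/4
Router(config-router)# area 30 nssa              ! NSSA: Type 7 replaces Type 5
Router(config-router)# area 40 nssa no-summary   ! totally stubby NSSA
```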

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, OSPF Stub Area Types, pp. 437-438
Cisco: What Are OSPF Areas and Virtual Links?

QUESTION 75
Which of the following statements best describes NetFlow?

A. NetFlow is a Cisco IOS feature that can collect timestamps of traffic sent between a particular source and destination for the purpose of reviewing in an audit.
B. NetFlow is a protocol that extends the standard MIB data structure and enables a managed device to store statistical data locally.
C. NetFlow is a security appliance that serves as the focal point for security events on a network.
D. NetFlow is used to monitor and manage network devices by collecting data about those devices.

Correct Answer: A
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:

NetFlow is a Cisco IOS feature that can collect timestamps of traffic flowing between a particular source and destination for the purpose of reviewing in an audit.
NetFlow can be used to gather flow-based statistics, such as packet counts, byte counts, and protocol distribution. A device configured with NetFlow examines
packets for select Layer 3 and Layer 4 attributes that uniquely identify each traffic flow. The data gathered by NetFlow is typically exported to management software.
You can then analyze the data to facilitate network planning, customer billing, and traffic engineering. A traffic flow is defined as a series of packets with the same
source IP address, destination IP address, protocol, and Layer 4 information. Although NetFlow does not use Layer 2 information, such as a source Media Access
Control (MAC) address, to identify a traffic flow, the input interface on a switch will be considered when identifying a traffic flow. Each NetFlow-enabled device
gathers statistics independently of any other device; NetFlow does not have to run on every router in a network in order to produce valuable data for an audit. In
addition, NetFlow is transparent to the existing network infrastructure and does not require any configuration changes in order to function.
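As a sketch (the interface name and collector address are hypothetical), traditional NetFlow can be enabled on a Cisco IOS router and its data exported to a collector as follows:

```
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip flow ingress                              ! collect flows arriving here
Router(config)# ip flow-export destination 192.0.2.10 9996      ! hypothetical collector
Router(config)# ip flow-export version 9
Router# show ip cache flow                                      ! view collected flow statistics
```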

Simple Network Management Protocol (SNMP) is used to monitor and manage network devices by collecting data about those devices. The data is stored on each
managed device in a data structure known as a Management Information Base (MIB). Three versions of SNMP currently exist: SNMPv1, SNMPv2, and SNMPv3.
SNMPv1 and SNMPv2 do not provide authentication, encryption, or message integrity. Thus access to management information is based on a simple password
known as a community string; the password is sent as plain text with each SNMP message. If an attacker intercepts a message, the attacker can view the password
information. SNMPv3 improves upon SNMPv1 and SNMPv2 by providing encryption, authentication, and message integrity to ensure that the messages are not
viewed or tampered with during transmission.

Remote Monitoring (RMON) and RMON2 are protocols that extend the standard MIB data structure and enable a managed device to store statistical data locally.
Because an RMON-capable device can store its own statistical data, the number of queries by a management station is reduced. RMON agents use SNMP to
communicate with management stations. Therefore, RMON does not need to implement authentication, encryption, or message integrity methods.

Cisco Security Monitoring, Analysis, and Response System (CS-MARS) is a security appliance that serves as the focal point for security events on a network. CS-
MARS can discover the topology of the network and the configurations of key network devices, such as Cisco security devices, third-party network devices, and
applications. Because CS-MARS has a more comprehensive view of the network than individual network security devices have, CS-MARS can identify false
positives and facilitate the mitigation of some types of security issues. For example, once CS-MARS has identified a new Intrusion Prevention System (IPS)
signature, it can distribute this signature to all of the relevant IPS devices on the network.

Reference:
Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: NetFlow Overview

QUESTION 76
Which of the following processes is a component of the Manage phase in the Cisco PBM Design Lifecycle?

A. assessment
B. validation
C. deployment
D. optimization
E. migration

Correct Answer: D
Section: Design Methodologies Explanation
Explanation

Explanation/Reference:
Section: Design Methodologies Explanation

Explanation:
The optimization process is a component of the Manage phase in the Cisco Plan, Build, Manage (PBM) Design Lifecycle. The PBM Design Lifecycle is a newer
methodology designed to streamline the concepts from Cisco's older design philosophy: the Prepare, Plan, Design, Implement, Operate, and Optimize (PPDIOO)
Design Lifecycle. As the name implies, the PBM Design Lifecycle is divided into three distinct phases: Plan, Build, and Manage.

The Plan phase of the PBM Design Lifecycle consists of the following three processes:
Strategy and analysis
Assessment
Design

The purpose of the strategy and analysis process is to generate proposed improvements to an existing network infrastructure with the overall goal of increasing an
organization's return on investment (ROI) from the network and its support staff. The assessment process then examines the proposed improvements from the
strategy and analysis process and determines whether the improvements comply with organizational goals and industry best practices. In addition, the assessment
process identifies potential deficiencies that infrastructure changes might cause in operational and support facilities. Finally, the design process produces a network
design that meets current organizational objectives while maintaining resiliency and scalability.

The Build phase of the PBM Design Lifecycle consists of the following three processes:
Validation
Deployment
Migration

The purpose of the validation process is to implement the infrastructure changes outlined in the design process of the Plan phase and to verify that the
implementation meets the organizational needs as specified by the network design. The validation process implements the network design in a controlled
environment such as in a lab or staging environment. Once the network design has been validated, the purpose of the deployment process is to implement the
network design in a full-scale production environment. Finally, the purpose of the migration process is to incrementally transition users, devices, and services to the
new infrastructure as necessary.

The Manage phase of the PBM Design Lifecycle consists of the following four processes:
Product support
Solution support
Optimization
Operations management

The product support process addresses support for specific hardware, software, or network products. Cisco SMARTnet is an example of a component of the product
support process. By contrast, solution support is focused on the solutions that hardware, software, and network products provide for an organization. Cisco Solution
Support is the primary component of the solution support process. Cisco Solution Support serves as the primary point of contact for Cisco solutions, leverages
solution-focused expertise, coordinates between multiple vendors for complex solutions, and manages each case from inception to resolution. The optimization
process is concerned with improving the performance, availability, and resiliency of a network implementation. It also addresses foreseeable changes and
upgrades, which reduces operating costs, mitigates risk, and improves return on investment (ROI). The operations management process addresses the ongoing management of the network infrastructure. It includes managed solutions for collaboration, data center, security, and general network services.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 1, Cisco Design Lifecycle: Plan, Build, Manage, pp. 9-12
Cisco: Services: Portfolio

QUESTION 77
You are designing a routed access layer for a high availability campus network that will be deployed using only Cisco devices.

Which of the following are most likely to result in fast and deterministic recovery when a path to a destination becomes invalid? (Choose two.)

A. EIGRP
B. OSPF
C. RIP
D. redundant equal-cost paths to the destination
E. redundant unequal-cost paths to the destination
F. a single path to the destination

Correct Answer: AD
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Of the available choices, Enhanced Interior Gateway Routing Protocol (EIGRP) and redundant equal-cost paths to the destination are most likely to result in fast and
deterministic recovery when a path to a destination becomes invalid. Both Open Shortest Path First (OSPF) and the Cisco-developed EIGRP are dynamic routing
protocols and are capable of fast convergence. However, on a network that contains only Cisco routers, EIGRP is typically simpler to deploy than OSPF and can
converge faster than OSPF because of the feasible successors stored in the EIGRP topology database.

One means of optimizing a routing design is to create redundant equal-cost paths between devices because such a design promotes fast and deterministic recovery
when a path becomes invalid. When either EIGRP or OSPF has a redundant equal-cost path to a destination, all of the new path calculation occurs on the local
device if one of the paths becomes unavailable. If the device has no redundant equal-cost paths, the routing protocol must rely on information from neighboring
devices and calculate a new path.
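For illustration (the autonomous system number and network statement are hypothetical), enabling EIGRP on a routed access switch is brief, and equal-cost uplinks are installed automatically:

```
Switch(config)# router eigrp 100
Switch(config-router)# network 10.0.0.0 0.255.255.255
! maximum-paths defaults to 4, so both equal-cost uplinks are installed in the
! routing table; if one fails, traffic shifts to the survivor without waiting
! for a new route computation from neighboring devices
```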

Routing Information Protocol (RIP) does not offer fast recovery. RIP sends out routing updates every 30 seconds, so convergence is relatively slow. In addition, RIP
relies on hold-down timers, which further slow down convergence time.

The amount of time required for a routing protocol to detect the loss of a forwarding path and to calculate a new best path can both affect convergence time. In
addition, the amount of time it takes for the Cisco Express Forwarding (CEF) table to populate with routing updates can affect the speed of convergence. It is therefore important when designing a network to ensure that the routing design is an optimized design.

Reference:
Cisco: High Availability Campus Network Design-Routed Access Layer using EIGRP or OSPF: Route Convergence

QUESTION 78
Which of the following queuing methods provides strict-priority queues and prevents bandwidth starvation?

A. CQ
B. PQ
C. LLQ
D. WFQ
E. FIFO
F. CBWFQ

Correct Answer: C
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
Low-latency queuing (LLQ) provides strict-priority queues and prevents bandwidth starvation. LLQ supports the creation of up to 64 user-defined traffic classes as
well as one or more strict-priority queues that can be used specifically for delay-sensitive traffic, such as voice and video traffic. Each strict-priority queue can use up
to the maximum bandwidth available but can only use its guaranteed minimum bandwidth when other queues have traffic to send, thereby avoiding bandwidth
starvation. Cisco recommends limiting the strict-priority queues to a total of 33 percent of the link capacity. Because LLQ can provide guaranteed bandwidth to
delay-sensitive packets, such as Voice over IP (VoIP) packets, without monopolizing the available bandwidth on a link, LLQ is recommended for handling voice,
video, and mission-critical traffic.

First-in-first-out (FIFO) queuing does not provide strict-priority queues or prevent bandwidth starvation. By default, Cisco uses FIFO queuing for interfaces faster
than 2.048 Mbps. FIFO queuing requires no configuration, because all packets are arranged into a single queue. As the name implies, the first packet received is
the first packet transmitted without regard for packet type, protocol, or priority.

Although you can implement priority queuing (PQ) on an interface to prioritize voice, video, and mission-critical traffic, you should not use it when lower-priority traffic
must be sent on that interface. PQ arranges packets into four queues: high priority, medium priority, normal priority, and low priority. Queues are processed in order
of priority. As long as the high-priority queue contains packets, no packets are sent from other queues. This can cause bandwidth starvation.

Custom queuing (CQ) is appropriate for voice, video, and mission-critical traffic, but it can be difficult to balance the queues to avoid bandwidth starvation of lower-
priority queues. CQ is a form of weighted round robin (WRR) queuing. With round robin (RR) queuing, you configure multiple queues of equal priority and you
assign traffic to each queue. Because each queue has equal priority, each queue takes turns sending traffic over the interface. With WRR queuing, you can assign
a weight value to each queue whereby each queue can send a number of packets relative to its weight value. CQ allows you to configure each queue with a
specific byte value whereby each queue can send that many bytes before the next queue can send traffic.

Although weighted fair queuing (WFQ) can be used for voice, video, and mission-critical traffic, it does not provide the bandwidth guarantees or the strict-priority
queues provided by LLQ. WFQ is used by default on Cisco routers for serial interfaces at 2.048 Mbps or lower. WFQ addresses the jitter and delay problems
inherent with FIFO queuing, and it addresses the bandwidth starvation problem inherent with PQ. Traffic flows are identified by WFQ based on source and
destination IP addresses, port number, protocol number, and Type of Service (ToS). Although WFQ is easy to configure, it is not supported on high-speed links.

Class-based WFQ (CBWFQ) can be used for voice, video, and mission-critical traffic; however, it does not provide the delay guarantees provided by LLQ, because
CBWFQ does not provide support for strict-priority queues. CBWFQ improves upon WFQ by enabling the creation of up to 64 custom traffic classes, each with a
guaranteed minimum bandwidth. Bandwidth can be allocated as a value in Kbps, by a percentage of bandwidth, or by a percentage of the remaining bandwidth.
Unlike with PQ, bandwidth starvation does not occur with CBWFQ.
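
The relationship between CBWFQ and LLQ can be sketched in configuration: an LLQ policy is essentially a CBWFQ policy in which at least one class uses the priority command instead of the bandwidth command. The class names, DSCP values, percentages, and interface below are illustrative assumptions, not values from this question:

```
class-map match-all VOICE
 match dscp ef
class-map match-all CRITICAL-DATA
 match dscp af31
!
policy-map WAN-EDGE
 class VOICE
  ! strict-priority queue, policed to 33 percent under congestion
  priority percent 33
 class CRITICAL-DATA
  ! CBWFQ class with a guaranteed minimum bandwidth
  bandwidth percent 20
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

Policing the priority class to its configured percentage is what prevents the strict-priority queue from starving the other classes.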

Reference:
CCDA 200-310 Official Cert Guide, Chapter 6, Low-Latency Queuing, p. 235
Cisco: Enterprise QoS Solution Reference Network Design Guide: Queuing and Dropping Principles
Cisco: Congestion Management Overview: Low Latency Queueing

QUESTION 79
Which of the following are not true of the access layer of a hierarchical design? (Choose three.)

A. It provides address summarization.
B. It aggregates LAN wiring closets.
C. It isolates the distribution and core layers.
D. It performs Layer 2 switching.
E. It performs NAC for end users.

Correct Answer: ABC


Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
The access layer typically performs Open Systems Interconnection (OSI) Layer 2 switching and Network Admission Control (NAC) for end users. The access layer
is the network hierarchical layer where end-user devices connect to the network. Port security and Spanning Tree Protocol (STP) toolkit features like PortFast are
typically implemented in the access layer.
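
As an illustration, a typical access layer port might combine these features as follows; the interface, VLAN, and port-security values are hypothetical:

```
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 10
 ! limit the port to a small number of learned MAC addresses
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 ! PortFast and BPDU guard for end-user ports
 spanning-tree portfast
 spanning-tree bpduguard enable
```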

The distribution layer of a hierarchical design, not the access layer, provides address summarization, aggregates LAN wiring closets, and aggregates WAN
connections. The distribution layer is used to connect the devices at the access layer to those in the core layer. Therefore, the distribution layer isolates the access
layer from the core layer. In addition to these features, the distribution layer can also be used to provide policy-based routing, security filtering, redundancy, load
balancing, Quality of Service (QoS), virtual LAN (VLAN) segregation of departments, inter-VLAN routing, translation between types of network media, routing
protocol redistribution, and more.

The core layer of a hierarchical design, not the access layer, is also known as the backbone layer. The core layer is used to provide connectivity to devices
connected through the distribution layer. In addition, it is the layer that is typically connected to enterprise edge modules. Cisco recommends that the core layer
provide fast transport, high reliability, redundancy, fault tolerance, low latency, limited diameter, and QoS. However, the core layer should not include features that
could inhibit CPU performance. For example, packet manipulation that results from some security, QoS, classification, or inspection features can be a drain on
resources.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Access Layer, pp. 44-46
Cisco: High Availability Campus Network Design-Routed Access Layer using EIGRP or OSPF: Hierarchical Design

QUESTION 80
View the Exhibit.

You administer the network shown above. All the routers run EIGRP. Automatic summarization is disabled throughout the network. You want to optimize the routing
tables where possible.

On which routers should you enable automatic summarization? (Choose three.)


A. RouterA
B. RouterB
C. RouterC
D. RouterD
E. RouterE

Correct Answer: BCE


Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
You should enable automatic summarization on RouterB, RouterC, and RouterE. A summary route is used to advertise a group of contiguous networks as a single
route, thus reducing the size of the routing table. Some routing protocols, such as Enhanced Interior Gateway Routing Protocol (EIGRP) and Routing Information
Protocol version 2 (RIPv2), automatically summarize routes on classful network boundaries.

RouterB will advertise a 10.0.0.0/8 summary route to RouterE, and RouterE will advertise the same summary route to the other routers on the network. Because no
other router on the network contains any part of the 10.0.0.0/8 Class A address space, all other routers will send all traffic destined for the 10.0.0.0/8 network to
RouterE, which will route the traffic to RouterB.

RouterC will advertise the 192.168.0.0/24 network to RouterE. Because the other routers on the network do not contain any part of the 192.168.0.0/24 Class C
address space, they will send all traffic destined for the 192.168.0.0/24 network to RouterE, which will route the traffic to RouterC. The point-to-point links between
routers belong to address spaces that do not overlap with each other or with the 192.168.0.0/24 network.

When RouterE receives the 172.16.1.0/24 route from RouterA and the 172.16.2.0/24 route from RouterD, RouterE will advertise a summarized 172.16.0.0/16 route
to RouterB and RouterC. Because RouterB and RouterC do not contain any part of the 172.16.0.0/16 address space, they will send all traffic destined for the
172.16.0.0/16 network to RouterE. RouterE will then route the traffic to the appropriate next-hop router.

You should not enable automatic summarization on RouterA and RouterD. Automatic summarization can cause problems when classful networks are discontiguous
within a network topology. A discontiguous subnet exists when a summarized route advertises one or more subnets that should not be reachable through that route.
Therefore, when discontiguous networks in the same subnet exist in a topology, you should disable automatic summarization with the no auto-summary command.
When you disable automatic summarization, the routing protocol can advertise the actual networks instead of the classful summary. The network diagram shows
that both RouterA and RouterD are configured with different parts of the 172.16.0.0/16 Class B address space. If automatic summarization were enabled on these
routers, RouterA and RouterD would both advertise the 172.16.0.0/16 summary route to RouterE.
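
Automatic summarization is toggled under the EIGRP process with the auto-summary command. The AS number and network statements below are assumptions, since the exhibit does not show them:

```
! RouterB: advertise 10.0.0.0/8 as a single classful summary
router eigrp 100
 network 10.0.0.0
 auto-summary
!
! RouterA: keep advertising the actual 172.16.1.0/24 subnet
! rather than the 172.16.0.0/16 classful summary
router eigrp 100
 network 172.16.1.0 0.0.0.255
 no auto-summary
```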

Reference:
CCDA 200-310 Official Cert Guide, Chapter 10, EIGRP Design, p. 404
CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458
Cisco: EIGRP Commands: autosummary (EIGRP)

QUESTION 81
Which of the following is a hierarchical routing protocol that does not support automatic summarization?

A. RIPv1
B. RIPv2
C. OSPF
D. EIGRP

Correct Answer: C
Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Open Shortest Path First (OSPF) is a hierarchical, link-state routing protocol that does not support automatic summarization. However, OSPF can be configured to
summarize routes at border routers or by using redistribution summarization. OSPF divides an autonomous system (AS) into areas. These areas can be used to
limit routing updates to one portion of the network, thereby keeping routing tables small and update traffic low. Only OSPF routers in the same hierarchical area
form adjacencies. Hierarchical design provides for efficient performance and scalability. Although OSPF is more difficult to configure, it converges more quickly than
most other routing protocols.
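
Because OSPF has no automatic summarization, summaries must be configured manually. A sketch of the two manual options mentioned above, with an illustrative process ID, area, and prefixes:

```
router ospf 1
 ! summarize area 1 routes advertised into other areas (on an ABR)
 area 1 range 172.16.0.0 255.255.0.0
 ! summarize redistributed external routes (on an ASBR)
 summary-address 10.0.0.0 255.0.0.0
```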

Enhanced Interior Gateway Routing Protocol (EIGRP) is a hybrid routing protocol that combines the best features of distance-vector and link-state routing protocols.
Unlike OSPF, EIGRP supports automatic summarization and can summarize routes on any EIGRP interface. However, both OSPF and EIGRP converge faster than
other routing protocols and support manual configuration of summary routes.

Routing Information Protocol version 1 (RIPv1) and RIPv2 are not hierarchical routing protocols. RIPv1 and RIPv2 are distance-vector routing protocols that use hop
count as a metric. By default, RIP sends out routing updates every 30 seconds, and the routing updates are propagated to all RIP routers on the network.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, OSPFv2 Summary, p. 439
Cisco: Open Shortest Path First

QUESTION 82
Which of the following statements is not true?

A. The access layer should not contain physically connected hosts.
B. The access layer provides NAC.
C. The core layer should provide fast convergence.
D. The core layer should provide high resiliency.
E. The distribution layer provides inter-VLAN routing.
F. The distribution layer provides route filtering.

Correct Answer: A
Section: Enterprise Network Design Explanation
Explanation

Explanation/Reference:
Section: Enterprise Network Design Explanation

Explanation:
The access layer should contain physically connected hosts because it is the tier at which end users connect to the network. The access layer serves as a media
termination point for endpoints such as servers and hosts. Because access layer devices provide access to the network, the access layer is the ideal place to
perform user authentication.

The hierarchical model divides the network into three distinct components:
Core layer
Distribution layer
Access layer

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that prevents hosts from accessing the network if they do not comply with
organizational requirements, such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically discovering and inventorying devices
attached to the LAN.

The core layer of the hierarchical model is primarily associated with low latency and high reliability. It is the only layer of the model that should not contain physically
connected hosts. As the network backbone, the core layer provides fast convergence and typically provides the fastest switching path in the network. The
functionality of the core layer can be collapsed into the distribution layer if the distribution layer infrastructure is sufficient to meet the design requirements. For
example, in a small enterprise campus implementation, a distinct core layer may not be required, because the network services normally provided by the core layer
are provided by a collapsed core layer instead.

The distribution layer provides route filtering and inter-VLAN routing. The distribution layer serves as an aggregation point for access layer network links. In addition,
the distribution layer can contain connections to physical hosts. Because the distribution layer is the intermediary between the access layer and the core layer, the
distribution layer is the ideal place to enforce security policies, to provide Quality of Service (QoS), and to perform tasks that involve packet manipulation, such as
routing. Summarization and next-hop redundancy are also performed in the distribution layer.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 2, Access Layer, pp. 44-46
Cisco: Campus Network for High Availability Design Guide: Access Layer

QUESTION 83
Which of the following methods is always used by a new LAP to discover a WLC?

A. broadcast
B. OTAP
C. DHCP
D. DNS
E. NVRAM

Correct Answer: C
Section: Considerations for Expanding an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Considerations for Expanding an Existing Network Explanation

Explanation:
When you add a lightweight access point (LAP) to a wireless network that uses Lightweight Access Point Protocol (LWAPP), the LAP goes through a sequence of
steps to discover and register with a wireless LAN controller (WLC) on the network. Because a new LAP has not been configured with a static IP address, the LAP
will first attempt to obtain an address from a Dynamic Host Configuration Protocol (DHCP) server. When the LAP receives an IP address, the LAP scans the DHCP
server response for option 43, which identifies the address of a WLC. Although this method is always the first action taken by a new LAP when it attempts to
discover a WLC, the LAP will also use other methods.
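
On a Cisco IOS DHCP server, the WLC address is conveyed in option 43 as a type-length-value string. The pool name, subnet, and WLC address below are hypothetical (10.10.10.5 encoded as 0a0a0a05):

```
ip dhcp pool LAP-POOL
 network 10.1.1.0 255.255.255.0
 default-router 10.1.1.1
 ! f1 = type 241, 04 = length (one WLC address), 0a0a0a05 = 10.10.10.5
 option 43 hex f104.0a0a.0a05
```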

When the LAP receives an IP address from the DHCP server, the LAP can also receive other configuration parameters, such as the IP address of a Domain Name
System (DNS) server. If a DNS server is configured, the LAP will attempt to resolve the host name CISCO-LWAPP-CONTROLLER.localdomain, where localdomain
is the local DNS domain suffix in use. Once the LAP has resolved the name to one or more IP addresses, the LAP will send an LWAPP discovery
message to all of the IP addresses simultaneously.

Alternatively, a LAP can use Over-the-Air-Provisioning (OTAP) to discover a WLC. OTAP is enabled by default on a new LAP. With OTAP, LAPs periodically
transmit neighbor messages that contain the IP address of a WLC. A new LAP that has OTAP enabled can scan the wireless network for neighbor messages until
the LAP locates the IP address of a local WLC. Once the LAP has discovered the IP address of a WLC, the LAP will send a Layer 3 LWAPP discovery request
directly to the WLC.

If Layer 2 LWAPP mode is supported, a new LAP can attempt to locate a WLC by broadcasting a Layer 2 LWAPP discovery request message. If there are no
WLCs on that network segment or if a WLC does not respond to the Layer 2 broadcast, the LAP will then broadcast a Layer 3 LWAPP discovery request message.

A new LAP will not have the address of a WLC stored in nonvolatile random access memory (NVRAM) by default. However, you can configure a LAP with the IP
address of a WLC to facilitate the discovery of a WLC when the LAP is installed. In addition, if a LAP has ever joined with a WLC, it may store the previously
discovered WLC IP address as a primary, secondary, or tertiary WLC.

Reference:
Cisco: Lightweight AP (LAP) Registration to a Wireless LAN Controller (WLC): Register the LAP with the WLC

QUESTION 84
Which of the following are BGP attributes that are used to determine best path? (Choose three.)

A. confederation
B. local preference
C. route reflector
D. MED
E. weight

Correct Answer: BDE


Section: Addressing and Routing Protocols in an Existing Network Explanation
Explanation

Explanation/Reference:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Explanation:
Local preference, multi-exit discriminator (MED), and weight are all Border Gateway Protocol (BGP) attributes that are used to determine the best path to a
destination. The following list displays the criteria used by BGP for path selection:
1. Highest weight
2. Highest local preference
3. Locally originated paths over externally originated paths
4. Shortest autonomous system (AS) path
5. Lowest origin type
6. Lowest MED
7. External BGP (eBGP) paths over internal BGP (iBGP) paths
8. Lowest Interior Gateway Protocol (IGP) cost
9. Oldest eBGP path
10. Lowest BGP router ID (RID)

When determining the best path, a BGP router first chooses the route with the highest weight. The weight value is significant only to the local router; it is not
advertised to neighbor routers.

When weight values are equal, a BGP router chooses the route with the highest local preference. The local preference value is advertised to iBGP neighbor
routers to influence routing decisions made by those routers.

When local preferences are equal, a BGP router chooses locally originated paths over externally originated paths. Locally originated paths that have been created
by issuing the network or redistribute command are preferred over locally originated paths that have been created by issuing the aggregate-address command.

If multiple paths to a destination still exist, a BGP router chooses the route with the shortest AS path attribute. The AS path attribute contains a list of the AS
numbers (ASNs) that a route passes through.

If multiple paths have the same AS path length, a BGP router chooses the lowest origin type. An origin type of i, which is used for IGPs, is preferred over an origin
type of e, which is used for Exterior Gateway Protocols (EGPs). These origin types are preferred over an origin type of ?, which is used for incomplete routes where
the origin is unknown or the route was redistributed into BGP.

If origin types are equal, a BGP router chooses the route with the lowest MED. If MED values are equal, a BGP router chooses eBGP routes over iBGP routes. If
there are multiple eBGP paths, or multiple iBGP paths if no eBGP paths are available, a BGP router chooses the route with the lowest IGP metric to the next-hop
router. If IGP metrics are equal, a BGP router chooses the oldest eBGP path, which is typically the most stable path.

Finally, if route ages are equal, a BGP router chooses the path that comes from the router with the lowest RID. The RID can be manually configured by issuing the
bgp router-id command. If the RID is not manually configured, the RID is the highest loopback IP address on the router. If no loopback address is configured, the
RID is the highest IP address from among a router's available interfaces.
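
The attributes near the top of the decision process are the ones most commonly manipulated in configuration. A sketch using the commands mentioned above; the AS numbers, neighbor addresses, and values are illustrative only:

```
router bgp 65001
 bgp router-id 192.0.2.1
 ! weight is local to this router and is never advertised
 neighbor 203.0.113.2 weight 200
 ! local preference is advertised to iBGP peers
 bgp default local-preference 150
 ! MED is advertised to an eBGP neighbor via a route map
 neighbor 203.0.113.2 route-map SET-MED out
!
route-map SET-MED permit 10
 set metric 50
```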

Neither a confederation nor a route reflector is a BGP attribute. Confederations and route reflectors are both a means of mitigating performance issues that arise
from large, full-mesh iBGP configurations. A full-mesh configuration enables each router to learn each iBGP route independently without passing through a
neighbor. However, a full-mesh configuration requires the most administrative effort to configure. A confederation enables an AS to be divided into discrete units,
each of which acts like a separate AS. Within each confederation, the routers must be fully meshed unless a route reflector is established. A route reflector can be
used to pass iBGP routes between iBGP routers, eliminating the need for a full-mesh configuration. However, it is important to note that route reflectors advertise
best paths only to route reflector clients. In addition, if multiple paths exist, a route reflector will always advertise the exit point that is closest to the route reflector.

Reference:
CCDA 200-310 Official Cert Guide, Chapter 11, BGP Attributes, Weight, and the BGP Decision Process, pp. 449-455
CCDA 200-310 Official Cert Guide, Chapter 11, Route Reflectors, pp. 446-448
CCDA 200-310 Official Cert Guide, Chapter 11, Confederations, pp. 448-449
Cisco: BGP Best Path Selection Algorithm
Cisco: Integrity Checks: IBGP Neighbors Not Fully Meshed
