
Trouble Ticket Labs

Posted on July 21st by Joey in CCNP 642-832 TSHOOT.

Here is what I'm doing…

I’m going through the CCNP TSHOOT Lab Manual (Ciscopress Networking Academy)

https://learningnetwork.cisco.com/servlet/JiveServlet/previewBody/10184-102-1-37275/CCNP%20TSHOOT%206.0%20SLM.pdf

and I’m happy to say I was able to get the baseline Lab setup fully-working! From the DHCP,
DNS, Syslog, User / Guest VLANs, – switches and routers all fully functioning the way it is
supposed to. Awesome. It took some troubleshooting, but for the most part it was working from
the get-go.

Now I’m answering the Trouble Tickets, which call for TT cfg’s. With a little help from google,
I found this guy, – sharing the Trouble Ticket config files here

http://shadez.info/share/bogdan/Cisco/CCNP3/

http://shadez.info/share/bogdan/Cisco/CCNP3/TSHOOTv6-0_Lab-TT-Cfgs-ALS1/

He’s Awesome, and I would like to Thank him. SO THANKS! (granted he will probably never
read this, but none-the-less).

TASK 1: TROUBLE TICKET LAB 4-1 TT-A

For this task, they said that ALS1 was the device that had been replaced and my junior colleague could not get it online. The only config I loaded was for ALS1, not the rest. At the time I didn't realize all the device configs were being shared (above); I had thought only ALS1 was being shared. However, when I looked at the configs for the other devices, I found the same problems with DNS and the ip host command used to resolve those DNS entries.

Problems found:

 – ALS1: Spanning-tree mode set to MST, while the rest of the switches are running Rapid PVST+
 – ALS1: DNS
 – ALS1: VLAN 100 – no ip route-cache under the SVI; took it out.
 – ALS1: VLAN 100 – name not configured. Added it, and SVI 100 went to up/up. (See the sketch below.)
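
A minimal sketch of the ALS1 cleanup. The MGMT name matches the baseline; the interface addressing is left alone here:

ALS1(config)# spanning-tree mode rapid-pvst
ALS1(config)# vlan 100
ALS1(config-vlan)# name MGMT
ALS1(config-vlan)# exit
ALS1(config)# interface vlan 100
ALS1(config-if)# ip route-cache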

TASK 2: TROUBLE TICKET LAB 4-1 TT-B


In this Lab, the problem is with the switches, so I just loaded the configs for the switches (duh).

Problems found:

 – ALS1: DNS, Po1 and Po2 set to dot1q, VLAN 100 not added.
 – DLS1: DNS, Po1 set to ISL
 – DLS2: DNS, Po2 set to ISL (see the sketch below for the encapsulation fix)
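
The encapsulation fix, roughly; the vlan 100 creation from the previous ticket also applies on ALS1. I'm assuming the port-channel interfaces take the trunk command directly (otherwise apply it to the member ports):

DLS1(config)# interface port-channel 1
DLS1(config-if)# switchport trunk encapsulation dot1q
DLS2(config)# interface port-channel 2
DLS2(config-if)# switchport trunk encapsulation dot1q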

TASK 3: TROUBLE TICKET LAB 4-1 TT-C

This one was easy. It was almost not worth the time setting it up… not really, it was still pretty awesome. The scenario is that an external consultant is on the GUEST VLAN and needs access to SRV1.

It was the usual of what I have seen so far: DNS not being configured, which surprisingly didn't goof anything up. VLAN 100 was not configured on ALS1. The SVI is always configured, just not the "vlan 100" command and then the "name MGMT" command. Configured all that stuff and got SVI 100 up/up on ALS1.

The main problem was that VLAN 30, which is the GUEST VLAN, was not trunking on any of the links on DLS2. I set it up to trunk, and our external consultant is now accessing the resources on SRV1.
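
The fix boiled down to allowing VLAN 30 on the DLS2 trunks, something along these lines (the second interface is a placeholder for whichever other trunk needed it):

DLS2(config)# interface port-channel 2
DLS2(config-if)# switchport trunk allowed vlan add 30
DLS2(config-if)# interface <other DLS2 trunk>
DLS2(config-if)# switchport trunk allowed vlan add 30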

TASK 1: TROUBLE TICKET LAB 4-2 TT-A

I ended up spending way too much time on this lab. Basically, ALS1 went down and was replaced. A backup configuration was sent to SRV1 and did not work. My job is to make it work.

Initially, DHCP was working for the clients in VLAN 10, which is the VLAN off ALS1. The clients were able to ping the server at 10.1.50.1, and all devices in the network except ALS1 were able to send a backup copy of their running configs to TFTP. This should have been a dead giveaway.

First, I saw this output:


…immediately beginning my demise into shooting from the hip.

I spent far too long trying everything in the world to fix the spanning-tree pruning, even though there were no explicit configs in the running-config that would have caused it. Welp, after spending far too long on this lab, I finally snapped: the IP address for the VLAN 100 SVI was set to 10.10.100.1! What? No way, and here I thought it was some insane little dead-timer mismatch. Believe me, I started reaching after the first hour.

I changed the SVI to 10.1.100.1 and all was right with the data networking world. Like I said, I should have snapped to this in the first few minutes of troubleshooting, as the end users could still traverse the links, even the ones connected to ALS1. I saw the spanning-tree pruning and ran with it, shooting from my hip the whole way down the rabbit hole.
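
The actual fix was a one-liner; the /24 mask is an assumption based on the lab addressing:

ALS1(config)# interface vlan 100
ALS1(config-if)# ip address 10.1.100.1 255.255.255.0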

TASK 2: TROUBLE TICKET LAB 4-2 TT-B

First of all: awesome lab! In this lab, HSRP is running on both distribution layer switches. DLS1 is the active router for VLAN 10 and is supposed to fail over to DLS2 during an outage. Basically, when DLS1 fails, there is no fail-over to DLS2. Well, it fails over and DLS2 becomes the active router, but there is no connectivity, connectivity here being to the internet, which goes through either router R1 or R3 to R2's loopback, the simulated internet connection.

My immediate thought was DHCP, especially after the last lab. I figured the default gateway for VLAN 10 would be set to DLS1's IP address and not the virtual IP, and I was correct. I changed it, and still no go. I also changed a few things around with the HSRP standby groups under the SVIs, like changing the priority to 105 instead of 110, which really doesn't matter because tracking was not enabled. Nonetheless, I enabled tracking on the active DLS1 so that when its uplink fails, its priority drops below DLS2's and DLS2 takes over.

There were also some group mismatches, which I fixed. So check this out: DHCP is only running on DLS1, and the clients are using DHCP. If DLS1 fails, then so does DHCP. I enabled DHCP on DLS2, tested the fail-over, and it worked. Again, awesome lab, and I totally got some sort of satisfaction out of solving that one.
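
For reference, the pieces I ended up touching looked roughly like this; the group number, virtual IP 10.1.10.254, pool name, and tracked uplink are assumptions for illustration, not the exact lab values:

DLS1(config)# interface vlan 10
DLS1(config-if)# standby 10 ip 10.1.10.254
DLS1(config-if)# standby 10 priority 110
DLS1(config-if)# standby 10 preempt
DLS1(config-if)# standby 10 track <uplink interface>

DLS2(config)# ip dhcp excluded-address 10.1.10.254
DLS2(config)# ip dhcp pool VLAN10-POOL
DLS2(dhcp-config)# network 10.1.10.0 255.255.255.0
DLS2(dhcp-config)# default-router 10.1.10.254

The important parts are that default-router hands out the HSRP virtual IP rather than DLS1's own address, and that a DHCP pool exists on both distribution switches.
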
TASK 3: TROUBLE TICKET LAB 4-2 TT-C

The scenario: your colleague started configuring MD5 authentication between the HSRP routers, but left on vacation before finishing. Man, I cannot believe he just left me hanging like that. Anyways, I broke out Cain for this one and cracked the type 7 password encryption. And you guessed it: one side was configured with key-string C1s0 and the other with Clsc0. Changed those to match and all is right in the data networking world of HSRP. Oh, I also added the DHCP configuration to DLS2. Tested, right, and true.
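
A sketch of the matching key-strings, using C1sc0 purely as a stand-in for whatever the intended string was, and the same assumed group 10 from before:

DLS1(config)# interface vlan 10
DLS1(config-if)# standby 10 authentication md5 key-string C1sc0
DLS2(config)# interface vlan 10
DLS2(config-if)# standby 10 authentication md5 key-string C1sc0
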
TASK 1: TROUBLE TICKET LAB 5-1 TT-A (7-22-2012)

The scenario: the company is interested in implementing an IP-based CCTV solution. The powers that be decide to implement a pilot run to show the capabilities. (This scenario is strikingly close to the one Jeremy went through in the NIL labs on CBT Nuggets.)

My job is to make sure VLAN-70-CCTV is trunking and allows connectivity between the office
VLAN and branch workers at R2. I also have to make sure HSRP is set up correctly and allows
for proper fail-over.

Overall it was cake. Some standard IP mismatches on the VLAN 70 SVIs between DLS1 and DLS2, which kept HSRP from communicating. VLAN 70 was not trunking across the links and also needed to be created on ALS1. (Sketch below.)
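
Roughly what the cleanup looked like; the VLAN 70 subnet and SVI addresses shown are assumptions:

ALS1(config)# vlan 70
ALS1(config-vlan)# name CCTV
ALS1(config-vlan)# exit
DLS1(config)# interface port-channel 1
DLS1(config-if)# switchport trunk allowed vlan add 70
DLS1(config-if)# interface vlan 70
DLS1(config-if)# ip address 10.1.70.252 255.255.255.0
DLS1(config-if)# standby 70 ip 10.1.70.254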

Testing the fail-over between DLS1 and DLS2 brought some interesting problems to the surface. First, the CCTV VLAN 70 server is connected to DLS2, which is the active router for HSRP on VLAN 70. If DLS2 completely goes down, all connectivity is lost, as there is no other form of redundancy for the CCTV server.

If all the Fast Ethernet trunk links go down, while leaving the connections to the CCTV server and R3 up, HSRP will fail over and pings end up going across the WAN links. The reason this is interesting is that HSRP router DLS1 will take over; however, with the Ethernet trunk links down, there is no way to tag packets. Thus, the SVI for VLAN 70 on DLS1 is essentially bunk at this point.

So basically, the SVI on DLS1 will not tag any packets across the WAN link; it only works over the Ethernet dot1q trunks. Possibly setting up dot1q subinterfaces on R1 and R3 might fix the problem; however, the way the network is designed, it doesn't provide proper redundancy and really should be redesigned.

but my job isn’t to redesign it. It’s to solve the next trouble ticket.

TASK 2: TROUBLE TICKET LAB 5-1 TT-B

This was a tricky one. After a small fire took out R1 and R3, my colleagues began getting the replacement routers online. However, they were unsuccessful.

The configs were flipped: R1's config was on R3 and R3's was on R1. I thought it would be fun to restore the configs from the TFTP server rather than just copy and paste. I was able to get R3, which had R1's config, talking to the server at 10.1.50.1 after some basic changes to the FastEthernet interface connecting to DLS2. I then restored R3's config from the TFTP server.

On R1, which had R3's config, I could not get it talking to the server at 10.1.50.1, but I was able to ping what I thought at the time was R3's loopback interface. In reality it was its own loopback, but since I thought it was R3's, I turned R3 into a TFTP server, copied R1's config file from the server, and attempted to serve it from R3. I soon figured out that R1 was not talking to anyone after the first hop: passive-interface was enabled globally for EIGRP, and the no passive-interface command was issued on the wrong interface, so no routes were being exchanged.

I took out the global passive-interface command and immediately regained connectivity to the server at 10.1.50.1. I still used R3 as a TFTP server and served R1's config from it. It worked wonderfully and all is right with the data networking world.

Router(config)# tftp-server flash:Route-cfg alias Router-cfg
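
With that in place on R3, pulling the file down on R1 is just a copy; IOS prompts for the remote host (R3's address) and the source filename:

R1# copy tftp: running-config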

TASK 3: TROUBLE TICKET LAB 5-1 TT-C

This lab was interesting, to say the least. The basics of the lab are ensuring that users get access to the internet by means of a default route. Of course, as you've already guessed, we are not advertising the internal local autonomous system out to the world, but we still need a way to get out to the world, or at least to the ISP in this case, and they can handle the rest.

This is a topic I have been contemplating myself (as I don't know too many people that are really into this stuff, and the ones I do know, I rarely get to speak with. That's probably partly why I have this blog.)

The problem I’ve been having with EIGRP is the “ip default-network X.X.X.X” command
almost never works like it did in the perfect lab situation they have in the ROUTE lab book. The
command never sets the default gateway, – don’t get me started. The OSPF “ip default
information originate” command works like a charm, which is the OSPF equivalent of the
EIGRP “ip default-network” command. However, I’m not using OSPF, I’m using EIGRP and
this lab is using EIGRP. So how do we get those default routes to start from R2 and continue
down to R1 & R3, then DLS1 & DLS2, then ALS1 and the users without applying the route into
the EIGRP routing table?

Seeing how I don’t have the answers for this book, I did the best I could and went with default
routes all the way down the network starting with R2. Of course this worked, but sprang to life
other redundancy issues, like if the link to R3, goes down, which would be the default route if its
up, there is not more default route.
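
What I actually configured was a chain of static defaults pointing up toward R2, something like this (next hops are placeholders):

R1(config)# ip route 0.0.0.0 0.0.0.0 <link toward R2>
R3(config)# ip route 0.0.0.0 0.0.0.0 <link toward R2>
DLS1(config)# ip route 0.0.0.0 0.0.0.0 <link toward R1>
DLS2(config)# ip route 0.0.0.0 0.0.0.0 <link toward R3>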

Lucky for me, the next lab answered some unanswered questions…

TASK 4: TROUBLE TICKET LAB 5-1 TT-D

So, I was sort of disappointed because I really wanted to know how Cisco accomplished the last lab. Well, lo and behold, when I loaded this lab, which is a continuation of the last one, there it was. The answer to all my questions.

They had created a default static route on R2 and redistributed it into EIGRP. Awesome. The default static route pointed out to the ISP, and it is a much simpler alternative to what I had done with the default routes on each router.
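
Cisco's approach, roughly; EIGRP AS 1 as used elsewhere in these labs, with a placeholder next hop and an example seed metric:

R2(config)# ip route 0.0.0.0 0.0.0.0 <ISP-facing next hop>
R2(config)# router eigrp 1
R2(config-router)# redistribute static metric 1544 2000 255 1 1500
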
But one thing still remains: redundancy. There's still no redundancy. If R3, which is DLS2's default route, goes down, DLS2 doesn't know where to go. Granted, if you're on the user's end, you don't notice the R3 failure. However, DLS2 recognizes it and does not have a default route if R3 goes down.

My solution was just to add a default route pointing to DLS1. Now, this solution works; however, it does do some weird things. The hard-coded default route takes over, so even when R3 is up and redistributing the static route from R2, DLS2 does not use the redistributed route automatically. Overall it works, but it brings in some interesting configuration issues, which leads back to the overall redesign of this network for proper redundancy.

I would like to know what Cisco said about this one. UPDATE: They did the same thing, and apparently a route that needed to be advertised in EIGRP wasn't being advertised, so I needed to go in and hard code it.

ANOTHER UPDATE: The network engineers totally dog the default route to the internet, and with good reason. According to them it's a problem here in the States and not in Europe. The only places that don't do the default static route to the internet are financial institutions.

http://packetpushers.net/show-82-security-failures-no-ipv6-no-network-management-another-good-year/

TASK 5: TROUBLE TICKET LAB 5-1 TT-E

This was your standard EIGRP MD5 authentication mismatch.
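
For anyone following along, the usual EIGRP MD5 pattern looks like this; the key chain name, key number, key string, and interface are illustrative, and both ends of the link have to match:

R1(config)# key chain EIGRP-KEYS
R1(config-keychain)# key 1
R1(config-keychain-key)# key-string <shared secret>
R1(config-keychain-key)# exit
R1(config-keychain)# exit
R1(config)# interface <WAN interface>
R1(config-if)# ip authentication mode eigrp 1 md5
R1(config-if)# ip authentication key-chain eigrp 1 EIGRP-KEYS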

A quick side note:

A handy command for seeing routes, sort of like traceroute and other goodies:

TASK 1: TROUBLE TICKET LAB 5-2 TT-A

Scenario: migration from EIGRP to OSPF. OSPF and its areas are running at HQ; EIGRP is running at the branch office. During phase one, some engineers completed the mutual route redistribution; however, they didn't do it right. I just gotta ask, who the heck is doing the hiring here?

It’s pretty much your standard route redistribution flaws. Actually, I was wise to this right away.
Seeing how if anybody has taken the ROUTE Exam, knows one of the sims was like this where
if you didn’t include the subnets command as well as the metrics, your hosed and no external
routes get learned.

Like I said, it was pretty standard: when redistributing OSPF into EIGRP, always include the metric. When redistributing EIGRP into OSPF, always include the subnets keyword.

R1(config)#router eigrp 1
R1(config-router)#redistribute ospf 1 metric 1544 20000 255 255 1500

R1(config)#router ospf 1
R1(config-router)#redistribute eigrp 1 subnets

TASK 2: TROUBLE TICKET LAB 5-2 TT-B

I almost forgot what I did to solve this lab; as soon as I got done I got preoccupied with the Practice Exam Sim here:

http://www.cisco.com/web/learning/le3/le2/le37/le10/tshoot_demo.html

What makes this extremely difficult is that you cannot make changes to the running config, so you can't test your theory as to what is actually wrong. One thing I love about networking is that it's usually: come up with a theory and test it, and you know for a fact whether it works or not.

On the exam, the tester can only diagnose the problem. And honestly, for some of them you can't be entirely sure without testing. Whelp, I'm getting preoccupied just blogging about it.

The lab, however, was your standard OSPF area mismatch. I know I solved it because I used the TFTP server to set up the next labs.

TASK 3: TROUBLE TICKET LAB 5-2 TT-C

This was your standard hello/dead interval mismatch between R3's and DLS2's FE interfaces.
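
Setting the timers back to matching values on the FE link is enough; 10/40 are the Ethernet defaults, and the interface name is a placeholder:

R3(config)# interface <FE toward DLS2>
R3(config-if)# ip ospf hello-interval 10
R3(config-if)# ip ospf dead-interval 40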

TASK 4: TROUBLE TICKET LAB 5-2 TT-D

The scenario: MD5 implementation across the OSPF links. Of course they are doing a test run, and with good reason, because nobody here seems to be competent whatsoever.

In the running config, the MD5 password was already encrypted with the type 7 hash. Simple enough; I broke out Cain and cracked them both. However, to my surprise, no mismatch. The problem ended up being on DLS1: the passive-interface default command was hard coded under the OSPF process, and of course they didn't add the no passive-interface Vlan200 command. I threw that command into the OSPF process, redid the MD5 across the links, adding the "ip ospf authentication message-digest" command as well, and finished up by applying MD5 authentication to the rest of the links.
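
The DLS1 side ended up looking roughly like this; process 1 and Vlan200 are from the lab, and the key is whatever the cracked type 7 string turned out to be:

DLS1(config)# router ospf 1
DLS1(config-router)# no passive-interface Vlan200
DLS1(config-router)# exit
DLS1(config)# interface Vlan200
DLS1(config-if)# ip ospf authentication message-digest
DLS1(config-if)# ip ospf message-digest-key 1 md5 <key>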

That was one hell of a day full of standard mismatches … And all is right in the Data
Networking world.

TASK 1: TROUBLE TICKET LAB 5-3 TT-A (7-23-2012)


These are the BGP trouble tickets. Unfortunately for me, my brain tends to shut off when it hears BGP.

The scenario: R1 is not peering with R2 (the ISP). This was your standard BGP AS number mismatch. R1 was actually configured like the other routing protocols, which is kinda funny. I cleaned up the BGP configuration on R1, and eBGP peering came up immediately with the ISP.

Now, I could have stopped there, because all the lab really called for was fixing the peering problem. But one issue still remains: getting the 10.1.0.0 internal network out to the internet. So I actually decided to go above and beyond on this one and redistributed BGP into EIGRP using proper metrics. I really wasn't sure if this was right, or if it is the right way to accomplish this in a secure environment. Honestly, I don't think this would be good to do in real life. But nonetheless, it worked fine.
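
What that looked like in broad strokes; the AS numbers, neighbor address, and seed metric here are placeholders and example values rather than the exact lab settings:

R1(config)# router bgp <local AS>
R1(config-router)# neighbor <ISP address> remote-as <ISP AS>
R1(config-router)# exit
R1(config)# router eigrp 1
R1(config-router)# redistribute bgp <local AS> metric 1544 2000 255 1 1500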

And to my surprise, the next configuration had the exact same redistribution command hard
coded into EIGRP. I have to say, I was pretty proud of myself at that point.

TASK 2: TROUBLE TICKET LAB 5-3 TT-B

This was a little strange: there is a test VLAN on DLS1 which needs to be accessible through BGP from clients at the ISP branch network. I actually ended up getting stuck on this one and had to look at the configs for the next lab's files, which had a default route and changes to the network being advertised in the BGP configuration on R1.

Something else that made this lab a little tough to solve was the fact that BGP should not be configured on DLS1. I actually even had to make changes to R2 (the ISP) to get the routes to work right. This lab was a little iffy.

TASK 3: TROUBLE TICKET LAB 5-3 TT-C

At this point, I sort of started to get my bearings with BGP and the way Cisco is approaching BGP in these labs.

The ISP is using prefix lists to ensure that customers do not announce routes that have not been officially assigned to them, which means the configuration on R1 needs to be flawless.

I found the static route with the wrong subnet mask, and the same problem with the network being advertised under the BGP configuration. They had the prefix as /24 when it needed to be /27. I changed it and restarted BGP. This did not fix the problem, so I started reading things (books, troubleshooting guides, etc.), went back to it, and what do you know, it worked.

So on this one I was right, and I fixed it; I just had to be a little patient. BGP is usually slow to peer, and I guess it's also slow to install routes once it has peered.

Here is a really good link about BGP origin validation with the Resource Public Key Infrastructure (RPKI), and how bogus route announcements are actually a real problem. The problem is people advertising the wrong BGP AS numbers and IP prefixes on the internet, which leads to network hijacking. It's pretty interesting, but it still doesn't make full sense to me. The reason being that neighbors are not peered/established in BGP automatically, and if ISPs are using prefix lists, I would think that advertising just any BGP AS and actually establishing BGP peers wouldn't be possible.

I realize that BGP will choose the path to a network that looks closest. So if the real BGP AS 65501 is really far away, but another announcement of the same prefix is closer, it will pick the closer one, even if that closer AS is not the real one. So I guess you would have to find an ISP that actually allows you to peer with them, and the chances of that happening would be slim to none, I would think. But the way these guys make it sound, any moron with an internet connection and a router could accomplish this. I doubt it. But who knows? I don't really know. And if anybody did know, it would be these guys.

http://packetpushers.net/show-105-bgp-origin-validation-with-resource-public-key-infrastructure-rpki/

Sharing books and resources.

http://www.cabrillo.edu/~rgraziani/

cisco – perlman

http://netacad.cabrillo.edu/ (Cisco Networking Academy; no joke, it's essentially the courses for CCNP, CCNA, and CCNA Security. I'm pretty happy to have come across this.)

TASK 1: TROUBLE TICKET LAB 6-1 TT-A (7-24-2012)

These are the NAT/PAT and DHCP trouble tickets. For this one everything looked pretty good to me. I mainly focused on the ip nat configuration. I noticed the "ip nat source list 1 pool public-address" line in the running config. I made sure it referenced the correct access list, and for a second even toyed with the idea that it may need to be applied somewhere, like an interface for example, kind of like how a route-map referencing an access list gets applied to an interface to create PBR.

So I ended up looking up the proper syntax for applying the NAT pool, and of course this is usually accomplished by specifying inside or outside in the source list command. So I took "ip nat source list 1 pool public-address" out and applied "ip nat inside source list 1 pool public-address" instead, and voila, it worked. I still wasn't really sure, so I checked my solution, and it was the same one. Good on me.
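
For context, the working shape of the configuration is something like this; the access list contents, pool addresses, mask, and interface names are assumptions:

R1(config)# access-list 1 permit 10.1.0.0 0.0.255.255
R1(config)# ip nat pool public-address <first public IP> <last public IP> netmask <mask>
R1(config)# ip nat inside source list 1 pool public-address
R1(config)# interface <LAN interface>
R1(config-if)# ip nat inside
R1(config-if)# exit
R1(config)# interface <WAN interface>
R1(config-if)# ip nat outside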

TASK 2: TROUBLE TICKET LAB 6-1 TT-B

Before I even started this lab, just from the description of the problem, I saw this coming a mile away. I went straight for the public pool of NAT addresses, and sure enough, only four addresses were being translated. I changed it to the entire public range the company was given by the ISP, leaving the static addresses out of course; I believe it was dot five through dot thirty.

The other way you can solve this lab is to overload the public addresses. Using what I believe and understand PAT to be, you can do this with "ip nat inside source list 1 pool public-address overload".
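
In other words, either widen the pool to the full .5 through .30 range or overload it; both shown here with placeholder public addresses and an assumed /27 mask:

R1(config)# ip nat pool public-address <x.x.x.5> <x.x.x.30> netmask 255.255.255.224
R1(config)# ip nat inside source list 1 pool public-address overload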

TASK 3: TROUBLE TICKET LAB 6-1 TT-C

This one was a pain in the tuckus. I read through the troubleshooting section before starting this lab, which actually kinda made me feel like I cheated, but Cisco recommends that you read this section before starting any of the labs. What I found in the DHCP troubleshooting section was the "ip helper-address X.X.X.X" command, applied in interface configuration mode. I actually wasn't aware of this command.

The command I was aware of, and what I use to forward DHCP, is "ip dhcp relay information option", applied globally. I use that command along with "ip name-server X.X.X.X" to relay DNS with DHCP from an external server. However, I use that DHCP command on switches, 3550s to be exact. When applied on the router, it doesn't work like expected, at least in my experience.

So I used the ip helper-address command on the interface toward the DHCP server, and it still didn't work. I spent far too long trying to figure out what was wrong, and finally I found it. I can't believe I didn't see this before fixing the relay information on the next-hop router: the DHCP pool had the wrong IP network, so I fixed it and everything worked. Then I excluded the default gateway from the pool and all is right in the data networking world. Goodnight.
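
A sketch of the relay and exclusion pieces. 10.1.50.1 is SRV1 from the baseline; the interface and excluded address are placeholders, and the pool/exclusion lines belong on whichever device actually serves DHCP. Note that ip helper-address normally goes on the interface facing the DHCP clients:

Router(config)# interface <client-facing interface>
Router(config-if)# ip helper-address 10.1.50.1
Router(config-if)# exit
Router(config)# ip dhcp excluded-address <default gateway address>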

TASK 1: TROUBLE TICKET LAB 7-1 TT-A (7-25-2012)

These are the performance problem tickets. Right away there is an unnecessarily huge ACL that just permits connections. I took that out, along with the access-group where it was applied. IP CEF was not enabled either; you are supposed to enable CEF globally and CEF switching under the interface. I honestly didn't catch this.
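
The intended fix, as I understand it, is roughly this; the ACL number and interface are placeholders:

R1(config)# no access-list <number>
R1(config)# interface <interface where the ACL was applied>
R1(config-if)# no ip access-group <number> in
R1(config-if)# exit
R1(config)# ip cef
R1(config)# interface <LAN interface>
R1(config-if)# ip route-cache cef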

TASK 2: TROUBLE TICKET LAB 7-1 TT-B

The problem here is a huge BGP route table. In the lab, the instructions indicate that I only have access to the R1 and R2 routers. R1 is the company's router, and R2 is the ISP router. Since I have access to R2, and R2 is the router advertising a ridiculous number of BGP routes, I applied the command aggregate-address 172.20.0.0 255.255.248.0 summary-only to summarize the BGP routes advertised to the company. I saw this coming from a mile away somehow. These labs are a little predictable. I wish the exam was going to be like this, or I would sign up today.
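
In context, on R2 that is just the following (the ISP AS number is a placeholder); summary-only suppresses the more-specific routes so only the aggregate is advertised:

R2(config)# router bgp <ISP AS>
R2(config-router)# aggregate-address 172.20.0.0 255.255.248.0 summary-only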

TASK 1: TROUBLE TICKET LAB 9-1 TT-A 

RADIUS lab using WinRadius. Awful software. I hate it. Standard Cisco RADIUS port mismatch and password issue.
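
The usual IOS-side pieces look like this, assuming WinRadius is running on SRV1 at 10.1.50.1 and listening on 1812/1813; the shared secret is a placeholder:

R1(config)# aaa new-model
R1(config)# radius-server host 10.1.50.1 auth-port 1812 acct-port 1813 key <shared secret>
R1(config)# aaa authentication login default group radius local
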
TASK 2: TROUBLE TICKET LAB 9-1 TT-B

Standard SSH setup for authentication.
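
The standard recipe, with the domain name, key size, and local user as placeholders or example values:

R1(config)# ip domain-name <domain>
R1(config)# crypto key generate rsa modulus 1024
R1(config)# ip ssh version 2
R1(config)# username admin secret <password>
R1(config)# line vty 0 4
R1(config-line)# login local
R1(config-line)# transport input ssh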

TASK 1: TROUBLE TICKET LAB 9-2 TT-A

This was a fun lab. DHCP snooping is enabled on the access layer, and trust is not applied to all upstream port-channel interfaces. On the distribution layer, the relay agent information trust-all command needs to be applied to both switches. I had to reference the proper syntax, but for the most part I had this one in the bag: "ip dhcp relay information trust-all" in global configuration mode on the distribution layer.
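
Roughly, on the access and distribution layers; VLAN 10 is an assumption, Po1 and Po2 are ALS1's uplinks from earlier in these labs, and the trust can go on the port-channels or their member ports:

ALS1(config)# ip dhcp snooping
ALS1(config)# ip dhcp snooping vlan 10
ALS1(config)# interface range port-channel 1 - 2
ALS1(config-if-range)# ip dhcp snooping trust
DLS1(config)# ip dhcp relay information trust-all
DLS2(config)# ip dhcp relay information trust-all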

TASK 2: TROUBLE TICKET LAB 9-2 TT-B

Right away I was stoked to do this one. It was your standard EIGRP authentication mismatch, applied to the wrong interfaces to boot. Fun lab.

TASK 1: TROUBLE TICKET LAB 9-3 TT-A

These trouble tickets cover data plane security, so your NAT and firewall stuff. I actually had to read ahead on this one; I ended up reading section two, the troubleshooting guide. I got an idea of what I needed to do and created an access list that allows traffic to enter from the internet and access the web server.
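
The shape of what I added, with the ACL name, server address, and outside interface as placeholders; everything not permitted is dropped by the implicit deny:

R1(config)# ip access-list extended INTERNET-IN
R1(config-ext-nacl)# permit tcp any host <web server address> eq www
R1(config-ext-nacl)# exit
R1(config)# interface <internet-facing interface>
R1(config-if)# ip access-group INTERNET-IN in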

Again, there were some other little problems to start out with that needed correcting.

TASK 2: TROUBLE TICKET LAB 9-3 TT-B

This one was a little tough as well. Again, I had to read through section two, and after reading through the troubleshooting guide I was able to solve the problem. Whenever you have an access-map like the one in this lab, remember that the VLAN access-map denies all by default, so you have to create a second access-map entry to forward all traffic not matching the ACL. This lab also calls for more modification to the extended ACL as well: the ACL must explicitly deny traffic to the rest of the VLANs.
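
The general pattern looks like this; the map name, ACL, and VLAN list are placeholders, and exactly which entry drops and which forwards depends on how the lab's ACL is written:

DLS1(config)# vlan access-map VLAN-MAP 10
DLS1(config-access-map)# match ip address <extended ACL>
DLS1(config-access-map)# action drop
DLS1(config-access-map)# exit
DLS1(config)# vlan access-map VLAN-MAP 20
DLS1(config-access-map)# action forward
DLS1(config-access-map)# exit
DLS1(config)# vlan filter VLAN-MAP vlan-list <vlans>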

Again, there were some other little problems to start out with that needed correcting.

TASK 1: TROUBLE TICKET LAB 10-1 TT-A (7-26-2012)

This one was fun; the issue related to VLAN 100 not trunking across the proper links. Nothing the command "switchport trunk allowed vlan add 100" can't fix.

TASK 2: TROUBLE TICKET LAB 10-1 TT-B

This was a problem relating to BGP and OSPF. The BGP route was not being redistributed properly. I thought I could redistribute BGP into OSPF; however, it did not work like I expected. In fact, it didn't work at all. I was hoping that the redistribution would work like it did for EIGRP in the prior lab.

So I added the default-information originate command under OSPF on the ASBR. This gave the downstream routers a default gateway/route; however, it still didn't work.

I proceeded to stare at the blinking cursor for a while. Thinking. Thinking. Debating. What to do. What to do. . .

A default route would take care of this problem, but should I do that? I decided not to, and looked up what the fix was. It turned out that R2 (the ISP) needed a default route back into the eBGP AS. I thought that was a stupid solution, as they should not have introduced problems on the ISP end. But nonetheless.
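
So the combination that finally worked was roughly this, with OSPF process 1 as elsewhere and a placeholder next hop on R2:

R1(config)# router ospf 1
R1(config-router)# default-information originate
R2(config)# ip route 0.0.0.0 0.0.0.0 <link back toward R1>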

TASK 3: TROUBLE TICKET LAB 10-1 TT-C

Immediately there was a router-id mismatch. I checked the loopback interfaces for mismatches and correctness, then checked the router-id and corrected it under OSPF. There were also some DHCP snooping issues that needed correcting.

The next problem was an access list preventing OSPF from forming a neighbor relationship with the correct neighbor. I just took the ACL and access-group out of the configuration, and of course it worked. The proper way to solve this one is to keep the ACL and access-group applied to the interface but add statements to the access list that permit the directly connected link as well as OSPF.

I went back and created a more secure ACL that still allows what it needs to.
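
Something along these lines, keeping the original ACL in place and just adding the permits it was missing; names and subnets are placeholders:

R1(config)# ip access-list extended <existing ACL>
R1(config-ext-nacl)# permit ospf any any
R1(config-ext-nacl)# permit ip <connected subnet> <wildcard> any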

TASK 4: TROUBLE TICKET LAB 10-2 TT-D

This one was kind of a letdown. I loaded up all the configs, got the lab running, and everything worked. Everything works, and it's a letdown. Ah, the irony.

It turned out the startup config has a setting that changes the boot behavior and is supposed to stop the router from booting properly. Whelp, it didn't. I went in and changed it from config-register 0x2100 to config-register 0x2102.
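
For completeness, the change and a quick check:

R1(config)# config-register 0x2102
R1(config)# end
R1# show version | include register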

These were really awesome labs! I got a lot of satisfaction out of them and learned quite a bit. I would recommend them to anyone into networking who is looking for something to do on a Friday night.

Fixed all the issues, and all is right in the data networking world…