Complex Network Services CCNP Lab 1
Download Lab: GNS3
Prerequisites:
Cisco IOSv (vios-adventerprisek9-m.vmdk.SPA.156-2.T)
Cisco IOSvL2 (vios_l2-adventerprisek9-m.03.2017.qcow2)
Introduction:
This lab covers network services taught in the CCNP ROUTE curriculum and is great practice for the following networking technologies: NAT/PAT, DHCP, DNS, PBR, IP SLA, and VRF-lite, including route replication between multiple VRF instances, prefix lists, and route maps. Additionally, some switching and external BGP routing between multiple autonomous systems will be configured to support the lab's needs. Servers SRV1 and SRV2 are already configured.
Scenario:
A commercial building management agency has rented out three floors of its brand-new building to companies in the healthcare, insurance, and banking industries. At this location the agency hosts Internet access and data center services for its tenants. Building management has contracts with two ISPs to provide high-speed Internet. To comply with government standards, strict requirements are defined: VRF instances will be used to logically separate each company's Internet and local traffic.
The lab consists of multiple parts:
Part1: You will perform tasks related to the autonomous systems. On the ISP1, ISP2, and Internet routers, interface parameters, static routing, BGP routing, and a DNS server will be configured.
Part2: Once the ISPs' routers are ready to provide Internet access to customers, you will connect the edge router R1 to them using default static routes. IP SLA parameters will be defined here too.
Part3: At this point it is necessary to connect the R1 router to its customers; VRF-lite, subinterfaces, DHCP servers, VLAN trunking, and VLANs will be configured.
Part4: Configure NAT toward multiple ISPs using route maps.
Part5: Connect the companies' networks to the data center and configure policy-based routing for each company.
Topology:
Lab procedures:
Part1:
Task1: Internet router.
Step1: Configure interfaces.
Internet(config)#
!
interface GigabitEthernet0/0
description Link to ISP1 int g0/0
ip address 1.1.1.1 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP2 int g0/0
ip address 1.1.1.5 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description Link to SRV1
ip address 1.1.1.9 255.255.255.252
no shutdown
!
interface Loopback0
description FOR BGP PEERING ONLY!
ip address 1.1.1.199 255.255.255.255
!
interface Loopback8
description THIS IS DNS SERVER IP ADDRESS
ip address 8.8.8.8 255.255.255.255
!
interface Loopback100
description THIS IS THE GOOGLE.COM
ip address 66.249.64.19 255.255.255.255
Step2: Configure static routes to the ISP routers' Loopback0 addresses for eBGP peering.
!
Internet(config)#
ip route 50.0.1.254 255.255.255.255 1.1.1.2
ip route 50.0.2.254 255.255.255.255 1.1.1.6
Step3: Configure static routes that BGP will advertise to the other peers.
!
Internet(config)#
!
! AS1000's aggregate network, to be advertised to ISP1 and ISP2
ip route 1.1.1.0 255.255.255.0 Null0
!
! Google's network, to be advertised to ISP1 and ISP2
ip route 66.249.64.0 255.255.224.0 Null0
Step4: Configure BGP for AS1000.
!
Internet(config)#
router bgp 1000
bgp router-id 1.1.1.199
bgp log-neighbor-changes
network 1.1.1.0 mask 255.255.255.0
network 8.8.8.8 mask 255.255.255.255
network 66.249.64.0 mask 255.255.224.0
neighbor 50.0.1.254 remote-as 501
neighbor 50.0.1.254 ebgp-multihop 2
neighbor 50.0.1.254 update-source Loopback0
neighbor 50.0.2.254 remote-as 502
neighbor 50.0.2.254 ebgp-multihop 2
neighbor 50.0.2.254 update-source Loopback0
Step5: Configure DNS server.
!
Internet(config)#
ip dns server
ip host google.com 66.249.64.19
ip host srv1.com 1.1.1.10
ip name-server 8.8.8.8
ip domain lookup
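Optionally, you can confirm that the static host entries were accepted by listing the router's local host table (exact output varies by IOS release):
!
Internet# show hosts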
Task2: ISP1 router.
Step1: Configure interfaces.
!
ISP1(config)#
interface Loopback0
description FOR BGP PEERING ONLY!
ip address 50.0.1.254 255.255.255.255
!
interface GigabitEthernet0/0
description Link to Internet int g0/0
ip address 1.1.1.2 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP2 int g0/1
ip address 50.0.1.5 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description Link to Customer Router R1 int g0/0
ip address 50.0.1.1 255.255.255.252
no shutdown
Step2: Configure static routes to the ISP2 and Internet routers' Loopback0 addresses for eBGP peering.
!
ISP1(config)#
ip route 1.1.1.199 255.255.255.255 1.1.1.1
ip route 50.0.2.254 255.255.255.255 50.0.1.6
Step3: Configure static routes that BGP will advertise to the other peers.
!
ISP1(config)#
ip route 50.0.1.0 255.255.255.0 Null0
Step4: Configure BGP protocol for AS501.
!
ISP1(config)#
router bgp 501
bgp router-id 50.0.1.254
bgp log-neighbor-changes
network 50.0.1.0 mask 255.255.255.0
neighbor 1.1.1.199 remote-as 1000
neighbor 1.1.1.199 ebgp-multihop 2
neighbor 1.1.1.199 update-source Loopback0
neighbor 50.0.2.254 remote-as 502
neighbor 50.0.2.254 ebgp-multihop 2
neighbor 50.0.2.254 update-source Loopback0
Step5: Configure the name server.
!
ISP1(config)#
ip name-server 8.8.8.8
ip domain lookup
Task3: ISP2 router.
Step1: Configure interfaces.
!
ISP2(config)#
interface Loopback0
description FOR BGP PEERING ONLY!
ip address 50.0.2.254 255.255.255.255
!
interface GigabitEthernet0/0
description Link to Internet int 0/1
ip address 1.1.1.6 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP1 int g0/1
ip address 50.0.1.6 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description Link to Customer Router R1 int g0/1
ip address 50.0.2.1 255.255.255.252
no shutdown
Step2: Configure static routes to the ISP1 and Internet routers' Loopback0 addresses for eBGP peering.
!
ISP2(config)#
ip route 1.1.1.199 255.255.255.255 1.1.1.5
ip route 50.0.1.254 255.255.255.255 50.0.1.5
Step3: Configure static routes that BGP will advertise to the other peers.
!
ISP2(config)#
ip route 50.0.2.0 255.255.255.0 Null0
Step4: Configure BGP protocol for AS502.
!
ISP2(config)#
router bgp 502
bgp router-id 50.0.2.254
bgp log-neighbor-changes
network 50.0.2.0 mask 255.255.255.0
neighbor 1.1.1.199 remote-as 1000
neighbor 1.1.1.199 ebgp-multihop 2
neighbor 1.1.1.199 update-source Loopback0
neighbor 50.0.1.254 remote-as 501
neighbor 50.0.1.254 ebgp-multihop 2
neighbor 50.0.1.254 update-source Loopback0
Step5: Configure the name server.
!
ISP2(config)#
ip name-server 8.8.8.8
ip domain lookup
Verification for Part1: Using the commands below, verify proper operation of BGP and test connectivity to the domain names google.com and srv1.com from both the ISP1 and ISP2 routers.
Example:
ISP1# show ip bgp summary
ISP1# show ip bgp
ISP1# ping google.com
Part2:
Task1: Connect router R1 to ISP1 and ISP2 by configuring its interfaces.
Step1: Configure interfaces:
!
R1(config)#
interface GigabitEthernet0/0
description Link to ISP1 int g0/2
ip address 50.0.1.2 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP2 int g0/2
ip address 50.0.2.2 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description SUPPORTS MULTIPLE SUB-INTERFACES
no shutdown
Step2: Verify interface address assignment and connectivity to the ISPs.
!
R1#show ip interface brief
Interface IP-Address OK? Method Status Protocol
GigabitEthernet0/0 50.0.1.2 YES manual up up
GigabitEthernet0/1 50.0.2.2 YES manual up up
!
R1#ping 50.0.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 50.0.1.1, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 2/2/3 ms
!
R1#ping 50.0.2.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 50.0.2.1, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 2/2/3 ms
Task2: Configure two IP SLA instances, one for each ISP.
Step1: Configure IP SLA instances 1 and 2.
!
R1(config)#
ip sla 1
icmp-echo 50.0.1.1 source-interface GigabitEthernet0/0
threshold 500
timeout 800
frequency 1
!
ip sla 2
icmp-echo 50.0.2.1 source-interface GigabitEthernet0/1
threshold 500
timeout 800
frequency 1
Step2: Start IP SLA for both instances.
!
R1(config)#
ip sla schedule 1 life forever start-time now
ip sla schedule 2 life forever start-time now
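Optionally, confirm the configured probe parameters (threshold, timeout, frequency) before looking at the statistics; the output format varies by IOS release:
!
R1# show ip sla configuration 1
R1# show ip sla configuration 2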
Step3: Verify the IP SLA operation.
!
R1# show ip sla summary
IPSLAs Latest Operation Summary
Codes: * active, ^ inactive, ~ pending
ID Type Destination Stats Return Last
(ms) Code Run
-----------------------------------------------------------------------
*1 icmp-echo 50.0.1.1 RTT=2 OK 0 seconds ago
*2 icmp-echo 50.0.2.1 RTT=1 OK 0 seconds ago
Task3: Create a track object for each IP SLA instance.
Step1: Configure track object.
!
R1(config)#
track 1 ip sla 1
delay down 3 up 2
!
track 2 ip sla 2
delay down 3 up 2
Step2: Verify track object.
!
R1#show track
Track 1
IP SLA 1 state
State is Up
1 change, last change 00:00:04
Delay up 2 secs, down 3 secs
Latest operation return code: OK
Latest RTT (millisecs) 2
Track 2
IP SLA 2 state
State is Up
1 change, last change 00:00:04
Delay up 2 secs, down 3 secs
Latest operation return code: OK
Latest RTT (millisecs) 5
Task4: Specify two default routes, one to each ISP, each tied to a track object.
Step1: Configure default routes.
!
R1(config)#
ip route 0.0.0.0 0.0.0.0 50.0.1.1 track 1
ip route 0.0.0.0 0.0.0.0 50.0.2.1 track 2
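Optionally, you can list the tracked static routes and their install state in one place (command availability may vary by IOS release):
!
R1# show ip route track-table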
Step2: Verify default routes in the routing table.
!
R1#show ip route static
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
Task5: Configure the name server.
R1(config)#
ip name-server 8.8.8.8
ip domain lookup
Task6: Test connectivity to the Internet.
Step1: Ping google.com from R1's interface g0/0.
!
R1#ping google.com source g0/0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 66.249.64.19, timeout is 2 seconds:
Packet sent with a source address of 50.0.1.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 3/3/4 ms
Step2: Ping srv1.com from R1's interface G0/1.
!
R1#ping srv1.com source g0/1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.10, timeout is 2 seconds:
Packet sent with a source address of 50.0.2.2
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 3/254/1006 ms
R1#
Part3
Task1: Create a VRF-lite instance for each company.
R1(config)#
!
ip vrf VRF_A
ip vrf VRF_B
ip vrf VRF_C
Task2: Configure subinterfaces and verify the VRF parameters.
Step1: Subinterfaces.
!
R1(config)#
interface GigabitEthernet0/2.10
encapsulation dot1Q 10
ip vrf forwarding VRF_A
ip address 192.168.10.1 255.255.255.0
!
interface GigabitEthernet0/2.20
encapsulation dot1Q 20
ip vrf forwarding VRF_B
ip address 192.168.20.1 255.255.255.0
!
interface GigabitEthernet0/2.30
encapsulation dot1Q 30
ip vrf forwarding VRF_C
ip address 192.168.30.1 255.255.255.0
Step2: Verify VRF.
!
R1#show ip vrf
Name Default RD Interfaces
VRF_A Gi0/2.10
VRF_B Gi0/2.20
VRF_C Gi0/2.30
Task4: Configure DHCP servers for the VRF instances.
Step1: Exclude ranges of reserved IP addresses.
!
R1(config)#
ip dhcp excluded-address vrf VRF_A 192.168.10.1 192.168.10.10
ip dhcp excluded-address vrf VRF_B 192.168.20.1 192.168.20.10
ip dhcp excluded-address vrf VRF_C 192.168.30.1 192.168.30.10
Step2: Configure DHCP servers.
!
R1(config)#
ip dhcp pool VRF_A
vrf VRF_A
network 192.168.10.0 255.255.255.0
default-router 192.168.10.1
dns-server 8.8.8.8
domain-name vrfa.local
!
ip dhcp pool VRF_B
vrf VRF_B
network 192.168.20.0 255.255.255.0
default-router 192.168.20.1
dns-server 8.8.8.8
domain-name vrfb.local
!
ip dhcp pool VRF_C
vrf VRF_C
network 192.168.30.0 255.255.255.0
default-router 192.168.30.1
dns-server 8.8.8.8
domain-name vrfc.local
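Optionally, verify the pools, and, once clients request leases, the bindings (VRF-aware keywords may vary by IOS release):
!
R1# show ip dhcp pool VRF_A
R1# show ip dhcp binding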
Task5: Configure the SW1 switch: VLANs, trunk, and access ports.
Step1: Configure VLAN trunking.
!
SW1(config)#
interface GigabitEthernet0/0
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
Step2: Configure VLANs.
!
SW1(config)#
vlan 10
name VRF_A
!
vlan 20
name VRF_B
!
vlan 30
name VRF_C
Step3: Assign access ports to VLANs.
!
SW1(config)#
interface GigabitEthernet0/1
description LAN_A
switchport access vlan 10
switchport mode access
!
interface GigabitEthernet0/2
description LAN_B
switchport access vlan 20
switchport mode access
!
interface GigabitEthernet0/3
description LAN_C
switchport access vlan 30
switchport mode access
Step4: Verify SW1's switching configurations.
!
SW1# show vlan brief
SW1# show interfaces trunk
Task6: On each PC, obtain an IP address via DHCP and verify connectivity to the default gateway.
PC1> ip dhcp
DDORA IP 192.168.10.11/24 GW 192.168.10.1
PC1> ping 192.168.10.1
84 bytes from 192.168.10.1 icmp_seq=1 ttl=255 time=5.143 ms
84 bytes from 192.168.10.1 icmp_seq=2 ttl=255 time=4.637 ms
!
PC2> ip dhcp
DDORA IP 192.168.20.11/24 GW 192.168.20.1
PC2> ping 192.168.20.1
84 bytes from 192.168.20.1 icmp_seq=1 ttl=255 time=3.416 ms
84 bytes from 192.168.20.1 icmp_seq=2 ttl=255 time=3.660 ms
!
PC3> ip dhcp
DDORA IP 192.168.30.11/24 GW 192.168.30.1
PC3> ping 192.168.30.1
84 bytes from 192.168.30.1 icmp_seq=1 ttl=255 time=5.242 ms
84 bytes from 192.168.30.1 icmp_seq=2 ttl=255 time=3.726 ms
Part4: When configuring NAT on a Cisco router with multiple ISPs, you cannot simply enter several NAT overload statements: applying the second line replaces the first, leaving you with just one NAT binding. Here you will use route maps to configure NAT over multiple links.
Task1: Identify interfaces to be enabled for translation.
Step1: Enable NAT for outside interfaces.
!
R1(config)#
interface range g0/0-1
ip nat outside
Step2: Enable NAT for the inside interfaces.
!
R1(config)#
interface g0/2.10
ip nat inside
interface g0/2.20
ip nat inside
interface g0/2.30
ip nat inside
Task2: Use standard ACLs to identify the VRF subnets that are subject to translation.
R1(config)#
ip access-list standard NAT_VRF_A
permit 192.168.10.0 0.0.0.255
ip access-list standard NAT_VRF_B
permit 192.168.20.0 0.0.0.255
ip access-list standard NAT_VRF_C
permit 192.168.30.0 0.0.0.255
Task3: Using route maps, match each VRF's subnet against the NAT outside interfaces. Each VRF needs two route-map instances, one for the path via ISP1 and another for the path via ISP2.
R1(config)#
route-map NAT_A_ONE permit 10
match ip address NAT_VRF_A
match interface GigabitEthernet0/0
!
route-map NAT_A_TWO permit 10
match ip address NAT_VRF_A
match interface GigabitEthernet0/1
!
route-map NAT_B_ONE permit 10
match ip address NAT_VRF_B
match interface GigabitEthernet0/0
!
route-map NAT_B_TWO permit 10
match ip address NAT_VRF_B
match interface GigabitEthernet0/1
!
route-map NAT_C_ONE permit 10
match ip address NAT_VRF_C
match interface GigabitEthernet0/0
!
route-map NAT_C_TWO permit 10
match ip address NAT_VRF_C
match interface GigabitEthernet0/1
Task4: Create six NAT overload statements to enable translation for the three VRFs, two statements per VRF.
R1(config)#
ip nat inside source route-map NAT_A_ONE interface GigabitEthernet0/0 vrf VRF_A overload
ip nat inside source route-map NAT_A_TWO interface GigabitEthernet0/1 vrf VRF_A overload
ip nat inside source route-map NAT_B_ONE interface GigabitEthernet0/0 vrf VRF_B overload
ip nat inside source route-map NAT_B_TWO interface GigabitEthernet0/1 vrf VRF_B overload
ip nat inside source route-map NAT_C_ONE interface GigabitEthernet0/0 vrf VRF_C overload
ip nat inside source route-map NAT_C_TWO interface GigabitEthernet0/1 vrf VRF_C overload
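Optionally, confirm which interfaces are marked inside and outside, and watch the translation counters as traffic flows (output varies by IOS release):
!
R1# show ip nat statistics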
Task5: From each PC, ping google.com, then check the NAT translation table for each VRF.
PC1> ping google.com
Cannot resolve google.com
PC2> ping google.com
Cannot resolve google.com
PC3> ping google.com
Cannot resolve google.com
R1#show ip nat translations vrf VRF_A
The clients are unable to resolve the domain, and there are no entries in the NAT table. The problem is not NAT itself but the routing.
Task6: Enable the VRFs to route traffic to outside networks.
Step1: Verify VRFs’ routing tables.
R1#show ip route vrf VRF_A
Routing Table: VRF_A
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
R1#show ip route vrf VRF_B
Routing Table: VRF_B
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
R1#show ip route vrf VRF_C
Routing Table: VRF_C
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
There is only one directly connected route in each VRF, which is not enough to get out of the local network.
Step2: You need to take only the default route from the global routing table and copy it into each VRF's routing table. Identify the default route with a prefix list, match that prefix list in a route map, and use the route map with the route-replicate command.
R1(config)#
ip prefix-list DF_ROUTE seq 5 permit 0.0.0.0/0
!
route-map RM_DF_ROUTE permit 10
match ip address prefix-list DF_ROUTE
Step3: Copy default route into every VRF’s routing table.
R1(config)#
ip vrf VRF_A
route-replicate from vrf global unicast static route-map RM_DF_ROUTE
!
ip vrf VRF_B
route-replicate from vrf global unicast static route-map RM_DF_ROUTE
!
ip vrf VRF_C
route-replicate from vrf global unicast static route-map RM_DF_ROUTE
Step4: Verify each VRF's routing table again. The default route is now present.
R1#show ip route vrf VRF_A
Routing Table: VRF_A
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
R1#show ip route vrf VRF_B
Routing Table: VRF_B
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
R1#show ip route vrf VRF_C
Routing Table: VRF_C
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
Step5: From each PC, ping google.com again, then check the NAT translation table for each VRF.
PC1> ping google.com
google.com resolved to 66.249.64.19
84 bytes from 66.249.64.19 icmp_seq=1 ttl=253 time=6.757 ms
84 bytes from 66.249.64.19 icmp_seq=2 ttl=253 time=6.964 ms
PC2> ping google.com
google.com resolved to 66.249.64.19
84 bytes from 66.249.64.19 icmp_seq=1 ttl=253 time=7.295 ms
84 bytes from 66.249.64.19 icmp_seq=2 ttl=253 time=7.027 ms
PC3> ping google.com
google.com resolved to 66.249.64.19
84 bytes from 66.249.64.19 icmp_seq=1 ttl=253 time=8.626 ms
84 bytes from 66.249.64.19 icmp_seq=2 ttl=253 time=7.289 ms
R1#show ip nat translations vrf VRF_C
Pro Inside global Inside local Outside local Outside global
udp 50.0.1.2:4457 192.168.30.11:4457 8.8.8.8:53 8.8.8.8:53
udp 50.0.1.2:6739 192.168.30.11:6739 8.8.8.8:53 8.8.8.8:53
udp 50.0.1.2:23990 192.168.30.11:23990 8.8.8.8:53 8.8.8.8:53
icmp 50.0.2.2:37532 192.168.30.11:37532 66.249.64.19:37532 66.249.64.19:37532
icmp 50.0.2.2:37788 192.168.30.11:37788 66.249.64.19:37788 66.249.64.19:37788
udp 50.0.1.2:40553 192.168.30.11:40553 8.8.8.8:53 8.8.8.8:53
Part5
Task1: Configure R1 for data center access.
Step1: Create a VRF instance for the data center network and replicate the customers' routes into it. This will allow the data center services to access the customers' VRFs.
R1(config)#
ip vrf VRF_SERVERS
route-replicate from vrf VRF_A unicast connected
route-replicate from vrf VRF_B unicast connected
route-replicate from vrf VRF_C unicast connected
Step2: Verify routing table of data center VRF.
R1#show ip route vrf VRF_SERVERS
Routing Table: VRF_SERVERS
Gateway of last resort is not set
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
Step3: Configure R1’s data center facing interface g0/3.
R1(config)#
interface GigabitEthernet0/3
description Link to SW2 int e0 (SERVER_FARM)
ip vrf forwarding VRF_SERVERS
ip address 192.168.40.1 255.255.255.0
no shutdown
Step4: Verify that the 192.168.40.0/24 subnet is in the VRF_SERVERS routing table, and ping server SRV2.
R1#show ip route vrf VRF_SERVERS | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.40.0/24 is directly connected, GigabitEthernet0/3
R1#ping vrf VRF_SERVERS 192.168.40.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.40.10, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/1/1 ms
Step5: Customers are still unable to access the data center due to the lack of proper routing; the customers' VRFs are missing the data center subnet.
PC1> ping 192.168.40.10
*50.0.2.1 icmp_seq=1 ttl=254 time=8.496 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=6.855 ms (ICMP type:3, code:1, Destination host unreachable)
PC2> ping 192.168.40.10
*50.0.2.1 icmp_seq=1 ttl=254 time=6.742 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=7.356 ms (ICMP type:3, code:1, Destination host unreachable)
PC3> ping 192.168.40.10
*50.0.2.1 icmp_seq=1 ttl=254 time=6.916 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=7.499 ms (ICMP type:3, code:1, Destination host unreachable)
Step6: Copy the route for 192.168.40.0/24 from VRF_SERVERS into the customers' VRFs using a route map and prefix list. The route map is required because otherwise the routes that VRF_SERVERS learned from the customers' VRFs would be copied back into them, which would violate the separation rule stating that customers' VRFs cannot access each other.
R1(config)#
ip prefix-list VRF_SERVER_NETWORK seq 5 permit 192.168.40.0/24
!
route-map RM_VRF_SERVERS permit 10
match ip address prefix-list VRF_SERVER_NETWORK
!
ip vrf VRF_A
route-replicate from vrf VRF_SERVERS unicast connected route-map RM_VRF_SERVERS
!
ip vrf VRF_B
route-replicate from vrf VRF_SERVERS unicast connected route-map RM_VRF_SERVERS
!
ip vrf VRF_C
route-replicate from vrf VRF_SERVERS unicast connected route-map RM_VRF_SERVERS
Step7: Verify routing tables of customers’ VRFs.
R1#show ip route vrf VRF_A | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_B | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_C | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
Step8: Ping SRV2 from all PCs.
PC1> ping 192.168.40.10
192.168.40.10 icmp_seq=1 timeout
192.168.40.10 icmp_seq=2 timeout
84 bytes from 192.168.40.10 icmp_seq=3 ttl=63 time=5.308 ms
84 bytes from 192.168.40.10 icmp_seq=4 ttl=63 time=3.760 ms
PC2> ping 192.168.40.10
84 bytes from 192.168.40.10 icmp_seq=1 ttl=63 time=3.061 ms
84 bytes from 192.168.40.10 icmp_seq=2 ttl=63 time=5.045 ms
PC3> ping 192.168.40.10
84 bytes from 192.168.40.10 icmp_seq=1 ttl=63 time=4.000 ms
84 bytes from 192.168.40.10 icmp_seq=2 ttl=63 time=4.318 ms
Task2: The last remaining step is to configure policy-based routing. PBR will be configured for VRFs A, B, and C; the policy is the same for all subnets. HTTP, HTTPS, and DNS traffic (DNS over both TCP and UDP) originating from the VRFs' subnets must be routed over the path to ISP1; the rest of the traffic goes over the path to ISP2.
Step1: Define the traffic for PBR with ACL statements.
R1(config)#
ip access-list extended VRF_A_ALL_TRAFFIC
permit ip 192.168.10.0 0.0.0.255 any
ip access-list extended VRF_A_POLICY
permit tcp 192.168.10.0 0.0.0.255 any eq www
permit tcp 192.168.10.0 0.0.0.255 any eq 443
permit tcp 192.168.10.0 0.0.0.255 any eq domain
permit udp 192.168.10.0 0.0.0.255 any eq domain
!
ip access-list extended VRF_B_ALL_TRAFFIC
permit ip 192.168.20.0 0.0.0.255 any
ip access-list extended VRF_B_POLICY
permit tcp 192.168.20.0 0.0.0.255 any eq www
permit tcp 192.168.20.0 0.0.0.255 any eq 443
permit tcp 192.168.20.0 0.0.0.255 any eq domain
permit udp 192.168.20.0 0.0.0.255 any eq domain
!
ip access-list extended VRF_C_ALL_TRAFFIC
permit ip 192.168.30.0 0.0.0.255 any
ip access-list extended VRF_C_POLICY
permit tcp 192.168.30.0 0.0.0.255 any eq www
permit tcp 192.168.30.0 0.0.0.255 any eq 443
permit tcp 192.168.30.0 0.0.0.255 any eq domain
permit udp 192.168.30.0 0.0.0.255 any eq domain
Step2: For each VRF, create a route-map policy matching the ACL statements accordingly. Sequence 10 of each route map sends policy traffic to ISP1, and sequence 20 sends the rest of the traffic to ISP2. The previously configured IP SLA instances and track objects are used with the set ip next-hop verify-availability command to monitor the links: if one of the ISPs becomes unavailable, the policy is ignored and traffic is routed with normal forwarding. Without the verify-availability feature, if one of the ISPs stopped responding, router R1 would be unaware of the situation and the policy would still send traffic over the bad path, resulting in intermittent connectivity.
R1(config)#
route-map POLICY_A permit 10
match ip address VRF_A_POLICY
set ip next-hop verify-availability 50.0.1.1 10 track 1
route-map POLICY_A permit 20
match ip address VRF_A_ALL_TRAFFIC
set ip next-hop verify-availability 50.0.2.1 10 track 2
!
route-map POLICY_B permit 10
match ip address VRF_B_POLICY
set ip next-hop verify-availability 50.0.1.1 20 track 1
route-map POLICY_B permit 20
match ip address VRF_B_ALL_TRAFFIC
set ip next-hop verify-availability 50.0.2.1 20 track 2
!
route-map POLICY_C permit 10
match ip address VRF_C_POLICY
set ip next-hop verify-availability 50.0.1.1 30 track 1
route-map POLICY_C permit 20
match ip address VRF_C_ALL_TRAFFIC
set ip next-hop verify-availability 50.0.2.1 30 track 2
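Optionally, inspect a route map and its match counters; the counters increment only after the policy is applied and traffic hits it (output varies by IOS release):
!
R1# show route-map POLICY_A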
Step3: Apply route-map policies to the subinterface.
R1(config)#
interface GigabitEthernet0/2.10
ip policy route-map POLICY_A
!
interface GigabitEthernet0/2.20
ip policy route-map POLICY_B
!
interface GigabitEthernet0/2.30
ip policy route-map POLICY_C
Step4: Verify policy application.
R1#show ip policy
Interface Route map
Gi0/2.10 POLICY_A
Gi0/2.20 POLICY_B
Gi0/2.30 POLICY_C
Step5: Test PBR for each VRF. Enable IP policy debugging, then initiate policy traffic from the PCs to the srv1.com domain.
R1#debug ip policy
Policy routing debugging is on
Example for HTTP traffic:
PC1> ping srv1.com -P 6 -p 80
srv1.com resolved to 1.1.1.10
Connect 80@srv1.com seq=1 ttl=61 time=14.787 ms
SendData 80@srv1.com seq=1 ttl=61 time=6.401 ms
Close 80@srv1.com seq=1 ttl=61 time=8.461 ms
Connect 80@srv1.com seq=2 ttl=61 time=7.421 ms
SendData 80@srv1.com seq=2 ttl=61 time=7.366 ms
Close 80@srv1.com seq=2 ttl=61 time=6.320 ms
Connect 80@srv1.com seq=3 ttl=61 time=6.352 ms
SendData 80@srv1.com seq=3 ttl=61 time=5.322 ms
Close 80@srv1.com seq=3 ttl=61 time=7.418 ms
Connect 80@srv1.com seq=4 ttl=61 time=7.422 ms
SendData 80@srv1.com seq=4 ttl=61 time=7.466 ms
Close 80@srv1.com seq=4 ttl=61 time=8.474 ms
From PC1's point of view, PBR seems to be working, but check the debug output. The highlighted line indicates that the next hop was rejected, and the other lines show that normal forwarding was performed, so PBR fails to route the traffic.
*Aug 3 16:34:25.859: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, FIB policy match
*Aug 3 16:34:25.859: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, PBR Counted
*Aug 3 16:34:25.859: CEF-IP-POLICY: fib for addr 50.0.1.1 is Not Attached; Nexthop rejected
*Aug 3 16:34:25.859: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, FIB policy rejected - normal forwarding
*Aug 3 16:34:25.860: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, policy match
*Aug 3 16:34:25.861: IP: route map POLICY_A, item 10, permit
*Aug 3 16:34:25.861: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, policy rejected -- normal forwarding
*Aug 3 16:34:25.875: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 16:34:25.875: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 16:34:25.875: CEF-IP-POLICY: fib for addr 50.0.1.1 is Not Attached; Nexthop rejected
*Aug 3 16:34:25.875: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy rejected - normal forwarding
*Aug 3 16:34:25.887: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 52, FIB policy match
Step6: Fix PBR. The failure occurs because, once again, the VRFs' routing tables lack the appropriate routes; in this case, the next-hop addresses defined in the route maps are not in the RIBs.
R1#show ip route vrf VRF_A 50.0.1.1
Routing Table: VRF_A
% Network not in table
R1#show ip route vrf VRF_A 50.0.2.1
Routing Table: VRF_A
% Network not in table
The same goes for VRF_B and VRF_C. Again using a prefix list and route map, copy the /30 subnets of the links between R1 and the ISPs into the customers' VRFs; this fixes the problem.
R1(config)#
ip prefix-list ISP1_IP seq 5 permit 50.0.1.0/30
ip prefix-list ISP2_IP seq 5 permit 50.0.2.0/30
!
route-map RM_ISPs_IPs permit 10
match ip address prefix-list ISP1_IP ISP2_IP
!
ip vrf VRF_A
route-replicate from vrf global unicast connected route-map RM_ISPs_IPs
!
ip vrf VRF_B
route-replicate from vrf global unicast connected route-map RM_ISPs_IPs
!
ip vrf VRF_C
route-replicate from vrf global unicast connected route-map RM_ISPs_IPs
Step7: Verify VRFs’ routing tables.
R1#show ip route vrf VRF_A
Routing Table: VRF_A
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
50.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C + 50.0.1.0/30 is directly connected, GigabitEthernet0/0
L 50.0.1.2/32 is directly connected, GigabitEthernet0/0
C + 50.0.2.0/30 is directly connected, GigabitEthernet0/1
L 50.0.2.2/32 is directly connected, GigabitEthernet0/1
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
L 192.168.40.1/32 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_B
Routing Table: VRF_B
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
50.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C + 50.0.1.0/30 is directly connected, GigabitEthernet0/0
L 50.0.1.2/32 is directly connected, GigabitEthernet0/0
C + 50.0.2.0/30 is directly connected, GigabitEthernet0/1
L 50.0.2.2/32 is directly connected, GigabitEthernet0/1
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
L 192.168.40.1/32 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_C
Routing Table: VRF_C
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
50.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C + 50.0.1.0/30 is directly connected, GigabitEthernet0/0
L 50.0.1.2/32 is directly connected, GigabitEthernet0/0
C + 50.0.2.0/30 is directly connected, GigabitEthernet0/1
L 50.0.2.2/32 is directly connected, GigabitEthernet0/1
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
L 192.168.40.1/32 is directly connected, GigabitEthernet0/3
Step8: Now that the routes for the ISP links are in the VRFs’ routing tables, test PBR again.
R1#debug ip policy
Policy routing debugging is on
PC1> ping srv1.com -P 6 -p 80
srv1.com resolved to 1.1.1.10
Connect 80@srv1.com timeout
Connect 80@srv1.com seq=2 ttl=61 time=7.437 ms
SendData 80@srv1.com seq=2 ttl=61 time=8.401 ms
Debug ip policy output:
*Aug 3 17:02:48.793: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, FIB policy match
*Aug 3 17:02:48.793: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, PBR Counted
*Aug 3 17:02:48.793: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, g=50.0.1.1, len 54, FIB policy routed
*Aug 3 17:02:48.794: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, policy match
*Aug 3 17:02:48.795: IP: route map POLICY_A, item 10, permit
*Aug 3 17:02:48.795: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8 (GigabitEthernet0/0), len 54, policy routed
*Aug 3 17:02:48.795: IP: GigabitEthernet0/2.10 to GigabitEthernet0/0 50.0.1.1
R1#
*Aug 3 17:02:48.809: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 17:02:48.809: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 17:02:48.809: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, g=50.0.1.1, len 60, FIB policy routed
*Aug 3 17:02:49.807: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 17:02:49.807: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 17:02:49.807: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, g=50.0.1.1, len 60, FIB policy routed
R1#
*Aug 3 17:02:50.808: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 17:02:50.808: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 17:02:50.808: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, g=50.0.1.1, len 60, FIB policy routed
The match counters in parentheses in the show ip access-lists output indicate that the policy traffic flows as expected.
R1#show ip access-lists
Extended IP access list VRF_A_ALL_TRAFFIC
10 permit ip 192.168.10.0 0.0.0.255 any
Extended IP access list VRF_A_POLICY
10 permit tcp 192.168.10.0 0.0.0.255 any eq www (48 matches)
20 permit tcp 192.168.10.0 0.0.0.255 any eq 443
30 permit tcp 192.168.10.0 0.0.0.255 any eq domain
40 permit udp 192.168.10.0 0.0.0.255 any eq domain (4 matches)
Generate traffic from all PCs to test the rest of the policy, then check the ACL match counters again.
HTTP: ping srv1.com -P 6 -p 80
HTTPS: ping srv1.com -P 6 -p 443
DNS (UDP): ping srv1.com -P 17 -p 53
DNS (TCP): ping srv1.com -P 6 -p 53
ICMP: ping 1.1.1.10
Step9: The last thing to do is to verify that PCs in different VRFs are still unable to reach each other.
PC1> ping 192.168.20.11
*50.0.2.1 icmp_seq=1 ttl=254 time=8.795 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=8.430 ms (ICMP type:3, code:1, Destination host unreachable)
^C
PC2> ping 192.168.30.11
*50.0.2.1 icmp_seq=1 ttl=254 time=7.667 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=8.140 ms (ICMP type:3, code:1, Destination host unreachable)
THIS LAB WILL BE THE BASE FOR MY UPCOMING TROUBLESHOOTING SCENARIO LABS.
Topology:
Lab procedures:
Part1:
Task1: Internet router.
Step1: Configure interfaces.
Internet(config)#
!
interface GigabitEthernet0/0
description Link to ISP1 int g0/0
ip address 1.1.1.1 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP2 int g0/0
ip address 1.1.1.5 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description Link to SRV1
ip address 1.1.1.9 255.255.255.252
no shutdown
!
interface Loopback0
description FOR BGP PEERING ONLY!
ip address 1.1.1.199 255.255.255.255
!
interface Loopback8
description THIS IS DNS SERVER IP ADDRESS
ip address 8.8.8.8 255.255.255.255
!
interface Loopback100
description THIS IS THE GOOGLE.COM
ip address 66.249.64.19 255.255.255.255
Step2: Configure static routes to the ISP routers' Loopback0 addresses for eBGP peering.
!
Internet(config)#
ip route 50.0.1.254 255.255.255.255 1.1.1.2
ip route 50.0.2.254 255.255.255.255 1.1.1.6
Step3: Configure static routes for BGP to advertise to the other peers.
!
Internet(config)#
!
! This is AS1000's network; it will be advertised to ISP1 and ISP2
ip route 1.1.1.0 255.255.255.0 Null0
!
! This is Google's network; it will be advertised to ISP1 and ISP2
ip route 66.249.64.0 255.255.224.0 Null0
Step4: Configure the BGP protocol for AS1000.
!
Internet(config)#
router bgp 1000
bgp router-id 1.1.1.199
bgp log-neighbor-changes
network 1.1.1.0 mask 255.255.255.0
network 8.8.8.8 mask 255.255.255.255
network 66.249.64.0 mask 255.255.224.0
neighbor 50.0.1.254 remote-as 501
neighbor 50.0.1.254 ebgp-multihop 2
neighbor 50.0.1.254 update-source Loopback0
neighbor 50.0.2.254 remote-as 502
neighbor 50.0.2.254 ebgp-multihop 2
neighbor 50.0.2.254 update-source Loopback0
Step5: Configure DNS server.
!
Internet(config)#
ip dns server
ip host google.com 66.249.64.19
ip host srv1.com 1.1.1.10
ip name-server 8.8.8.8
ip domain lookup
Task2: ISP1 router.
Step1: Configure interfaces.
!
ISP1(config)#
interface Loopback0
description FOR BGP PEERING ONLY!
ip address 50.0.1.254 255.255.255.255
!
interface GigabitEthernet0/0
description Link to Internet int g0/0
ip address 1.1.1.2 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP2 int g0/1
ip address 50.0.1.5 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description Link to Customer Router R1 int g0/0
ip address 50.0.1.1 255.255.255.252
no shutdown
Step2: Configure static routes to the ISP2 and Internet routers' Loopback0 addresses for eBGP peering.
!
ISP1(config)#
ip route 1.1.1.199 255.255.255.255 1.1.1.1
ip route 50.0.2.254 255.255.255.255 50.0.1.6
Step3: Configure static routes for BGP to advertise to the other peers.
!
ISP1(config)#
ip route 50.0.1.0 255.255.255.0 Null0
Step4: Configure BGP protocol for AS501.
!
ISP1(config)#
router bgp 501
bgp router-id 50.0.1.254
bgp log-neighbor-changes
network 50.0.1.0 mask 255.255.255.0
neighbor 1.1.1.199 remote-as 1000
neighbor 1.1.1.199 ebgp-multihop 2
neighbor 1.1.1.199 update-source Loopback0
neighbor 50.0.2.254 remote-as 502
neighbor 50.0.2.254 ebgp-multihop 2
neighbor 50.0.2.254 update-source Loopback0
Step5: Configure the name server.
!
ISP1(config)#
ip name-server 8.8.8.8
ip domain lookup
Task3: ISP2 router.
Step1: Configure interfaces.
!
ISP2(config)#
interface Loopback0
description FOR BGP PEERING ONLY!
ip address 50.0.2.254 255.255.255.255
!
interface GigabitEthernet0/0
description Link to Internet int g0/1
ip address 1.1.1.6 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP1 int g0/1
ip address 50.0.1.6 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description Link to Customer Router R1 int g0/1
ip address 50.0.2.1 255.255.255.252
no shutdown
Step2: Configure static routes to the ISP1 and Internet routers' Loopback0 addresses for eBGP peering.
!
ISP2(config)#
ip route 1.1.1.199 255.255.255.255 1.1.1.5
ip route 50.0.1.254 255.255.255.255 50.0.1.5
Step3: Configure static routes for BGP to advertise to the other peers.
!
ISP2(config)#
ip route 50.0.2.0 255.255.255.0 Null0
Step4: Configure BGP protocol for AS502.
!
ISP2(config)#
router bgp 502
bgp router-id 50.0.2.254
bgp log-neighbor-changes
network 50.0.2.0 mask 255.255.255.0
neighbor 1.1.1.199 remote-as 1000
neighbor 1.1.1.199 ebgp-multihop 2
neighbor 1.1.1.199 update-source Loopback0
neighbor 50.0.1.254 remote-as 501
neighbor 50.0.1.254 ebgp-multihop 2
neighbor 50.0.1.254 update-source Loopback0
Step5: Configure the name server.
!
ISP2(config)#
ip name-server 8.8.8.8
ip domain lookup
Verification for Part1: Use the following commands to verify proper BGP operation and to test connectivity to the google.com and srv1.com domain names from both the ISP1 and ISP2 routers.
Example:
ISP1# show ip bgp summary
ISP1# show ip bgp
ISP1# ping google.com
Part2:
Task1: Connect router R1 to the ISP1 and ISP2 by configuring its interfaces.
Step1: Configure interfaces:
!
R1(config)#
interface GigabitEthernet0/0
description Link to ISP1 int g0/2
ip address 50.0.1.2 255.255.255.252
no shutdown
!
interface GigabitEthernet0/1
description Link to ISP2 int g0/2
ip address 50.0.2.2 255.255.255.252
no shutdown
!
interface GigabitEthernet0/2
description SUPPORTS MULTIPLE SUB-INTERFACES
no shutdown
Step2: Verify interfaces' address assignment and the connectivity to ISPs.
!
R1#show ip interface brief
Interface IP-Address OK? Method Status Protocol
GigabitEthernet0/0 50.0.1.2 YES manual up up
GigabitEthernet0/1 50.0.2.2 YES manual up up
!
R1#ping 50.0.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 50.0.1.1, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 2/2/3 ms
!
R1#ping 50.0.2.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 50.0.2.1, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 2/2/3 ms
Task2: Configure an IP SLA instance for each ISP.
Step1: Configure IP SLA instances 1 and 2.
!
R1(config)#
ip sla 1
icmp-echo 50.0.1.1 source-interface GigabitEthernet0/0
threshold 500
timeout 800
frequency 1
!
ip sla 2
icmp-echo 50.0.2.1 source-interface GigabitEthernet0/1
threshold 500
timeout 800
frequency 1
Step2: Start IP SLA for both instances.
!
R1(config)#
ip sla schedule 1 life forever start-time now
ip sla schedule 2 life forever start-time now
Step3: Verify the IP SLA operation.
!
R1# show ip sla summary
IPSLAs Latest Operation Summary
Codes: * active, ^ inactive, ~ pending
ID Type Destination Stats Return Last
(ms) Code Run
-----------------------------------------------------------------------
*1 icmp-echo 50.0.1.1 RTT=2 OK 0 seconds ago
*2 icmp-echo 50.0.2.1 RTT=1 OK 0 seconds ago
Task3: Create a track object for each IP SLA instance.
Step1: Configure track object.
!
R1(config)#
track 1 ip sla 1
delay down 3 up 2
!
track 2 ip sla 2
delay down 3 up 2
Step2: Verify track object.
!
R1#show track
Track 1
IP SLA 1 state
State is Up
1 change, last change 00:00:04
Delay up 2 secs, down 3 secs
Latest operation return code: OK
Latest RTT (millisecs) 2
Track 2
IP SLA 2 state
State is Up
1 change, last change 00:00:04
Delay up 2 secs, down 3 secs
Latest operation return code: OK
Latest RTT (millisecs) 5
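As a rough back-of-the-envelope sketch (my own arithmetic under simplifying assumptions, not an IOS-documented formula), the probe and track settings above bound the worst-case failure detection time:

```python
# Worst-case failover detection with the values configured above:
# the link can die just after a successful probe, so R1 waits up to one
# probe interval, the next probe must time out, and then the track object
# holds its "delay down" timer before declaring the tracked route down.
probe_interval_s = 1.0     # ip sla: frequency 1
probe_timeout_s = 0.8      # ip sla: timeout 800 (milliseconds)
down_delay_s = 3.0         # track: delay down 3

worst_case_s = probe_interval_s + probe_timeout_s + down_delay_s
print(worst_case_s)  # 4.8
```

So with these values, a dead ISP link should be detected and its default route withdrawn within roughly five seconds.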
Task4: Configure a default route with a track object toward each ISP.
Step1: Configure default routes.
!
R1(config)#
ip route 0.0.0.0 0.0.0.0 50.0.1.1 track 1
ip route 0.0.0.0 0.0.0.0 50.0.2.1 track 2
Step2: Verify default routes in the routing table.
!
R1#show ip route static
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
Task5: Configure the name server.
R1(config)#
ip name-server 8.8.8.8
ip domain lookup
Task6: Test the connectivity to the internet.
Step1: Ping google.com from R1's interface g0/0.
!
R1#ping google.com source g0/0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 66.249.64.19, timeout is 2 seconds:
Packet sent with a source address of 50.0.1.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 3/3/4 ms
Step2: Ping srv1.com from R1's interface g0/1.
!
R1#ping srv1.com source g0/1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.10, timeout is 2 seconds:
Packet sent with a source address of 50.0.2.2
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 3/254/1006 ms
R1#
Part3
Task1: Create VRF-lite instance for each company.
R1(config)#
!
ip vrf VRF_A
ip vrf VRF_B
ip vrf VRF_C
Task2: Configure subinterfaces and verify VRF parameters.
Step1: Subinterfaces.
!
R1(config)#
interface GigabitEthernet0/2.10
encapsulation dot1Q 10
ip vrf forwarding VRF_A
ip address 192.168.10.1 255.255.255.0
!
interface GigabitEthernet0/2.20
encapsulation dot1Q 20
ip vrf forwarding VRF_B
ip address 192.168.20.1 255.255.255.0
!
interface GigabitEthernet0/2.30
encapsulation dot1Q 30
ip vrf forwarding VRF_C
ip address 192.168.30.1 255.255.255.0
Step2: Verify VRF.
!
R1#show ip vrf
Name Default RD Interfaces
VRF_A Gi0/2.10
VRF_B Gi0/2.20
VRF_C Gi0/2.30
Task4: Configure DHCP servers for VRF instances.
Step1: Exclude ranges of reserved IP addresses.
!
R1(config)#
ip dhcp excluded-address vrf VRF_A 192.168.10.1 192.168.10.10
ip dhcp excluded-address vrf VRF_B 192.168.20.1 192.168.20.10
ip dhcp excluded-address vrf VRF_C 192.168.30.1 192.168.30.10
Step2: Configure DHCP servers.
!
R1(config)#
ip dhcp pool VRF_A
vrf VRF_A
network 192.168.10.0 255.255.255.0
default-router 192.168.10.1
dns-server 8.8.8.8
domain-name vrfa.local
!
ip dhcp pool VRF_B
vrf VRF_B
network 192.168.20.0 255.255.255.0
default-router 192.168.20.1
dns-server 8.8.8.8
domain-name vrfb.local
!
ip dhcp pool VRF_C
vrf VRF_C
network 192.168.30.0 255.255.255.0
default-router 192.168.30.1
dns-server 8.8.8.8
domain-name vrfc.local
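As a quick sanity check on the pool sizing (simple arithmetic, assuming the usual /24 conventions), each pool leaves 244 leasable addresses:

```python
# Each /24 has 254 usable host addresses (.1-.254). The excluded-address
# command reserves .1-.10 (gateway plus spares), leaving .11-.254 for leases.
usable_hosts = 254
excluded = 10              # ip dhcp excluded-address ... .1 ... .10
leasable = usable_hosts - excluded
print(leasable)  # 244
```

This is why the first PC in each VRF receives .11, as seen in the DHCP verification later in Part3.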
Task5: Configure SW1: VLANs, trunking, and access ports.
Step1: Configure VLAN trunking.
!
SW1(config)#
interface GigabitEthernet0/0
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
Step2: Configure VLANs.
!
SW1(config)#
vlan 10
name VRF_A
!
vlan 20
name VRF_B
!
vlan 30
name VRF_C
Step3: Assign access ports to VLANs.
!
SW1(config)#
interface GigabitEthernet0/1
description LAN_A
switchport access vlan 10
switchport mode access
!
interface GigabitEthernet0/2
description LAN_B
switchport access vlan 20
switchport mode access
!
interface GigabitEthernet0/3
description LAN_C
switchport access vlan 30
switchport mode access
Step4: Verify SW1's switching configurations.
!
SW1# show vlan brief
SW1# show interfaces trunk
Task6: On each PC, obtain an IP address via DHCP and verify connectivity to the default gateway.
PC1> ip dhcp
DDORA IP 192.168.10.11/24 GW 192.168.10.1
PC1> ping 192.168.10.1
84 bytes from 192.168.10.1 icmp_seq=1 ttl=255 time=5.143 ms
84 bytes from 192.168.10.1 icmp_seq=2 ttl=255 time=4.637 ms
!
PC2> ip dhcp
DDORA IP 192.168.20.11/24 GW 192.168.20.1
PC2> ping 192.168.20.1
84 bytes from 192.168.20.1 icmp_seq=1 ttl=255 time=3.416 ms
84 bytes from 192.168.20.1 icmp_seq=2 ttl=255 time=3.660 ms
!
PC3> ip dhcp
DDORA IP 192.168.30.11/24 GW 192.168.30.1
PC3> ping 192.168.30.1
84 bytes from 192.168.30.1 icmp_seq=1 ttl=255 time=5.242 ms
84 bytes from 192.168.30.1 icmp_seq=2 ttl=255 time=3.726 ms
Part4: When configuring NAT on a Cisco router with multiple ISPs, you cannot simply enter several NAT overload statements: applying the second line replaces the first, leaving you with only one NAT path. Here you will use route-maps to configure NAT toward multiple ISPs.
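To illustrate the overload (PAT) behavior that each per-ISP NAT statement provides, here is a toy Python model of a translation table (the port-allocation scheme and names are illustrative, not IOS internals):

```python
# Toy PAT model: many inside (ip, port) flows share one outside address and
# are disambiguated by unique translated source ports.
translations = {}          # (inside_ip, inside_port) -> (outside_ip, outside_port)
next_port = 1024

def pat_translate(inside_ip, inside_port, outside_ip="50.0.1.2"):
    """Allocate (or reuse) a translation entry for an inside flow."""
    global next_port
    key = (inside_ip, inside_port)
    if key not in translations:
        translations[key] = (outside_ip, next_port)
        next_port += 1
    return translations[key]

print(pat_translate("192.168.10.11", 5000))  # ('50.0.1.2', 1024)
print(pat_translate("192.168.10.12", 5000))  # ('50.0.1.2', 1025) - new outside port
print(pat_translate("192.168.10.11", 5000))  # ('50.0.1.2', 1024) - entry reused
```

The route-maps configured below decide which outside address (ISP1's or ISP2's link) a given flow is overloaded onto.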
Task1: Identify interfaces to be enabled for translation.
Step1: Enable NAT for outside interfaces.
!
R1(config)#
interface range g0/0-1
ip nat outside
Step2: Enable NAT for inside interface.
!
R1(config)#
interface g0/2.10
ip nat inside
interface g0/2.20
ip nat inside
interface g0/2.30
ip nat inside
Task2: Use standard ACLs to identify the VRF subnets subject to translation.
R1(config)#
ip access-list standard NAT_VRF_A
permit 192.168.10.0 0.0.0.255
ip access-list standard NAT_VRF_B
permit 192.168.20.0 0.0.0.255
ip access-list standard NAT_VRF_C
permit 192.168.30.0 0.0.0.255
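A note on the 0.0.0.255 wildcard masks used above: an ACL wildcard mask is the bitwise inverse of the subnet mask. A small sketch:

```python
# The ACL wildcard mask is the per-octet inverse of the subnet mask:
# 0 bits must match exactly, 1 bits are "don't care".
def wildcard(subnet_mask: str) -> str:
    return ".".join(str(255 - int(octet)) for octet in subnet_mask.split("."))

print(wildcard("255.255.255.0"))    # 0.0.0.255 - the /24 VRF subnets above
print(wildcard("255.255.255.252"))  # 0.0.0.3   - would match a /30 link
```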
Task3: Use route-maps to match each VRF's subnet against a NAT outside interface. Each VRF needs two route-maps, one for the path via ISP1 and another for the path via ISP2.
R1(config)#
route-map NAT_A_ONE permit 10
match ip address NAT_VRF_A
match interface GigabitEthernet0/0
!
route-map NAT_A_TWO permit 10
match ip address NAT_VRF_A
match interface GigabitEthernet0/1
!
route-map NAT_B_ONE permit 10
match ip address NAT_VRF_B
match interface GigabitEthernet0/0
!
route-map NAT_B_TWO permit 10
match ip address NAT_VRF_B
match interface GigabitEthernet0/1
!
route-map NAT_C_ONE permit 10
match ip address NAT_VRF_C
match interface GigabitEthernet0/0
!
route-map NAT_C_TWO permit 10
match ip address NAT_VRF_C
match interface GigabitEthernet0/1
Task4: Create six NAT overload statements to enable translation for the three VRFs, two per VRF.
R1(config)#
ip nat inside source route-map NAT_A_ONE interface GigabitEthernet0/0 vrf VRF_A overload
ip nat inside source route-map NAT_A_TWO interface GigabitEthernet0/1 vrf VRF_A overload
ip nat inside source route-map NAT_B_ONE interface GigabitEthernet0/0 vrf VRF_B overload
ip nat inside source route-map NAT_B_TWO interface GigabitEthernet0/1 vrf VRF_B overload
ip nat inside source route-map NAT_C_ONE interface GigabitEthernet0/0 vrf VRF_C overload
ip nat inside source route-map NAT_C_TWO interface GigabitEthernet0/1 vrf VRF_C overload
Task5: From each PC, ping google.com, then check the NAT translation table for each VRF.
PC1> ping google.com
Cannot resolve google.com
PC2> ping google.com
Cannot resolve google.com
PC3> ping google.com
Cannot resolve google.com
R1#show ip nat translations vrf VRF_A
The clients are unable to resolve the domain, and there are no entries in the NAT table. The problem is not with NAT but with routing itself.
Task6: Enable VRFs to route traffic to the outside networks.
Step1: Verify VRFs’ routing tables.
R1#show ip route vrf VRF_A
Routing Table: VRF_A
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
R1#show ip route vrf VRF_B
Routing Table: VRF_B
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
R1#show ip route vrf VRF_C
Routing Table: VRF_C
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
Each VRF contains only its directly connected routes, which is not enough to get traffic out of the local network.
Step2: Because you need to copy only the default route from the global routing table into each VRF's routing table, identify the default route with a prefix-list, match that prefix-list in a route-map, and reference the route-map in the route-replicate command.
R1(config)#
ip prefix-list DF_ROUTE seq 5 permit 0.0.0.0/0
!
route-map RM_DF_ROUTE permit 10
match ip address prefix-list DF_ROUTE
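Note that an IOS prefix-list entry without le/ge modifiers is an exact match, so permit 0.0.0.0/0 matches only the default route itself. A small sketch of that semantics (illustrative Python, not IOS code):

```python
# "ip prefix-list DF_ROUTE seq 5 permit 0.0.0.0/0" has no le/ge modifier,
# so it matches the default route exactly and nothing else.
from ipaddress import ip_network

def df_route_matches(route: str) -> bool:
    return ip_network(route) == ip_network("0.0.0.0/0")

print(df_route_matches("0.0.0.0/0"))        # True  - replicated into the VRFs
print(df_route_matches("192.168.10.0/24"))  # False - left out
```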
Step3: Copy default route into every VRF’s routing table.
R1(config)#
ip vrf VRF_A
route-replicate from vrf global unicast static route-map RM_DF_ROUTE
!
ip vrf VRF_B
route-replicate from vrf global unicast static route-map RM_DF_ROUTE
!
ip vrf VRF_C
route-replicate from vrf global unicast static route-map RM_DF_ROUTE
Step4: Verify each VRF’s routing table again. The default route is now present.
R1#show ip route vrf VRF_A
Routing Table: VRF_A
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
R1#show ip route vrf VRF_B
Routing Table: VRF_B
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
R1#show ip route vrf VRF_C
Routing Table: VRF_C
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
Step5: From each PC, ping google.com again, then check the NAT translation table for each VRF.
PC1> ping google.com
google.com resolved to 66.249.64.19
84 bytes from 66.249.64.19 icmp_seq=1 ttl=253 time=6.757 ms
84 bytes from 66.249.64.19 icmp_seq=2 ttl=253 time=6.964 ms
PC2> ping google.com
google.com resolved to 66.249.64.19
84 bytes from 66.249.64.19 icmp_seq=1 ttl=253 time=7.295 ms
84 bytes from 66.249.64.19 icmp_seq=2 ttl=253 time=7.027 ms
PC3> ping google.com
google.com resolved to 66.249.64.19
84 bytes from 66.249.64.19 icmp_seq=1 ttl=253 time=8.626 ms
84 bytes from 66.249.64.19 icmp_seq=2 ttl=253 time=7.289 ms
R1#show ip nat translations vrf VRF_C
Pro Inside global Inside local Outside local Outside global
udp 50.0.1.2:4457 192.168.30.11:4457 8.8.8.8:53 8.8.8.8:53
udp 50.0.1.2:6739 192.168.30.11:6739 8.8.8.8:53 8.8.8.8:53
udp 50.0.1.2:23990 192.168.30.11:23990 8.8.8.8:53 8.8.8.8:53
icmp 50.0.2.2:37532 192.168.30.11:37532 66.249.64.19:37532 66.249.64.19:37532
icmp 50.0.2.2:37788 192.168.30.11:37788 66.249.64.19:37788 66.249.64.19:37788
udp 50.0.1.2:40553 192.168.30.11:40553 8.8.8.8:53 8.8.8.8:53
Part5
Task1: Configure R1 for Datacenter access.
Step1: Create a VRF instance for the data center network and replicate the customers’ routes into it. This allows data center services to reach the customers’ VRFs.
R1(config)#
ip vrf VRF_SERVERS
route-replicate from vrf VRF_A unicast connected
route-replicate from vrf VRF_B unicast connected
route-replicate from vrf VRF_C unicast connected
Step2: Verify routing table of data center VRF.
R1#show ip route vrf VRF_SERVERS
Routing Table: VRF_SERVERS
Gateway of last resort is not set
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
Step3: Configure R1’s data center facing interface g0/3.
R1(config)#
interface GigabitEthernet0/3
description Link to SW2 int e0 (SERVER_FARM)
ip vrf forwarding VRF_SERVERS
ip address 192.168.40.1 255.255.255.0
no shutdown
Step4: Verify that the 192.168.40.0/24 subnet is in the VRF_SERVERS routing table, then ping server SRV2.
R1#show ip route vrf VRF_SERVERS | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.40.0/24 is directly connected, GigabitEthernet0/3
R1#ping vrf VRF_SERVERS 192.168.40.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.40.10, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/1/1 ms
Step5: The customers are still unable to reach the data center; due to the lack of proper routing, the customers’ VRFs are missing the data center subnet.
PC1> ping 192.168.40.10
*50.0.2.1 icmp_seq=1 ttl=254 time=8.496 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=6.855 ms (ICMP type:3, code:1, Destination host unreachable)
PC2> ping 192.168.40.10
*50.0.2.1 icmp_seq=1 ttl=254 time=6.742 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=7.356 ms (ICMP type:3, code:1, Destination host unreachable)
PC3> ping 192.168.40.10
*50.0.2.1 icmp_seq=1 ttl=254 time=6.916 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=7.499 ms (ICMP type:3, code:1, Destination host unreachable)
Step6: Copy the route for 192.168.40.0/24 from VRF_SERVERS into the customers’ VRFs using a route-map and prefix-list. The route-map is required because, without it, replication would also copy the customer routes that VRF_SERVERS learned from the other customer VRFs back into each customer VRF, violating the separation rule that customer VRFs must not reach each other.
R1(config)#
ip prefix-list VRF_SERVER_NETWORK seq 5 permit 192.168.40.0/24
!
route-map RM_VRF_SERVERS permit 10
match ip address prefix-list VRF_SERVER_NETWORK
!
ip vrf VRF_A
route-replicate from vrf VRF_SERVERS unicast connected route-map RM_VRF_SERVERS
!
ip vrf VRF_B
route-replicate from vrf VRF_SERVERS unicast connected route-map RM_VRF_SERVERS
!
ip vrf VRF_C
route-replicate from vrf VRF_SERVERS unicast connected route-map RM_VRF_SERVERS
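To make the leak risk concrete, here is a small Python sketch (illustrative, not IOS code) of what the prefix-list keeps out of each customer VRF:

```python
# VRF_SERVERS holds routes replicated from all three customer VRFs, so an
# unfiltered replication back would leak each customer's subnet into the
# other customers' VRFs. The prefix-list permits only the data-center subnet.
from ipaddress import ip_network

vrf_servers_routes = ["192.168.10.0/24", "192.168.20.0/24",
                      "192.168.30.0/24", "192.168.40.0/24"]
allowed = {ip_network("192.168.40.0/24")}   # prefix-list VRF_SERVER_NETWORK

copied_into_vrf_a = [r for r in vrf_servers_routes if ip_network(r) in allowed]
print(copied_into_vrf_a)  # ['192.168.40.0/24'] - only the data-center subnet
```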
Step7: Verify routing tables of customers’ VRFs.
R1#show ip route vrf VRF_A | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_B | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_C | include 192.168.40.0
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
Step8: Ping SRV2 from all PCs.
PC1> ping 192.168.40.10
192.168.40.10 icmp_seq=1 timeout
192.168.40.10 icmp_seq=2 timeout
84 bytes from 192.168.40.10 icmp_seq=3 ttl=63 time=5.308 ms
84 bytes from 192.168.40.10 icmp_seq=4 ttl=63 time=3.760 ms
PC2> ping 192.168.40.10
84 bytes from 192.168.40.10 icmp_seq=1 ttl=63 time=3.061 ms
84 bytes from 192.168.40.10 icmp_seq=2 ttl=63 time=5.045 ms
PC3> ping 192.168.40.10
84 bytes from 192.168.40.10 icmp_seq=1 ttl=63 time=4.000 ms
84 bytes from 192.168.40.10 icmp_seq=2 ttl=63 time=4.318 ms
Task2: The last thing left is to configure policy-based routing. PBR will be configured for VRF_A, VRF_B, and VRF_C, with the same policy for all subnets: HTTP, HTTPS, and DNS (both TCP and UDP) traffic originating from the VRFs’ subnets must be routed over the path to ISP1, while the rest of the traffic goes over the path to ISP2.
Step1: Define the traffic for PBR with ACL statements.
R1(config)#
ip access-list extended VRF_A_ALL_TRAFFIC
permit ip 192.168.10.0 0.0.0.255 any
ip access-list extended VRF_A_POLICY
permit tcp 192.168.10.0 0.0.0.255 any eq www
permit tcp 192.168.10.0 0.0.0.255 any eq 443
permit tcp 192.168.10.0 0.0.0.255 any eq domain
permit udp 192.168.10.0 0.0.0.255 any eq domain
!
ip access-list extended VRF_B_ALL_TRAFFIC
permit ip 192.168.20.0 0.0.0.255 any
ip access-list extended VRF_B_POLICY
permit tcp 192.168.20.0 0.0.0.255 any eq www
permit tcp 192.168.20.0 0.0.0.255 any eq 443
permit tcp 192.168.20.0 0.0.0.255 any eq domain
permit udp 192.168.20.0 0.0.0.255 any eq domain
!
ip access-list extended VRF_C_ALL_TRAFFIC
permit ip 192.168.30.0 0.0.0.255 any
ip access-list extended VRF_C_POLICY
permit tcp 192.168.30.0 0.0.0.255 any eq www
permit tcp 192.168.30.0 0.0.0.255 any eq 443
permit tcp 192.168.30.0 0.0.0.255 any eq domain
permit udp 192.168.30.0 0.0.0.255 any eq domain
Step2: For each VRF, create a route-map policy that matches the corresponding ACLs. Sequence 10 of each route-map routes policy traffic to ISP1, and sequence 20 routes the rest of the traffic to ISP2. The previously configured IP SLA instances and track objects are used with the set ip next-hop verify-availability command to monitor the links: if one of the ISPs becomes unavailable, the policy is ignored and traffic is routed with normal forwarding. Without verify-availability, if an ISP stopped responding and router R1 were unaware of the situation, the policy would keep sending traffic over the dead path, resulting in intermittent connectivity.
R1(config)#
route-map POLICY_A permit 10
match ip address VRF_A_POLICY
set ip next-hop verify-availability 50.0.1.1 10 track 1
route-map POLICY_A permit 20
match ip address VRF_A_ALL_TRAFFIC
set ip next-hop verify-availability 50.0.2.1 10 track 2
!
route-map POLICY_B permit 10
match ip address VRF_B_POLICY
set ip next-hop verify-availability 50.0.1.1 20 track 1
route-map POLICY_B permit 20
match ip address VRF_B_ALL_TRAFFIC
set ip next-hop verify-availability 50.0.2.1 20 track 2
!
route-map POLICY_C permit 10
match ip address VRF_C_POLICY
set ip next-hop verify-availability 50.0.1.1 30 track 1
route-map POLICY_C permit 20
match ip address VRF_C_ALL_TRAFFIC
set ip next-hop verify-availability 50.0.2.1 30 track 2
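The decision logic these route-maps implement can be sketched as a simplified Python model of POLICY_A (my own reading of the lab's behavior; real PBR evaluation happens per packet inside IOS):

```python
# Simplified model of POLICY_A: policy-class traffic (HTTP/HTTPS/DNS) goes to
# ISP1 while track 1 is up; all other VRF_A traffic goes to ISP2 while track 2
# is up; if the matched sequence's next hop is down, the packet falls back to
# normal (routing-table) forwarding.
POLICY_PORTS = {80, 443, 53}   # www, 443, domain (TCP and UDP)

def policy_a_next_hop(dst_port, track1_up, track2_up):
    """Return the PBR next hop, or None for normal forwarding."""
    if dst_port in POLICY_PORTS:                    # route-map sequence 10
        return "50.0.1.1" if track1_up else None
    return "50.0.2.1" if track2_up else None        # route-map sequence 20

print(policy_a_next_hop(80, True, True))    # 50.0.1.1 - HTTP via ISP1
print(policy_a_next_hop(22, True, True))    # 50.0.2.1 - other traffic via ISP2
print(policy_a_next_hop(80, False, True))   # None - ISP1 down, normal forwarding
```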
Step3: Apply route-map policies to the subinterface.
R1(config)#
interface GigabitEthernet0/2.10
ip policy route-map POLICY_A
!
interface GigabitEthernet0/2.20
ip policy route-map POLICY_B
!
interface GigabitEthernet0/2.30
ip policy route-map POLICY_C
Step4: Verify policy application.
R1#show ip policy
Interface Route map
Gi0/2.10 POLICY_A
Gi0/2.20 POLICY_B
Gi0/2.30 POLICY_C
Step5: Test PBR for each VRF. Enable debugging for IP policy, then initiate policy traffic from the PCs to the srv1.com domain.
R1#debug ip policy
Policy routing debugging is on
Example for HTTP traffic:
PC1> ping srv1.com -P 6 -p 80
srv1.com resolved to 1.1.1.10
Connect 80@srv1.com seq=1 ttl=61 time=14.787 ms
SendData 80@srv1.com seq=1 ttl=61 time=6.401 ms
Close 80@srv1.com seq=1 ttl=61 time=8.461 ms
Connect 80@srv1.com seq=2 ttl=61 time=7.421 ms
SendData 80@srv1.com seq=2 ttl=61 time=7.366 ms
Close 80@srv1.com seq=2 ttl=61 time=6.320 ms
Connect 80@srv1.com seq=3 ttl=61 time=6.352 ms
SendData 80@srv1.com seq=3 ttl=61 time=5.322 ms
Close 80@srv1.com seq=3 ttl=61 time=7.418 ms
Connect 80@srv1.com seq=4 ttl=61 time=7.422 ms
SendData 80@srv1.com seq=4 ttl=61 time=7.466 ms
Close 80@srv1.com seq=4 ttl=61 time=8.474 ms
From PC1's point of view it seems that PBR is working, but check the debug output. The "Nexthop rejected" line indicates that the next hop was rejected, and the surrounding lines show that normal forwarding was performed instead, so PBR actually fails to policy-route the traffic.
Aug 3 16:34:25.859: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, FIB policy match
*Aug 3 16:34:25.859: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, PBR Counted
*Aug 3 16:34:25.859: CEF-IP-POLICY: fib for addr 50.0.1.1 is Not Attached; Nexthop rejected
*Aug 3 16:34:25.859: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, FIB policy rejected - normal forwarding
*Aug 3 16:34:25.860: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, policy match
*Aug 3 16:34:25.861: IP: route map POLICY_A, item 10, permit
*Aug 3 16:34:25.861: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, policy rejected -- normal forwarding
*Aug 3 16:34:25.875: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 16:34:25.875: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 16:34:25.875: CEF-IP-POLICY: fib for addr 50.0.1.1 is Not Attached; Nexthop rejected
*Aug 3 16:34:25.875: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy rejected - normal forwarding
*Aug 3 16:34:25.887: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 52, FIB policy match
Step6: Fixing PBR. The reason for the failure is, once again, that the VRFs’ routing tables lack the appropriate routes; in this case, the next-hop addresses defined in the route-maps are not in the RIBs.
R1#show ip route vrf VRF_A 50.0.1.1
Routing Table: VRF_A
% Network not in table
R1#show ip route vrf VRF_A 50.0.2.1
Routing Table: VRF_A
% Network not in table
The same goes for VRF_B and VRF_C. Again using a prefix-list and a route-map, copy the /30 subnets of the links between R1 and the ISPs into the customers’ VRFs; this will fix the problem.
R1(config)#
ip prefix-list ISP1_IP seq 5 permit 50.0.1.0/30
ip prefix-list ISP2_IP seq 5 permit 50.0.2.0/30
!
route-map RM_ISPs_IPs permit 10
match ip address prefix-list ISP1_IP ISP2_IP
!
ip vrf VRF_A
route-replicate from vrf global unicast connected route-map RM_ISPs_IPs
!
ip vrf VRF_B
route-replicate from vrf global unicast connected route-map RM_ISPs_IPs
!
ip vrf VRF_C
route-replicate from vrf global unicast connected route-map RM_ISPs_IPs
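What route-replicate with a route-map filter does can be modeled simply: connected routes from the global table are copied into a VRF table only if their prefix is permitted by the attached prefix-lists. The sketch below is illustrative only; the extra 10.0.0.0/30 entry is a hypothetical route added to show that non-matching prefixes are not replicated.

```python
# Conceptual model of selective route replication: copy connected routes
# from the global table into a VRF, keeping only prefixes permitted by
# the route-map's prefix-lists (ISP1_IP and ISP2_IP via RM_ISPs_IPs).
import ipaddress

# Connected routes in the global routing table (prefix -> interface)
global_connected = {
    "50.0.1.0/30": "GigabitEthernet0/0",   # link to ISP1
    "50.0.2.0/30": "GigabitEthernet0/1",   # link to ISP2
    "10.0.0.0/30": "GigabitEthernet0/4",   # hypothetical extra link
}

# Prefixes permitted by route-map RM_ISPs_IPs
rm_isps_ips = {ipaddress.ip_network("50.0.1.0/30"),
               ipaddress.ip_network("50.0.2.0/30")}

def replicate(source, route_map):
    """Copy only the routes whose prefix the route-map permits."""
    return {prefix: intf for prefix, intf in source.items()
            if ipaddress.ip_network(prefix) in route_map}

vrf_a = replicate(global_connected, rm_isps_ips)
print(sorted(vrf_a))  # -> ['50.0.1.0/30', '50.0.2.0/30']
```

Filtering with the prefix-lists matters: replicating all connected routes would also leak the other customers' subnets between VRFs, defeating the separation requirement.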
Step7: Verify VRFs’ routing tables.
R1#show ip route vrf VRF_A
Routing Table: VRF_A
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
50.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C + 50.0.1.0/30 is directly connected, GigabitEthernet0/0
L 50.0.1.2/32 is directly connected, GigabitEthernet0/0
C + 50.0.2.0/30 is directly connected, GigabitEthernet0/1
L 50.0.2.2/32 is directly connected, GigabitEthernet0/1
192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.10.0/24 is directly connected, GigabitEthernet0/2.10
L 192.168.10.1/32 is directly connected, GigabitEthernet0/2.10
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
L 192.168.40.1/32 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_B
Routing Table: VRF_B
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
50.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C + 50.0.1.0/30 is directly connected, GigabitEthernet0/0
L 50.0.1.2/32 is directly connected, GigabitEthernet0/0
C + 50.0.2.0/30 is directly connected, GigabitEthernet0/1
L 50.0.2.2/32 is directly connected, GigabitEthernet0/1
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, GigabitEthernet0/2.20
L 192.168.20.1/32 is directly connected, GigabitEthernet0/2.20
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
L 192.168.40.1/32 is directly connected, GigabitEthernet0/3
R1#show ip route vrf VRF_C
Routing Table: VRF_C
Gateway of last resort is 50.0.2.1 to network 0.0.0.0
S* + 0.0.0.0/0 [1/0] via 50.0.2.1
[1/0] via 50.0.1.1
50.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
C + 50.0.1.0/30 is directly connected, GigabitEthernet0/0
L 50.0.1.2/32 is directly connected, GigabitEthernet0/0
C + 50.0.2.0/30 is directly connected, GigabitEthernet0/1
L 50.0.2.2/32 is directly connected, GigabitEthernet0/1
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/24 is directly connected, GigabitEthernet0/2.30
L 192.168.30.1/32 is directly connected, GigabitEthernet0/2.30
192.168.40.0/24 is variably subnetted, 2 subnets, 2 masks
C + 192.168.40.0/24 is directly connected, GigabitEthernet0/3
L 192.168.40.1/32 is directly connected, GigabitEthernet0/3
Step8: Now that the routes of the links to the ISPs are in the VRFs’ routing tables, you can test PBR again.
R1#debug ip policy
Policy routing debugging is on
PC1> ping srv1.com -P 6 -p 80
srv1.com resolved to 1.1.1.10
Connect 80@srv1.com timeout
Connect 80@srv1.com seq=2 ttl=61 time=7.437 ms
SendData 80@srv1.com seq=2 ttl=61 time=8.401 ms
Debug ip policy output:
*Aug 3 17:02:48.793: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, FIB policy match
*Aug 3 17:02:48.793: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, PBR Counted
*Aug 3 17:02:48.793: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, g=50.0.1.1, len 54, FIB policy routed
*Aug 3 17:02:48.794: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8, len 54, policy match
*Aug 3 17:02:48.795: IP: route map POLICY_A, item 10, permit
*Aug 3 17:02:48.795: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=8.8.8.8 (GigabitEthernet0/0), len 54, policy routed
*Aug 3 17:02:48.795: IP: GigabitEthernet0/2.10 to GigabitEthernet0/0 50.0.1.1
R1#
*Aug 3 17:02:48.809: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 17:02:48.809: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 17:02:48.809: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, g=50.0.1.1, len 60, FIB policy routed
*Aug 3 17:02:49.807: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 17:02:49.807: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 17:02:49.807: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, g=50.0.1.1, len 60, FIB policy routed
R1#
*Aug 3 17:02:50.808: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, FIB policy match
*Aug 3 17:02:50.808: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, len 60, PBR Counted
*Aug 3 17:02:50.808: IP: s=192.168.10.11 (GigabitEthernet0/2.10), d=1.1.1.10, g=50.0.1.1, len 60, FIB policy routed
The matching counters in parentheses in the show ip access-lists output indicate that policy traffic flows as expected.
R1#show ip access-lists
Extended IP access list VRF_A_ALL_TRAFFIC
10 permit ip 192.168.10.0 0.0.0.255 any
Extended IP access list VRF_A_POLICY
10 permit tcp 192.168.10.0 0.0.0.255 any eq www (48 matches)
20 permit tcp 192.168.10.0 0.0.0.255 any eq 443
30 permit tcp 192.168.10.0 0.0.0.255 any eq domain
40 permit udp 192.168.10.0 0.0.0.255 any eq domain (4 matches)
Simulate traffic on all PCs to test the rest of the policy, then check the ACL matching counters again.
HTTP: ping srv1.com -P 6 -p 80
HTTPS: ping srv1.com -P 6 -p 443
DNS (UDP): ping srv1.com -P 17 -p 53
DNS (TCP): ping srv1.com -P 6 -p 53
ICMP: ping 1.1.1.10
Step9: The last thing to do is to verify that PCs in different VRFs are still unable to reach each other.
PC1> ping 192.168.20.11
*50.0.2.1 icmp_seq=1 ttl=254 time=8.795 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=8.430 ms (ICMP type:3, code:1, Destination host unreachable)
^C
PC2> ping 192.168.30.11
*50.0.2.1 icmp_seq=1 ttl=254 time=7.667 ms (ICMP type:3, code:1, Destination host unreachable)
*50.0.2.1 icmp_seq=2 ttl=254 time=8.140 ms (ICMP type:3, code:1, Destination host unreachable)
THIS LAB WILL BE THE BASE FOR MY UPCOMING TROUBLESHOOTING SCENARIO LABS.