Monday, March 29, 2010

Multicast Basics Lab #1: PIM-DM

Hi all,
last week I tried to solve Dan Shechter's great Troubleshooting Challenge
( http://dans-net.com/Blog/files/TS_Challenge_is_on.html ). I have to admit it was really difficult and challenging for me, at least the second half of the trouble tickets; I spent a lot of time on it.
I have to thank Dan a lot for taking the time to read my configurations THREE times, because I sent him three different versions, modifying my solutions based on his feedback.

His last feedback was literally "Don't break something when fixing something else"!
I need to print this one and put it under my desk glass, really... :-)

Anyway, one of the weaknesses this challenge exposed was multicast, so I really need to do a basic lab to refresh it and try out the various related technologies.

So here's the topology:


R4 will act as the server and R6 / R1 / R2 as clients. As promised, all my labs will be dual stack, so some practice on IPv6 multicast will also be useful (but only in the next posts... since there's nothing in IPv6 that looks like dense mode...)


Here are the initial configs:




#################### R1
## basic config
ena
conf t
no ip domain-look
username cisco priv 15 pass 0 cisco

line con 0
loggin syn
no exec-time
line vty 0 5
login local

hostname R1
## end basic config

## IPv4/v6 addressing
ipv6 unicast-routing

int lo 0
ip address 1.1.1.1 255.255.255.255
ipv6 ena
ipv6 address fc00:1::1/64

int ser 0/0/1
desc R1 - R3
ip address 10.0.13.1 255.255.255.0
ipv6 ena
ipv6 addr fe80::1 link-local
ipv6 addr 2001:cc1e:13::1/64
no shut
## end IPv4/v6 addressing

## IPv4/v6 routing
router ospf 1
router-id 1.1.1.1

int lo 0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point

int ser 0/0/1
ip ospf 1 area 0
ipv6 ospf 1 area 0
## end IPv4/v6 routing
#################### end R1

#################### R2
## basic config
ena
conf t
no ip domain-look
username cisco priv 15 pass 0 cisco

line con 0
loggin syn
no exec-time
line vty 0 5
login local

hostname R2
## end basic config


## IPv4/v6 addressing
ipv6 unicast-routing

int lo 0
ip address 2.2.2.2 255.255.255.255
ipv6 ena
ipv6 address fc00:2::2/64

int ser 0/1/0
desc R2 - R3
ip address 10.0.23.2 255.255.255.0
ipv6 ena
ipv6 addr fe80::2 link-local
ipv6 addr 2001:cc1e:23::2/64
no shut
## end IPv4/v6 addressing

## IPv4/v6 routing
router ospf 1
router-id 2.2.2.2

int lo 0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point

int ser 0/1/0
ip ospf 1 area 0
ipv6 ospf 1 area 0
## end IPv4/v6 routing
#################### end R2

#################### R3
## basic config
ena
conf t
no ip domain-look
username cisco priv 15 pass 0 cisco

line con 0
loggin syn
no exec-time
line vty 0 5
login local

hostname R3
## end basic config

## IPv4/v6 addressing
ipv6 unicast-routing

int lo 0
ip address 3.3.3.3 255.255.255.255
ipv6 ena
ipv6 address fc00:3::3/64

int ser 0/0/0
desc R3 - R5 Frame Relay dlci 305
encap frame
ip address 10.0.35.3 255.255.255.0
ipv6 ena
ipv6 addr fe80::3 link-local
ipv6 addr 2001:cc1e:35::3/64
frame map ipv6 fe80::5 305 broad
frame map ipv6 2001:cc1e:35::5 305 broad
no shut

int ser 0/0/0.1 point-to-point
desc R3 - R6 Frame Relay p2p
frame interface-dlci 306
ip address 10.0.36.3 255.255.255.0
ipv6 ena
ipv6 addr fe80::3 link-local
ipv6 addr 2001:cc1e:36::3/64
no shut

int ser 0/1/0
desc R3 - R1
ip address 10.0.13.3 255.255.255.0
ipv6 ena
ipv6 addr fe80::3 link-local
ipv6 addr 2001:cc1e:13::3/64
clock rate 128000
no shut

int ser 0/1/1
desc R3 - R2
ip address 10.0.23.3 255.255.255.0
ipv6 ena
ipv6 addr fe80::3 link-local
ipv6 addr 2001:cc1e:23::3/64
clock rate 128000
no shut
## end IPv4/v6 addressing

## IPv4/v6 routing
router ospf 1
router-id 3.3.3.3

int lo 0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point

int ser 0/1/0
ip ospf 1 area 0
ipv6 ospf 1 area 0

int ser 0/1/1
ip ospf 1 area 0
ipv6 ospf 1 area 0

int ser 0/0/0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point
ipv6 ospf net point-to-point

int ser 0/0/0.1
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point
ipv6 ospf net point-to-point
## end IPv4/v6 routing
#################### end R3

#################### R4
## basic config
ena
conf t
no ip domain-look
username cisco priv 15 pass 0 cisco

line con 0
loggin syn
no exec-time
line vty 0 5
login local

hostname R4
## end basic config

## IPv4/v6 addressing
ipv6 unicast-routing

int lo 0
ip address 4.4.4.4 255.255.255.255
ipv6 ena
ipv6 address fc00:4::4/64

int ser 0/1/0
desc R4 - R5
ip address 10.0.45.4 255.255.255.0
ipv6 ena
ipv6 addr fe80::4 link-local
ipv6 addr 2001:cc1e:45::4/64
no shut
## end IPv4/v6 addressing

## IPv4/v6 routing
router ospf 1
router-id 4.4.4.4

int lo 0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point

int ser 0/1/0
ip ospf 1 area 0
ipv6 ospf 1 area 0
## end IPv4/v6 routing
#################### end R4

#################### R5
## basic config
ena
conf t
no ip domain-look
username cisco priv 15 pass 0 cisco

line con 0
loggin syn
no exec-time
line vty 0 5
login local

hostname R5
## end basic config


## IPv4/v6 addressing
ipv6 unicast-routing

int lo 0
ip address 5.5.5.5 255.255.255.255
ipv6 ena
ipv6 address fc00:5::5/64

int ser 0/1/0
desc R5 - R4
ip address 10.0.45.5 255.255.255.0
ipv6 ena
ipv6 addr fe80::5 link-local
ipv6 addr 2001:cc1e:45::5/64
clock rate 128000
no shut

int ser 0/0/0
desc R5 - R3 Frame Relay dlci 503
encap frame
ip address 10.0.35.5 255.255.255.0
ipv6 ena
ipv6 addr fe80::5 link-local
ipv6 address 2001:cc1e:35::5/64
frame map ipv6 fe80::3 503 broad
frame map ipv6 2001:cc1e:35::3 503 broad
no shut
## end IPv4/v6 addressing

## IPv4/v6 routing
router ospf 1
router-id 5.5.5.5

int lo 0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point

int ser 0/1/0
ip ospf 1 area 0
ipv6 ospf 1 area 0

int ser 0/0/0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point
ipv6 ospf net point-to-point
## end IPv4/v6 routing
#################### end R5

#################### R6
## basic config
ena
conf t
no ip domain-look
username cisco priv 15 pass 0 cisco

line con 0
loggin syn
no exec-time
line vty 0 5
login local

hostname R6
## end basic config

## IPv4/v6 addressing
ipv6 unicast-routing

int lo 0
ip address 6.6.6.6 255.255.255.255
ipv6 ena
ipv6 address fc00:6::6/64

int ser 0/0/0
desc R6 - R3 frame relay dlci 603
encap frame
ip address 10.0.36.6 255.255.255.0
ipv6 ena
ipv6 addr fe80::6 link-local
ipv6 addr 2001:cc1e:36::6/64
frame-relay map ipv6 2001:CC1E:36::3 603 broadcast
frame-relay map ipv6 FE80::3 603 broadcast
frame-relay map ip 10.0.36.3 603 broadcast
no shut
## end IPv4/v6 addressing

## IPv4/v6 routing
router ospf 1
router-id 6.6.6.6

int lo 0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point

int ser 0/0/0
ip ospf 1 area 0
ipv6 ospf 1 area 0
ip ospf net point-to-point
ipv6 ospf net point-to-point
## end IPv4/v6 routing
#################### end R6



After this initial config, and obviously after ensuring you have full reachability (that evil Inverse ARP, you know...), we can start configuring the different multicast modes. Let's start...
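By the way, to rule out Inverse ARP surprises before moving on, a quick sanity check of the Frame Relay mappings never hurts (just a sketch, output omitted; exact fields vary by IOS version):

!-- on R3, for example:
R3# show frame-relay map
R3# show frame-relay pvc
R3# ping 10.0.35.5
R3# ping 2001:CC1E:35::5

You should see the static maps for DLCIs 305/306 and the PVCs in ACTIVE state before doing any multicast work.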


IPv4 PIM DENSE-MODE:
It's the most basic mode: it floods multicast traffic everywhere and relies on prune messages to cut off branches without receivers; by default the pruned state expires after 3 minutes and the flood starts again. We have to configure it on every interface that must carry multicast:


## R1 dense mode multicast
ip multicast-routing

int ser 0/0/1
ip pim dense-mode

## end R1 dense mode multicast

## R2 dense mode multicast
ip multicast-routing

int ser 0/1/0
ip pim dense-mode
## end R2 dense mode multicast

## R3 dense mode multicast
ip multicast-routing

int ser 0/0/0
ip pim dense-mode

int ser 0/0/0.1
ip pim dense-mode

int ser 0/1/0
ip pim dense-mode

int ser 0/1/1
ip pim dense-mode
## end R3 dense mode multicast

## R4 dense mode multicast
ip multicast-routing

int ser 0/1/0
ip pim dense-mode

## end R4 dense mode multicast

## R5 dense mode multicast
ip multicast-routing

int ser 0/0/0
ip pim dense-mode

int ser 0/1/0
ip pim dense-mode
## end R5 dense mode multicast

## R6 dense mode multicast
ip multicast-routing

int ser 0/0/0
ip pim dense-mode

## end R6 dense mode multicast
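Before testing, it's worth checking that the PIM adjacencies actually came up on every dense-mode interface (again a sketch, output omitted since it varies by platform):

!-- on R3, the hub of the topology:
R3# show ip pim interface
R3# show ip pim neighbor
R3# show ip mroute summary

On R3 you should see four dense-mode interfaces and the neighbors 10.0.13.1, 10.0.23.2, 10.0.35.5 and 10.0.36.6.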


Well done. Now, how do we test it?
We can join a multicast group on R1's loopback and generate traffic with a simple ping from R4.
We expect that:
-R1 will receive the multicast traffic and reply to the ping
-R2/R6 will initially receive the multicast traffic too, since it's the initial flood; then R2/R6 will send a PRUNE message for that multicast group to R3, their upstream neighbor.

First, enable debug ip packet on R2 to see the initial flood, and debug ip pim on R3 to see the prune messages:

!-- on R2
R2(config)#access-list 100 permit ip any host 239.1.1.2
R2(config)#do debug ip pack 100 detail
IP packet debugging is on (detailed) for access list 100
!-- on R3:
R3(config-if)#do debug ip pim
PIM debugging is on
R3(config-if)#


Then Join a group on R1 and send a ping from R4:

!-- on R1:
R1(config)#int lo 0
R1(config-if)#ip igmp join-group 239.1.1.2

!-- on R4:
R4(config-if)#do ping 239.1.1.2

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:

Reply to request 0 from 10.0.13.1, 64 ms
R4(config-if)#


On R2 you will see the initial flooding: R2 has received the traffic although it hasn't joined that group:

R2(config)#
*Mar 26 20:37:52.282: IP: s=4.4.4.4 (Serial0/1/0), d=239.1.1.2, len 100, input feature
*Mar 26 20:37:52.282: ICMP type=8, code=0, MCI Check(59), rtype 0, forus FALSE, sendself FALSE, mtu 0
*Mar 26 20:37:52.282: FIBipv4-packet-proc: route packet from Serial0/1/0 src 4.4.4.4 dst 239.1.1.2
*Mar 26 20:37:52.282: FIBfwd-proc: Default:224.0.0.0/4 multicast entry
*Mar 26 20:37:52.282: FIBipv4-packet-proc: packet routing failed


On R3 you will see the PIM prune messages from R2 and R6; each prune is sent twice, just to make sure it is received in case of congestion:

R3(config-if)#
*Mar 26 20:47:29.974: PIM(0): Received v2 Join/Prune on Serial0/1/1 from 10.0.23.2, to us
*Mar 26 20:47:29.974: PIM(0): Prune-list: (4.4.4.4/32, 239.1.1.2)
*Mar 26 20:47:29.974: PIM(0): Prune Serial0/1/1/239.1.1.2 from (4.4.4.4/32, 239.1.1.2)
*Mar 26 20:47:30.174: PIM(0): Received v2 Join/Prune on Serial0/0/0.1 from 10.0.36.6, to us
*Mar 26 20:47:30.174: PIM(0): Prune-list: (4.4.4.4/32, 239.1.1.2)
*Mar 26 20:47:30.174: PIM(0): Prune Serial0/0/0.1/239.1.1.2 from (4.4.4.4/32, 239.1.1.2)
*Mar 26 20:47:31.074: PIM(0): Received v2 Join/Prune on Serial0/0/0.1 from 10.0.36.6, to us
*Mar 26 20:47:31.074: PIM(0): Prune-list: (4.4.4.4/32, 239.1.1.2)
*Mar 26 20:47:36.974: PIM(0): Received v2 Join/Prune on Serial0/1/1 from 10.0.23.2, to us
*Mar 26 20:47:36.974: PIM(0): Prune-list: (4.4.4.4/32, 239.1.1.2)
R3(config-if)#


Ok, what if R2's loopback joins the same group now?
Well, in dense mode there is no explicit join: R2 won't send any PIM message to inform R3 of the newly joined group (a Graft would only be sent if R2 still held pruned (S,G) state for an active source), so it has to wait for the next flood to receive the traffic: up to 3 minutes by default.

!-- on R2:
R2(config)#int lo 0
R2(config-if)#ip igmp join-group 239.1.1.2

!-- on R4:
R4(config-if)#do ping 239.1.1.2

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:

Reply to request 0 from 10.0.13.1, 60 ms
R4(config-if)#

So, no response from R2. Let's wait a couple of minutes...

R4(config-if)#do ping 239.1.1.2

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:

Reply to request 0 from 10.0.13.1, 60 ms
Reply to request 0 from 10.0.23.2, 64 ms
R4(config-if)#
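Now that R2 replies too, you can double-check the local membership and the mroute flags on R2 (just a sketch, output omitted):

!-- on R2:
R2# show ip igmp groups 239.1.1.2
R2# show ip mroute 239.1.1.2

The (*, 239.1.1.2) entry on R2 should now carry the L (Local) flag, and typically C (Connected) too, since the router itself joined the group on Loopback0 with the igmp join-group command.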


Let's take a closer look at the multicast routing table of R3:

R3(config-if)#do sh ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group,
V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.2), 00:00:03/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/1, Forward/Dense, 00:00:03/00:00:00
Serial0/1/0, Forward/Dense, 00:00:03/00:00:00
Serial0/0/0.1, Forward/Dense, 00:00:03/00:00:00
Serial0/0/0, Forward/Dense, 00:00:03/00:00:00

(4.4.4.4, 239.1.1.2), 00:00:04/00:02:55, flags: T
Incoming interface: Serial0/0/0, RPF nbr 10.0.35.5
Outgoing interface list:
Serial0/0/0.1, Prune/Dense, 00:00:04/00:02:55
Serial0/1/0, Forward/Dense, 00:00:04/00:00:00
Serial0/1/1, Forward/Dense, 00:00:04/00:00:00

The first entry, the (*,G), is not used by PIM-DM, and that's why the incoming interface of this entry is always Null. (ref: Doyle, Carroll, "Routing TCP/IP vol. II", Cisco Press, page 535)

The second entry, the (S,G), is the only one actually used by PIM-DM. Note:
-the flag "T" (SPT-bit set) means that traffic has been forwarded along the shortest-path tree; the absence of the "C" or "L" flags tells us R3 has no local interface with hosts that joined the group
-the interface Serial0/0/0.1 is in Prune state since R6 has sent a prune message to R3
-the default timeout for pruned interface entries is 180 seconds, 3 minutes; when that timer expires, the pruned entry is removed and the flood starts again

Ok, what if I add a link between R4 and R3 through the Frame Relay cloud?
Let's try it... making sure that R4 generates continuous traffic destined to the 239.1.1.2 group:

## on R4
int ser 0/0/0
shut
encapsulation frame-relay
int ser 0/0/0.1 point
frame-relay interface-dlci 403
ip address 10.0.34.4 255.255.255.0
ip ospf 1 area 0
ipv6 enable
ipv6 address fe80::4 link
ipv6 address 2001:cc1e:34::4/64
ipv6 ospf 1 area 0
ip pim dense-mode
exit

ip sla 1
icmp-echo 239.1.1.2 source-interface Loopback0
request-data-size 1450
timeout 2000
threshold 3000
frequency 2
ip sla schedule 1 life forever start-time now
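The IP SLA probe above makes R4 ping the group every 2 seconds with 1450-byte packets, so the (S,G) state stays alive everywhere. To check that the probe is really running (sketch, output omitted; on older IOS releases the equivalent commands live under "show ip sla monitor" instead):

!-- on R4:
R4# show ip sla configuration 1
R4# show ip sla statistics 1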


## on R3
int ser 0/0/0.2 point
frame-relay interface-dlci 304
ip address 10.0.34.3 255.255.255.0
ip ospf 1 area 0
ipv6 enable
ipv6 address fe80::3 link
ipv6 address 2001:cc1e:34::3/64
ipv6 ospf 1 area 0
ip pim dense-mode
exit

Here I have left Serial0/0/0 on R4 in shutdown, just to have time to enable some debugs and watch what happens on the SPT when R3 finds a better route to R4.

So let's enable a couple of debugs on R3 and type a no shut on R4's Serial0/0/0:

R3#debug ip mrouting
IP multicast routing debugging is on
R3#debug ip routing
IP routing debugging is on
R3#


R4(config-subif)# int ser 0/0/0
R4(config-if)#no shutdown
R4(config-if)#
*Mar 29 21:31:17.581: %LINK-3-UPDOWN: Interface Serial0/0/0, changed state to up
*Mar 29 21:31:18.581: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0/0, changed state to up
*Mar 29 21:31:37.717: %PIM-5-NBRCHG: neighbor 10.0.34.3 UP on interface Serial0/0/0.1
*Mar 29 21:31:38.409: %OSPF-5-ADJCHG: Process 1, Nbr 3.3.3.3 on Serial0/0/0.1 from LOADING to FULL, Loading Done
*Mar 29 21:31:39.621: %OSPFv3-5-ADJCHG: Process 1, Nbr 3.3.3.3 on Serial0/0/0.1 from LOADING to FULL, Loading Done
R4(config-if)#


After a little while, on R3:

*Mar 29 21:37:38.021: RT: add 10.0.34.0/24 via 10.0.35.5, ospf metric [110/192]
*Mar 29 21:37:38.021: RT: NET-RED 10.0.34.0/24
*Mar 29 21:37:45.521: PIM(*): Frame-relay DLCI active now on Serial0/0/0.2
*Mar 29 21:37:45.521: RT: is_up: Serial0/0/0.2 1 state: 4 sub state: 1 line: 0 has_route: False
*Mar 29 21:37:45.521: RT: closer admin distance for 10.0.34.0, flushing 1 routes
*Mar 29 21:37:45.521: RT: NET-RED 10.0.34.0/24
*Mar 29 21:37:45.525: RT: add 10.0.34.0/24 via 0.0.0.0, connected metric [0/0]
*Mar 29 21:37:45.525: RT: NET-RED 10.0.34.0/24
*Mar 29 21:37:45.525: RT: interface Serial0/0/0.2 added to routing table
*Mar 29 21:37:52.517: %PIM-5-NBRCHG: neighbor 10.0.34.4 UP on interface Serial0/0/0.2
*Mar 29 21:37:52.517: MRT(0): WAVL Insert interface: Serial0/0/0.2 in (4.4.4.4,239.1.1.2) Successful
*Mar 29 21:37:52.521: MRT(0): set min mtu for (4.4.4.4, 239.1.1.2) 1500->1500
*Mar 29 21:37:52.521: MRT(0): Add Serial0/0/0.2/239.1.1.2 to the olist of (4.4.4.4, 239.1.1.2), Forward state - MAC not built
*Mar 29 21:37:52.521: PIM(0): Add Serial0/0/0.2/10.0.34.4 to (4.4.4.4, 239.1.1.2), Forward state, by PIM *G Join
*Mar 29 21:37:52.521: MRT(0): Add Serial0/0/0.2/239.1.1.2 to the olist of (4.4.4.4, 239.1.1.2), Forward state - MAC not built
*Mar 29 21:37:52.521: MRT(0): WAVL Insert interface: Serial0/0/0.2 in (* ,239.1.1.2) Successful
*Mar 29 21:37:52.521: MRT(0): set min mtu for (0.0.0.0, 239.1.1.2) 1500->1500
*Mar 29 21:37:52.521: MRT(0): Add Serial0/0/0.2/239.1.1.2 to the olist of (*, 239.1.1.2), Forward state - MAC not built
*Mar 29 21:37:52.521: PIM(0): Add Serial0/0/0.2/10.0.34.4 to (*, 239.1.1.2), Forward state, by PIM *G Join
*Mar 29 21:37:52.521: MRT(0): Add Serial0/0/0.2/239.1.1.2 to the olist of (*, 239.1.1.2), Forward state - MAC not built
*Mar 29 21:37:52.525: MRT(0): WAVL Insert interface: Serial0/0/0.2 in (* ,224.0.1.40) Successful
*Mar 29 21:37:52.529: MRT(0): set min mtu for (0.0.0.0, 224.0.1.40) 1500->1500
*Mar 29 21:37:52.529: MRT(0): Add Serial0/0/0.2/224.0.1.40 to the olist of (*, 224.0.1.40), Forward state - MAC not built
*Mar 29 21:37:52.529: PIM(0): Add Serial0/0/0.2/10.0.34.4 to (*, 224.0.1.40), Forward state, by PIM *G Join
*Mar 29 21:37:52.529: MRT(0): Add Serial0/0/0.2/224.0.1.40 to the olist of (*, 224.0.1.40), Forward state - MAC not built
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 224.0.1.40, Serial0/0/0.2: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 224.0.1.40, Serial0/0/0.1: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 224.0.1.40, Serial0/1/1: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 224.0.1.40, Serial0/1/0: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 224.0.1.40, Serial0/0/0: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/0/0.2: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/0/0.1: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/1/1: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/1/0: no entries
*Mar 29 21:37:52.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/0/0: no entries
*Mar 29 21:37:52.789: PIM(0): Send v2 Assert on Serial0/0/0.2 for 239.1.1.2, source 4.4.4.4, metric [110/129]
*Mar 29 21:37:52.789: PIM(0): Assert metric to source 4.4.4.4 is [110/129]
*Mar 29 21:37:52.793: PIM(0): We win, our metric [110/129]
*Mar 29 21:37:52.793: PIM(0): Prune Serial0/0/0.2/239.1.1.2 from (4.4.4.4/32, 239.1.1.2)
*Mar 29 21:37:52.793: PIM(0): Pruning immediately Serial0/0/0.2 (p2p)
*Mar 29 21:37:53.101: %OSPF-5-ADJCHG: Process 1, Nbr 4.4.4.4 on Serial0/0/0.2 from LOADING to FULL, Loading Done
*Mar 29 21:37:54.397: %OSPFv3-5-ADJCHG: Process 1, Nbr 4.4.4.4 on Serial0/0/0.2 from LOADING to FULL, Loading Done
*Mar 29 21:37:54.789: PIM(0): Send v2 Assert on Serial0/0/0.2 for 239.1.1.2, source 4.4.4.4, metric [110/129]
*Mar 29 21:37:55.893: PIM(0): Received v2 Assert on Serial0/0/0.2 from 10.0.34.4
*Mar 29 21:37:55.893: PIM(0): Assert metric to source 4.4.4.4 is [0/0]
*Mar 29 21:37:55.893: PIM(0): We lose, our metric [110/129]
*Mar 29 21:37:55.893: PIM(0): Insert (4.4.4.4,239.1.1.2) prune in nbr 10.0.34.4's queue
*Mar 29 21:37:55.893: PIM(0): Send (4.4.4.4, 239.1.1.2) PIM-DM prune to oif Serial0/0/0.2 in Prune state
*Mar 29 21:37:55.893: PIM(0): (4.4.4.4/32, 239.1.1.2) oif Serial0/0/0.2 in Prune state
*Mar 29 21:37:55.893: PIM(0): Building Join/Prune packet for nbr 10.0.34.4
*Mar 29 21:37:55.893: PIM(0): Adding v2 (4.4.4.4/32, 239.1.1.2) Prune
*Mar 29 21:37:55.893: PIM(0): Send v2 join/prune to 10.0.34.4 (Serial0/0/0.2)
*Mar 29 21:37:58.021: RT: del 4.4.4.4/32 via 10.0.35.5, ospf metric [110/129]
*Mar 29 21:37:58.021: RT: add 4.4.4.4/32 via 10.0.34.4, ospf metric [110/65]

*Mar 29 21:37:58.021: RT: NET-RED 4.4.4.4/32
*Mar 29 21:37:58.021: RT: add 10.0.45.0/24 via 10.0.34.4, ospf metric [110/128]
*Mar 29 21:37:58.021: RT: NET-RED 10.0.45.0/24
*Mar 29 21:37:58.489: MRT(0): Delete Serial0/0/0.2/239.1.1.2 from the olist of (4.4.4.4, 239.1.1.2)
*Mar 29 21:37:58.489: MRT(0): (4.4.4.4,239.1.1.2), RPF change from Serial0/0/0/10.0.35.5 to Serial0/0/0.2/10.0.34.4
*Mar 29 21:37:58.489: MRT(0): WAVL Insert interface: Serial0/0/0 in (4.4.4.4,239.1.1.2) Successful
*Mar 29 21:37:58.489: MRT(0): set min mtu for (4.4.4.4, 239.1.1.2) 1500->1500
*Mar 29 21:37:58.489: MRT(0): Add Serial0/0/0/239.1.1.2 to the olist of (4.4.4.4, 239.1.1.2), Forward state - MAC not built
*Mar 29 21:37:58.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/0/0.2:
4.4.4.4/32 count 1
*Mar 29 21:37:58.585: PIM(0): Send v2 Graft to 10.0.34.4 (Serial0/0/0.2)

*Mar 29 21:37:58.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/0/0.1: no entries
*Mar 29 21:37:58.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/1/1: no entries
*Mar 29 21:37:58.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/1/0: no entries
*Mar 29 21:37:58.585: PIM(0): Building Graft message for 239.1.1.2, Serial0/0/0: no entries
*Mar 29 21:37:58.597: PIM(0): Received v2 Graft-Ack on Serial0/0/0.2 from 10.0.34.4
*Mar 29 21:37:58.597: Group 239.1.1.2:
4.4.4.4/32

*Mar 29 21:37:58.881: PIM(0): Send v2 Assert on Serial0/0/0 for 239.1.1.2, source 4.4.4.4, metric [110/65]
*Mar 29 21:37:58.881: PIM(0): Assert metric to source 4.4.4.4 is [110/65]
*Mar 29 21:37:58.885: PIM(0): We win, our metric [110/65]
*Mar 29 21:37:58.885: PIM(0): Schedule to prune Serial0/0/0
*Mar 29 21:37:58.885: PIM(0): (4.4.4.4/32, 239.1.1.2) oif Serial0/0/0 in Forward state
*Mar 29 21:37:58.917: PIM(0): Received v2 Assert on Serial0/0/0 from 10.0.35.5
*Mar 29 21:37:58.917: PIM(0): Assert metric to source 4.4.4.4 is [110/65]
*Mar 29 21:37:58.917: PIM(0): We lose, our metric [110/65]
*Mar 29 21:37:58.917: PIM(0): Prune Serial0/0/0/239.1.1.2 from (4.4.4.4/32, 239.1.1.2)
*Mar 29 21:37:58.917: PIM(0): Insert (4.4.4.4,239.1.1.2) prune in nbr 10.0.35.5's queue
*Mar 29 21:37:58.917: PIM(0): Send (4.4.4.4, 239.1.1.2) PIM-DM prune to oif Serial0/0/0 in Prune state
*Mar 29 21:37:58.917: PIM(0): (4.4.4.4/32, 239.1.1.2) oif Serial0/0/0 in Prune state
*Mar 29 21:37:58.917: PIM(0): Building Join/Prune packet for nbr 10.0.35.5
*Mar 29 21:37:58.921: PIM(0): Adding v2 (4.4.4.4/32, 239.1.1.2) Prune
*Mar 29 21:37:58.921: PIM(0): Send v2 join/prune to 10.0.35.5 (Serial0/0/0)

*Mar 29 21:37:59.917: PIM(0): Received v2 Assert on Serial0/0/0 from 10.0.35.5
*Mar 29 21:37:59.917: PIM(0): Assert metric to source 4.4.4.4 is [110/65]
*Mar 29 21:37:59.917: PIM(0): We lose, our metric [110/65]
*Mar 29 21:37:59.917: PIM(0): Insert (4.4.4.4,239.1.1.2) prune in nbr 10.0.35.5's queue
*Mar 29 21:37:59.917: PIM(0): Send (4.4.4.4, 239.1.1.2) PIM-DM prune to oif Serial0/0/0 in Prune state
*Mar 29 21:37:59.917: PIM(0): (4.4.4.4/32, 239.1.1.2) oif Serial0/0/0 in Prune state
*Mar 29 21:37:59.917: PIM(0): Building Join/Prune packet for nbr 10.0.35.5
*Mar 29 21:37:59.917: PIM(0): Adding v2 (4.4.4.4/32, 239.1.1.2) Prune
*Mar 29 21:37:59.921: PIM(0): Send v2 join/prune to 10.0.35.5 (Serial0/0/0)


Ok, from this long debug we can read something like:
-the interface comes up
-R3 builds its PIM adjacency with R4 and sends a prune message, because the best route to reach R4 is still through R5
-after OSPF convergence, the best route to R4 is through the new p2p Frame Relay link
-R3 sends a GRAFT message for our group 239.1.1.2 to R4, in order to receive the traffic on that interface
-R3 sends a PRUNE message for group 239.1.1.2 to R5, telling it to stop sending traffic for that group (since R3 now has a better route to the source...)
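To confirm the final state after all these asserts, grafts and prunes, check the RPF information and the mroute entry on R3 (a sketch; your uptimes and timers will differ):

!-- on R3:
R3# show ip rpf 4.4.4.4
R3# show ip mroute 239.1.1.2

The incoming interface of (4.4.4.4, 239.1.1.2) should now be Serial0/0/0.2 with RPF neighbor 10.0.34.4, matching the RPF change seen in the debug.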


Enough for today... the next posts will continue with this topology using PIM sparse-dense-mode and PIM-SM, then I'll study the IPv6 part.

Marco
