Sunday, July 26, 2009

Sunday's tip: disabling DTP on access ports

Hi all,
this Sunday morning I was changing the spanning-tree mode on some switches (from PVST to Rapid PVST) when I noticed that DTP was enabled on several access ports.

Remember that DTP (Dynamic Trunking Protocol) is used to negotiate trunks between switches, so it's not a good idea to leave it enabled on access ports, especially access ports in public places...
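As a side note, the classic DTP mode matrix can be sketched in a few lines of Python (my own illustration, not a Cisco tool): a trunk forms when at least one side actively negotiates ("dynamic desirable" or static trunk) and the other side is willing.

```python
# Quick sketch (my own, not a Cisco tool) of the classic DTP mode matrix.
# Modes: "access", "trunk", "dynamic desirable", "dynamic auto".
def forms_trunk(side_a, side_b):
    """Return True if two switchports would negotiate a trunk."""
    willing = {"trunk", "dynamic desirable"}            # actively asks for a trunk
    agreeable = {"trunk", "dynamic desirable", "dynamic auto"}
    if "access" in (side_a, side_b):
        return False                                    # access mode never trunks
    return (side_a in willing and side_b in agreeable) or \
           (side_b in willing and side_a in agreeable)

# A 2950 access port with no explicit mode defaults to "dynamic desirable",
# so an attacker sending DTP in desirable (or static trunk) mode would get a trunk:
print(forms_trunk("dynamic desirable", "dynamic desirable"))  # True
print(forms_trunk("dynamic auto", "dynamic auto"))            # False
```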

This is the configuration I've found:

2950_1#sh ver | inc IOS
IOS (tm) C2950 Software (C2950-I6Q4L2-M), Version 12.1(22)EA4, RELEASE SOFTWARE (fc1)

2950_1#sh run int fa 0/10 | beg int
interface FastEthernet0/10
switchport access vlan 55
spanning-tree portfast

Nothing strange here: the port is working in access mode (see "Operational Mode" below), but without the "switchport mode access" command, DTP is still enabled on the port:

2950_1#sh dtp int fa 0/10
DTP information for FastEthernet0/10:
TOS/TAS/TNS: ACCESS/DESIRABLE/ACCESS
TOT/TAT/TNT: NATIVE/802.1Q/802.1Q
Neighbor address 1: 000000000000
Neighbor address 2: 000000000000
Hello timer expiration (sec/state): 11/RUNNING
Access timer expiration (sec/state): never/STOPPED
Negotiation timer expiration (sec/state): never/STOPPED
Multidrop timer expiration (sec/state): never/STOPPED
FSM state: S2:ACCESS
# times multi & trunk 0
Enabled: yes
In STP: no

Statistics
----------
0 packets received (0 good)
0 packets dropped
0 nonegotiate, 0 bad version, 0 domain mismatches,
0 bad TLVs, 0 bad TAS, 0 bad TAT, 0 bad TOT, 0 other
857578 packets output (857578 good)
428789 native, 428789 software encap isl, 0 isl hardware native
0 output errors
0 trunk timeouts
20 link ups, last link up on Fri Feb 27 2009, 13:15:09
19 link downs, last link down on Fri Feb 27 2009, 13:15:07

2950_1#sh int fa 0/10 switchport
Name: Fa0/10
Switchport: Enabled
Administrative Mode: dynamic desirable
Operational Mode: static access
Administrative Trunking Encapsulation: dot1q
Operational Trunking Encapsulation: native
Negotiation of Trunking: On
Access Mode VLAN: 55 (Ingresso)
Trunking Native Mode VLAN: 1 (default)
Voice VLAN: none
Administrative private-vlan host-association: none
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk encapsulation: dot1q
Administrative private-vlan trunk normal VLANs: none
Administrative private-vlan trunk private VLANs: none
Operational private-vlan: none
Trunking VLANs Enabled: ALL
Pruning VLANs Enabled: 2-1001
Capture Mode Disabled
Capture VLANs Allowed: ALL
Protected: false
Appliance trust: none


So a malicious user could see the DTP hellos coming out of that port and try to negotiate a trunk.
To avoid this, let's disable DTP by putting the port in static access mode:


2950_1#sh run int fa 0/10 | beg int
interface FastEthernet0/10
switchport access vlan 55
switchport mode access
spanning-tree portfast
end


Now DTP should be disabled; let's check:

2950_1#sh dtp int fa 0/10
DTP information for FastEthernet0/10:
TOS/TAS/TNS: ACCESS/OFF/ACCESS

TOT/TAT/TNT: NATIVE/802.1Q/NATIVE
Neighbor address 1: 000000000000
Neighbor address 2: 000000000000
Hello timer expiration (sec/state): never/STOPPED
Access timer expiration (sec/state): never/STOPPED
Negotiation timer expiration (sec/state): never/STOPPED
Multidrop timer expiration (sec/state): never/STOPPED
FSM state: S1:OFF
# times multi & trunk 0
Enabled: no
In STP: no

Statistics
----------
0 packets received (0 good)
0 packets dropped
0 nonegotiate, 0 bad version, 0 domain mismatches,
0 bad TLVs, 0 bad TAS, 0 bad TAT, 0 bad TOT, 0 other
0 packets output (0 good)
0 native, 0 software encap isl, 0 isl hardware native
0 output errors
0 trunk timeouts
20 link ups, last link up on Fri Feb 27 2009, 13:15:09
20 link downs, last link down on Sun Jul 26 2009, 12:12:22

2950_1#sh int fa 0/10 switchport
Name: Fa0/10
Switchport: Enabled
Administrative Mode: static access
Operational Mode: static access

Administrative Trunking Encapsulation: dot1q
Operational Trunking Encapsulation: native
Negotiation of Trunking: Off
Access Mode VLAN: 55 (Ingresso)
Trunking Native Mode VLAN: 1 (default)
Voice VLAN: none
Administrative private-vlan host-association: none
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk encapsulation: dot1q
Administrative private-vlan trunk normal VLANs: none
Administrative private-vlan trunk private VLANs: none
Operational private-vlan: none
Trunking VLANs Enabled: ALL
Pruning VLANs Enabled: 2-1001
Capture Mode Disabled
Capture VLANs Allowed: ALL
Protected: false
Appliance trust: none


Well done: now the port mode is "static access" and no negotiation takes place. (On ports that must remain trunks, "switchport nonegotiate" disables DTP instead.)

Wednesday, July 15, 2009

Monitoring the 6500 switch fabric, trying to understand the switching mode

Hi all, last week I had an informal meeting (also known as a beer) with Nicola Modena, CCIE #19119 (thanks buddy)!

During the long and interesting conversation, he asked me about the utilization of my 6500s...
I answered something like "CPU is under 5%", and he said "well, but traffic isn't process switched; what about the fabric?"

Good point Nic!

So today I've read the "Configuring a Supervisor Engine 720" section of the Catalyst 6500 Release 12.2SXF and Rebuilds Software Configuration Guide...

...and tried to understand the various show fabric outputs.

Here is my show module and show fabric:
6509# sh module 
Mod Ports Card Type Model Serial No.
--- ----- -------------------------------------- ------------------ -----------
1 24 CEF720 24 port 1000mb SFP WS-X6724-SFP Serials omitted
2 10 WiSM WLAN Service Module WS-SVC-WISM-1-K9 S..
3 48 CEF720 48 port 10/100/1000mb Ethernet WS-X6748-GE-TX S..
5 2 Supervisor Engine 720 (Active) WS-SUP720-3B S..
6 2 Supervisor Engine 720 (Hot) WS-SUP720-3B S..

Mod MAC addresses Hw Fw Sw Status
--- ---------------------------------- ------ ------------ ------------ -------
1 001b.d4ec.7860 to 001b.d4ec.7877 2.6 12.2(14r)S5 12.2(18)SXF9 Ok
2 001c.5843.7cb0 to 001c.5843.7cbf 2.0 12.2(14r)S5 12.2(18)SXF9 Ok
3 001c.587b.20a0 to 001c.587b.20cf 2.6 12.2(14r)S5 12.2(18)SXF9 Ok
5 0019.e7d3.a2ac to 0019.e7d3.a2af 5.4 8.4(2) 12.2(18)SXF9 Ok
6 001a.2f3b.f80c to 001a.2f3b.f80f 5.4 8.4(2) 12.2(18)SXF9 Ok

Mod Sub-Module Model Serial Hw Status
---- --------------------------- ------------------ ----------- ------- -------
1 Centralized Forwarding Card WS-F6700-CFC S.. 3.1 Ok
2 Centralized Forwarding Card WS-SVC-WISM-1-K9-D S.. 2.0 Ok
3 Centralized Forwarding Card WS-F6700-CFC S.. 3.1 Ok
5 Policy Feature Card 3 WS-F6K-PFC3B S.. 2.3 Ok
5 MSFC3 Daughterboard WS-SUP720 S.. 3.0 Ok
6 Policy Feature Card 3 WS-F6K-PFC3B S.. 2.3 Ok
6 MSFC3 Daughterboard WS-SUP720 S.. 3.0 Ok

Mod Online Diag Status
---- -------------------
1 Pass
2 Pass
3 Pass
5 Pass
6 Pass
6509#
6509#sh fabric
show fabric active:
Active fabric card in slot 5
Backup fabric card in slot 6

show fabric mode:
Global switching mode is Compact
dCEF mode is not enforced for system to operate
Fabric module is not required for system to operate
Modules are allowed to operate in bus mode
Truncated mode is allowed, due to presence of CEF720, Standby supervisor module

Module Slot Switching Mode
1 Crossbar
2 Crossbar
3 Crossbar
5 dCEF
6 Crossbar

show fabric congestion management:
Fabric clear-block is off.

show fabric status all:
slot channel speed module fabric
status status
1 0 20G OK OK
2 1 N/A OK OK
3 0 20G OK OK
3 1 20G OK OK
5 0 20G OK OK
6 0 20G OK OK

show fabric utilization all:
slot channel speed Ingress % Egress %
1 0 20G 1 1
2 1 8G 0 0
3 0 20G 0 0
3 1 20G 0 1
5 0 20G 0 1
6 0 20G 0 0

show fabric errors all:
Module errors:
slot channel crc hbeat sync DDR sync
1 0 0 0 0 0
2 1 0 0 0 0
3 0 0 0 0 0
3 1 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0

Fabric errors:
slot channel sync buffer timeout
1 0 0 0 0
2 1 0 0 0
3 0 0 0 0
3 1 0 0 0
5 0 0 0 0
6 0 0 0 0



This is our L3 core switch: it has two Sup720-3Bs installed and configured as active/hot-standby, two switching modules (24-port SFP and 48-port RJ45) and a (dual) WiSM installed.
As you can see from show fabric utilization, we're using about 1% of the 20 Gbps capacity of each fabric channel for modules 1, 3 and 5...
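As a rough sanity check (my own back-of-envelope arithmetic, not a Cisco formula), the utilization percentages can be turned into absolute throughput per channel:

```python
# Back-of-envelope only: turn the "show fabric utilization" percentages
# into absolute throughput for a given fabric channel speed.
def fabric_gbps(channel_speed_gbps, utilization_pct):
    return channel_speed_gbps * utilization_pct / 100.0

print(fabric_gbps(20, 1))  # 0.2 (Gbps), i.e. about 200 Mbps on a 20G channel
```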

For a better understanding of the 6500 architecture, read the interesting white paper: Cisco Catalyst 6500 Architecture White Paper
Another interesting resource is: Notes on Cisco Catalyst 6500 Architecture

If I haven't misunderstood, the "global switching mode" here is "Compact" because the CEF720 switching modules all have a "Crossbar" connection and a Sup720-3B is present; so, according to the white paper: "In this mode of operation, the switch can achieve centralized performance of up to 30Mpps independent of packet size."

Not bad! We are heavily under-utilizing this platform!

Let's take a look at a distribution L3 switch:

6506#sh module
Mod Ports Card Type Model Serial No.
--- ----- -------------------------------------- ------------------ -----------
1 24 CEF720 24 port 1000mb SFP WS-X6724-SFP Serials omitted
2 24 CEF720 24 port 1000mb SFP WS-X6724-SFP S...
3 48 48 port 10/100/1000mb EtherModule WS-X6148-GE-TX S...
5 2 Supervisor Engine 720 (Active) WS-SUP720-3B S...

Mod MAC addresses Hw Fw Sw Status
--- ---------------------------------- ------ ------------ ------------ -------
1 0021.a0b4.00b0 to 0021.a0b4.00c7 3.3 12.2(18r)S1 12.2(33)SXH4 Ok
2 0021.a07e.fbd8 to 0021.a07e.fbef 3.3 12.2(18r)S1 12.2(33)SXH4 Ok
3 0021.a08c.aa10 to 0021.a08c.aa3f 7.2 7.2(1) 8.7(0.22)BUB Ok
5 0021.1bff.09dc to 0021.1bff.09df 5.7 8.5(2) 12.2(33)SXH4 Ok

Mod Sub-Module Model Serial Hw Status
---- --------------------------- ------------------ ----------- ------- -------
1 Centralized Forwarding Card WS-F6700-CFC S.. 4.1 Ok
2 Centralized Forwarding Card WS-F6700-CFC S.. 4.1 Ok
5 Policy Feature Card 3 WS-F6K-PFC3B S.. 2.4 Ok
5 MSFC3 Daughterboard WS-SUP720 S.. 3.2 Ok

Mod Online Diag Status
---- -------------------
1 Pass
2 Pass
3 Pass
5 Pass

6506#sh fabric
show fabric active:
Active fabric card in slot 5
No backup fabric card in the system

show fabric mode:
Global switching mode is Truncated
dCEF mode is not enforced for system to operate
Fabric module is not required for system to operate
Modules are allowed to operate in bus mode
Truncated mode is allowed, due to presence of CEF720 module

Module Slot Switching Mode
1 Crossbar
2 Crossbar
3 Bus
5 Bus

show fabric congestion management:
Fabric clear-block is off (operational).

show fabric status all:
slot channel speed module fabric hotStandby Standby Standby
status status support module fabric
1 0 20G OK OK Y(not-hot)
2 0 20G OK OK Y(not-hot)
5 0 20G OK OK Y(not-hot)

show fabric utilization all:
slot channel speed Ingress % Egress %
1 0 20G 0 0
2 0 20G 0 0
5 0 20G 0 0

show fabric errors all:
Module errors:
slot channel crc hbeat sync DDR sync
1 0 0 0 0 0
2 0 0 0 0 0
5 0 0 0 0 0

Fabric errors:
slot channel sync buffer timeout
1 0 0 0 0
2 0 0 0 0
5 0 0 0 0


With a quick look at "Ethernet and Gigabit Ethernet Switching Modules" I realized that module 3 in this 6506E, the 48-port 10/100/1000 EtherModule (WS-X6148-GE-TX), has only a "Bus" connection; it's a so-called "classic line card", and that's why the global switching mode is "Truncated".

Let's take a look at Wikipedia: http://en.wikipedia.org/wiki/Catalyst_6500 (simple, nice photo):
6509 from wikipedia

Sunday, July 12, 2009

Private Vlans

Hi all, I've started reading Wendell Odom's book "CCIE R&S Exam Certification Guide" and I was a little surprised that private VLANs are only explained in theory in chapter 2, without configuration examples.
Maybe the topic is not so relevant for the R&S exam... but it was enough to stimulate my brain to do a lab ;-)

Well, I started from the "Configuring Private VLANs" guide for the 3750, but the configuration is basically the same on other platforms...

So here's the lab topology: I've used routers R2 to R6 as hosts, with R2 and R3 in the same community, R4 and R5 isolated, and R6 promiscuous. SW1 acts as the L3 switch.



The expected results are:
SW1 pings all hosts (R2 - R6)
R2 pings SW1, R3 and R6
R3 pings SW1, R2 and R6
R4 pings SW1 and R6
R5 pings SW1 and R6
R6 pings all hosts (R2 - R6)

Well, first, configure all R2 - R6 interfaces on the same subnet:
Pod1-R2#sh run int fa 0/0 | beg int
interface FastEthernet0/0
ip address 10.0.0.2 255.255.255.0
duplex auto
speed auto
end

Pod1-R3#sh run int fa 0/0 | beg int
interface FastEthernet0/0
ip address 10.0.0.3 255.255.255.0
duplex auto
speed auto
end

Pod1-R4#sh run int fa 0/0 | beg int
interface FastEthernet0/0
ip address 10.0.0.4 255.255.255.0
duplex auto
speed auto
end

Pod1-R5#sh run int fa 0/0 | beg int
interface FastEthernet0/0
ip address 10.0.0.5 255.255.255.0
duplex auto
speed auto
end

Pod1-R6#sh run int fa 0/1 | beg int
interface FastEthernet0/1
ip address 10.0.0.6 255.255.255.0
duplex auto
speed auto
end


Ok, the hosts are ready; no special configuration is required. Note that they're all in the same subnet.
Now on SW1 we must create the VLANs: VLAN 10 is the primary, and 101-102 are the secondaries.

SW1(config)#
vlan 101
private-vlan community
!
vlan 102
private-vlan isolated
!
vlan 10
private-vlan primary
private-vlan association 101-102

Well done, let's verify:

SW1#sh vlan private-vlan

Primary Secondary Type Ports
------- --------- ----------------- ------------------------------------------
10 101 community
10 102 isolated

There are no ports assigned to the private VLANs yet, so let's configure them!

SW1#sh cdp nei | inc R2
Pod1-R2 Fas 1/0/11 139 R S I 2621 Fas 0/0
SW1#sh run int fa 1/0/11 | beg int
interface FastEthernet1/0/11
description SW1 <-> R2
switchport private-vlan host-association 10 101
switchport mode private-vlan host
spanning-tree portfast
end

SW1#sh cdp nei | inc R3
Pod1-R3 Fas 1/0/4 144 R S I 2621 Fas 0/0
SW1#sh run int fa 1/0/4 | beg int
interface FastEthernet1/0/4
description SW1 <-> R3
switchport private-vlan host-association 10 101
switchport mode private-vlan host
spanning-tree portfast
end

SW1#sh cdp nei | inc R4
Pod1-R4 Fas 1/0/9 128 R S I 2621 Fas 0/0
SW1#sh run int fa 1/0/9 | beg int
interface FastEthernet1/0/9
description SW1 <-> R4
switchport private-vlan host-association 10 102
switchport mode private-vlan host
spanning-tree portfast
end

SW1#sh cdp nei | inc R5
Pod1-R5 Fas 2/0/11 164 R S I 2621 Fas 0/0
SW1#sh run int fa 2/0/11 | beg int
interface FastEthernet2/0/11
description SW1 <-> R5
switchport private-vlan host-association 10 102
switchport mode private-vlan host
spanning-tree portfast
end

SW1#sh cdp nei | inc R6
Pod1-R6 Fas 2/0/2 146 R S I 2621 Fas 0/1
SW1#sh run int fa 2/0/2 | beg int
interface FastEthernet2/0/2
description SW1 <-> R6
switchport private-vlan mapping 10 101-102
switchport mode private-vlan promiscuous
end

SW1#

Let's verify that the L2 ports are correctly assigned to the private VLANs:

SW1#sh vlan private-vlan

Primary Secondary Type Ports
------- --------- ----------------- ------------------------------------------
10 101 community Fa1/0/4, Fa1/0/11, Fa2/0/2
10 102 isolated Fa1/0/9, Fa2/0/2, Fa2/0/11

Ok, Fa2/0/2 is assigned to both secondary VLANs, due to its promiscuous mode.

Now we must complete the configuration with the L3 interface on SW1; basically, we have to map the secondary private VLANs to the L3 SVI:
SW1#sh run int vlan 10 | beg int
interface Vlan10
ip address 10.0.0.1 255.255.255.0
private-vlan mapping 101-102
end

SW1#sh int vlan 10 private-vlan mapping
Interface Secondary VLANs
--------- --------------------------------------------------------------------
vlan10 101, 102
SW1#


well done, let's verify our results:
1) SW1 pings all hosts (R2 - R6)
SW1#ping 10.0.0.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
SW1#ping 10.0.0.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
SW1#ping 10.0.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
SW1#ping 10.0.0.5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
SW1#ping 10.0.0.6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
SW1#

2) R2 pings SW1, R3 and R6
Pod1-R2#ping 10.0.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Pod1-R2#ping 10.0.0.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R2#ping 10.0.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.4, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R2#ping 10.0.0.5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.5, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R2#ping 10.0.0.6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R2#

3) R3 pings SW1, R2 and R6
Pod1-R3#ping 10.0.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R3#ping 10.0.0.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R3#ping 10.0.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.4, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R3#ping 10.0.0.5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.5, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R3#ping 10.0.0.6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Pod1-R3#

4) R4 pings SW1 and R6
Pod1-R4#ping 10.0.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R4#ping 10.0.0.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R4#ping 10.0.0.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R4#ping 10.0.0.5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.5, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R4#ping 10.0.0.6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R4#

5) R5 pings SW1 and R6
Pod1-R5#ping 10.0.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Pod1-R5#ping 10.0.0.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R5#ping 10.0.0.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R5#ping 10.0.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.4, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Pod1-R5#ping 10.0.0.6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R5#

6) R6 pings all hosts (R2 - R6)
Pod1-R6#ping 10.0.0.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Pod1-R6#ping 10.0.0.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R6#ping 10.0.0.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Pod1-R6#ping 10.0.0.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R6#ping 10.0.0.5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Pod1-R6#


Well, it seems that everything is working as expected: isolated ports can ping only the L3 primary VLAN SVI and the promiscuous port (R6)...
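The reachability rules can be condensed into a small Python sketch (my own model, not an IOS feature; host names and roles are taken from this lab, and the SVI on SW1 behaves like the promiscuous port):

```python
# Sketch (my own model) of the private-VLAN forwarding rules used in this lab.
def can_reach(a, b):
    if "promiscuous" in (a, b):
        return True                    # promiscuous talks to everybody
    if a.startswith("community:") and a == b:
        return True                    # hosts in the same community talk
    return False                       # isolated / cross-secondary: blocked

hosts = {"R2": "community:101", "R3": "community:101",
         "R4": "isolated", "R5": "isolated", "R6": "promiscuous"}

# R2 should reach only R3 and R6, matching the ping tests:
print([h for h in ("R3", "R4", "R5", "R6") if can_reach(hosts["R2"], hosts[h])])
# ['R3', 'R6']
```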

Let's take a look at the ARP table and mac-address-table of SW1; they look pretty unusual:
SW1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet 10.0.0.2 8 0014.a925.72a0 ARPA Vlan10 pv 101
Internet 10.0.0.3 5 0014.a925.4cd8 ARPA Vlan10 pv 101
Internet 10.0.0.1 - 0014.a98c.87c1 ARPA Vlan10
Internet 10.0.0.6 70 0014.a909.78d1 ARPA Vlan10
Internet 10.0.0.4 4 0014.a909.7870 ARPA Vlan10 pv 102
Internet 10.0.0.5 4 0014.a925.6460 ARPA Vlan10 pv 102

SW1#sh mac-address-table
Mac Address Table
-------------------------------------------

Vlan Mac Address Type Ports
---- ----------- -------- -----
All 0100.0ccc.cccc STATIC CPU
....
10 0014.a909.7870 DYNAMIC pv Fa1/0/9
10 0014.a909.78d1 DYNAMIC Fa2/0/2
10 0014.a925.4cd8 DYNAMIC pv Fa1/0/4
10 0014.a925.6460 DYNAMIC pv Fa2/0/11
10 0014.a925.72a0 DYNAMIC pv Fa1/0/11
101 0014.a909.78d1 DYNAMIC pv Fa2/0/2
101 0014.a925.4cd8 DYNAMIC Fa1/0/4
101 0014.a925.72a0 DYNAMIC Fa1/0/11
102 0014.a909.7870 BLOCKED Fa1/0/9
102 0014.a909.78d1 DYNAMIC pv Fa2/0/2
102 0014.a925.6460 BLOCKED Fa2/0/11

SW1#


note that the isolated hosts' MAC addresses appear as "BLOCKED" in the mac-address-table!

Thursday, July 9, 2009

QoS settings for ATA 186

Hi all,
playing with QoS on switches today, I found a small configuration mistake on ATA 186 ports.

This is the config I've found:
interface FastEthernet0/10
description ATA 186
switchport access vlan 4
switchport mode access
priority-queue out
mls qos trust device cisco-phone
mls qos trust cos
spanning-tree portfast
end


Well, we must recall that the ATA 186 is NOT a "cisco-phone" (it doesn't identify itself as one via CDP), so with these settings the port ends up untrusted. In addition, the port is in access mode, so there's no CoS to trust: CoS is an L2 marking carried only on trunk ports.

3560#sh cdp neighbors | inc 0/10
SEP00070E36E2C0 Fas 0/10 175 H ATA 186 Port 1

3560#sh mls qos int fa 0/10
FastEthernet0/10
trust state: not trusted
trust mode: trust cos
trust enabled flag: dis
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: cisco-phone
qos mode: port-based


I've read the "QoS recommendations" for ATA 186 on http://www.cisco.com/en/US/docs/voice_ip_comm/cucme/srnd/design/guide/endpts.html#wp1063240

The final configuration for ATA 186 will be:
interface FastEthernet0/10
description Voip ATA 186
switchport access vlan 4
switchport mode access
priority-queue out
mls qos trust dscp
spanning-tree portfast
end


and the port will be in trusted state; traffic will be prioritized according to the rest of the QoS configuration.
3560#sh mls qos int fa 0/10
FastEthernet0/10
trust state: trust dscp
trust mode: trust dscp
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based


... let's return telnetting around ;-)

Monday, July 6, 2009

configuring QoS on 3560

Hi all, after a little break (used to pass the QoS exam and become CCIP certified), I've started "playing" with QoS on my production LAN.

So, for each platform, I'm trying to think about QoS.

Let's start with the 3560 platform... we use Cisco IP phones with PCs connected to the phones' switched port.

Carefully read the following document:
- Catalyst 3560 Switch Software Configuration Guide, 12.2(20)SE - Configuring QoS
(all pictures in this post are links to this guide)

and look at the basic QoS model scheme (Fig. 31-2):

basic QoS model


Let's take a look at the default config when you enable mls qos on the 3560:


3650G-PoE#sh ver | inc Software|image
Cisco IOS Software, C3560 Software (C3560-IPBASE-M), Version 12.2(44)SE2, RELEASE SOFTWARE (fc2)
System image file is "flash:c3560-ipbase-mz.122-44.SE2/c3560-ipbase-mz.122-44.SE2.bin"
3650G-PoE#

3650G-PoE#sh mls qos
QoS is enabled
QoS ip packet dscp rewrite is enabled

ok, QoS is enabled... and now?
The task has only just started: DON'T leave the default configuration. Keep in mind that by default all switch ports are in untrusted mode, and SRR is enabled with queue 1 shaped with weight 25 (i.e. limited to 1/25 of the port bandwidth), and queue 1 is the one that serves CoS 5 traffic.

Well, first of all, you need to configure "trust" on the ports connected to users and to other switches, to avoid having all traffic remarked to CoS 0.

By default ports are in untrusted mode:

3650G-PoE#sh run int gi 0/7 | beg int
interface GigabitEthernet0/7
description *** IP phone (vlan 4) + PC (vlan 30) ***
switchport trunk encapsulation dot1q
switchport trunk native vlan 30
switchport trunk allowed vlan 4,30
switchport mode trunk
switchport voice vlan 4
end

3650G-PoE#sh mls qos int gi 0/7
GigabitEthernet0/7
trust state: not trusted
trust mode: not trusted
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based


ok, let's configure trust on the ip phone + pc port:

3650G-PoE#sh run int gi 0/7 | beg int
interface GigabitEthernet0/7
description *** IP phone (vlan 4) + PC (vlan 30) ***
switchport trunk encapsulation dot1q
switchport trunk native vlan 30
switchport trunk allowed vlan 4,30
switchport mode trunk
switchport voice vlan 4
mls qos trust device cisco-phone
mls qos trust cos
end

3650G-PoE#sh mls qos int gi 0/7
GigabitEthernet0/7
trust state: trust cos
trust mode: trust cos
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: cisco-phone
qos mode: port-based


3650G-PoE#sh loggin | inc TRUST
Jul 6 17:37:56: %SWITCH_QOS_TB-5-TRUST_DEVICE_DETECTED: cisco-phone detected on port Gi0/7,
port's configured trust state is now operational.
3650G-PoE#


Ok: CDP is enabled globally, so the switch "senses" where the Cisco IP phones are and trusts those ports using CoS, according to our configuration.

Well, now look at the cos-to-dscp map, which is used to assign an internal DSCP to incoming traffic:
3650G-PoE#sh mls qos maps cos-dscp
Cos-dscp map:
cos: 0 1 2 3 4 5 6 7
--------------------------------
dscp: 0 8 16 24 32 40 48 56


This is the default cos-to-dscp map. Remember that Cisco IP phones mark voice traffic as CoS 5, voice signaling as CoS 3, and PC-port traffic as CoS 0 (unless extended trust is configured).
Note that the default cos-dscp map assigns CoS 5 (the default for voice traffic) to DSCP 40, while we most likely want to map it to DSCP 46 = EF... so we need to modify this map.

Let's modify the map as follows:

3650G-PoE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
3650G-PoE(config)#mls qos map cos-dscp 0 8 16 26 32 46 48 56
3650G-PoE(config)#end
3650G-PoE#sh mls qos maps cos-dscp
Cos-dscp map:
cos: 0 1 2 3 4 5 6 7
--------------------------------
dscp: 0 8 16 26 32 46 48 56

Note: this is also the "auto-qos voip" cos-dscp map...
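In Python terms (just an illustration with the values from the outputs above), the two maps differ only in the CoS 3 and CoS 5 entries:

```python
# Illustration only: the default cos-to-dscp map is simply cos * 8; the
# modified ("auto-qos voip") map overrides CoS 3 -> 26 (AF31, voice
# signaling) and CoS 5 -> 46 (EF, voice bearer).
default_map = {cos: cos * 8 for cos in range(8)}
autoqos_map = {**default_map, 3: 26, 5: 46}

print(default_map[5], autoqos_map[5])  # 40 46
```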

After the classification stage comes the policer stage; by default, no policer is enabled:

3650G-PoE#sh mls qos interface gi 0/7 policers
GigabitEthernet0/7

3650G-PoE#sh mls qos aggregate-policer
3650G-PoE#

So all our incoming traffic will be "in profile" and won't be remarked at the "mark" stage.

Now it's the turn of the scheduling and queuing stage before the internal ring, the so-called "ingress queues": look at figure 31-5:


Scheduling and Queuing


There are two ingress queues, with the sharing option only (no shaping on ingress!).
Traffic is placed in the two queues according to the cos-input-q map, since we trust CoS on this port (ports that trust DSCP use the dscp-input-q map instead):

3650G-PoE#sh mls qos maps cos-input-q
Cos-inputq-threshold map:
cos: 0 1 2 3 4 5 6 7
------------------------------------
queue-threshold: 1-1 1-1 1-1 1-1 1-1 2-1 1-1 1-1


3650G-PoE#sh mls qos input-queue
Queue : 1 2
----------------------------------------------
buffers : 90 10
bandwidth : 4 4
priority : 0 10
threshold1: 100 100
threshold2: 100 100


Well, by default only CoS 5 traffic is placed in queue 2.
Queue 2 has less bandwidth (10%: voice traffic is expected to be less than data) but is the priority queue, so its 10% share is guaranteed and served ahead of the shared scheduling.
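If I read the configuration guide correctly, the priority queue's share is served first and the remainder is split by the bandwidth weights. A rough model (my own interpretation, not Cisco's exact algorithm):

```python
# Rough model of ingress SRR with a priority queue: the priority queue's
# configured share is guaranteed first; the remaining bandwidth is then
# shared between both queues according to the "bandwidth" weights.
def ingress_shares(priority_pct, weights, pq_index=1):
    rest = 100.0 - priority_pct
    total = sum(weights)
    shares = [rest * w / total for w in weights]
    shares[pq_index] += priority_pct   # add back the guaranteed share
    return shares

# Default: priority 10 on queue 2, bandwidth weights 4:4.
print(ingress_shares(10, [4, 4]))  # [45.0, 55.0]
```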

The "auto-qos voip" generated configuration for input queue is:

no mls qos srr-queue input priority-queue 1
no mls qos srr-queue input priority-queue 2

mls qos srr-queue input bandwidth 90 10
mls qos srr-queue input threshold 1 8 16
mls qos srr-queue input threshold 2 34 66
mls qos srr-queue input buffers 67 33

mls qos srr-queue input cos-map queue 1 threshold 2 1
mls qos srr-queue input cos-map queue 1 threshold 3 0
mls qos srr-queue input cos-map queue 2 threshold 1 2
mls qos srr-queue input cos-map queue 2 threshold 2 4 6 7
mls qos srr-queue input cos-map queue 2 threshold 3 3 5
[..dscp input settings omitted]

In fact, since this config is applied to a port with a Cisco phone connected, it's expected to receive only CoS 0, 3 and 5. But you can configure a policer and (re)mark out-of-profile traffic, so we can "read" this auto-qos config as:
- queue 1 threshold 2: cos 1 (maybe you use it for scavenger traffic)
- queue 1 threshold 3: cos 0 (normal data traffic expected)
- queue 2 threshold 1: cos 2 (what kind of traffic here? out-of-profile? video?)
- queue 2 threshold 2: cos 4, 6, 7 (with an attached IP phone)
- queue 2 threshold 3: cos 3, 5 (voice signaling and voice traffic)

I guess this can be fine for our purposes.
The new map will be:
3650G-PoE#sh mls qos maps cos-input-q
Cos-inputq-threshold map:
cos: 0 1 2 3 4 5 6 7
------------------------------------
queue-threshold: 1-3 1-2 2-1 2-3 2-2 2-3 2-2 2-2

3650G-PoE#sh mls qos input-queue
Queue : 1 2
----------------------------------------------
buffers : 67 33
bandwidth : 90 10
priority : 0 10
threshold1: 8 34
threshold2: 16 66
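The cos-inputq-threshold map shown above can be rebuilt by replaying the "mls qos srr-queue input cos-map" lines (the helper below is my own, not an IOS command):

```python
# Replay the auto-qos "mls qos srr-queue input cos-map" commands to rebuild
# the cos-inputq-threshold map shown by "sh mls qos maps cos-input-q".
cos_map_cmds = [
    (1, 2, [1]),        # ... queue 1 threshold 2 1
    (1, 3, [0]),        # ... queue 1 threshold 3 0
    (2, 1, [2]),        # ... queue 2 threshold 1 2
    (2, 2, [4, 6, 7]),  # ... queue 2 threshold 2 4 6 7
    (2, 3, [3, 5]),     # ... queue 2 threshold 3 3 5
]
queue_threshold = {}
for queue, threshold, cos_list in cos_map_cmds:
    for cos in cos_list:
        queue_threshold[cos] = "%d-%d" % (queue, threshold)

print(" ".join(queue_threshold[c] for c in range(8)))
# 1-3 1-2 2-1 2-3 2-2 2-3 2-2 2-2
```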



At this stage the traffic is switched onto the internal ring and placed in the egress queues of the egress interface...
To better understand egress queuing, read this interesting article by Petr Lapukhov:
Quick Notes on the 3560 Egress Queuing

Let's take a look at the "auto-qos voip" generated config for the CoS maps and egress queues...

mls qos srr-queue output cos-map queue 1 threshold 3 5
mls qos srr-queue output cos-map queue 2 threshold 3 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 2 4
mls qos srr-queue output cos-map queue 4 threshold 2 1
mls qos srr-queue output cos-map queue 4 threshold 3 0
mls qos queue-set output 1 threshold 1 138 138 92 138
mls qos queue-set output 1 threshold 2 138 138 92 400
mls qos queue-set output 1 threshold 3 36 77 100 318
mls qos queue-set output 1 threshold 4 20 50 67 400
mls qos queue-set output 2 threshold 1 149 149 100 149
mls qos queue-set output 2 threshold 2 118 118 100 235
mls qos queue-set output 2 threshold 3 41 68 100 272
mls qos queue-set output 2 threshold 4 42 72 100 242
mls qos queue-set output 1 buffers 10 10 26 54
mls qos queue-set output 2 buffers 16 6 17 61


recall that you must enable the egress priority queue per interface:

3650G-PoE(config)#int gi 0/7
3650G-PoE(config-if)#priority-queue out
3650G-PoE(config-if)#end
3650G-PoE#sh mls qos interface gi 0/7 queueing
GigabitEthernet0/7
Egress Priority Queue : enabled
Shaped queue weights (absolute) : 25 0 0 0
Shared queue weights : 10 10 60 20
The port bandwidth limit : 100 (Operational Bandwidth:100.0)
The port is mapped to qset : 2


[... to be continued ...]