Multilayer Switching: More Routed Ports and Comparisons with SVIs

Remember the ‘follow the path’ technique when troubleshooting at Layer 3.

Router-on-a-stick (ROAS) is not scalable. (Doh!)

SVI Checklist

  • Create the VLAN before the SVI. The VLAN must exist and be active when the SVI is created – the VLAN will not be dynamically created for you at that time.
  • Open the SVI with no shutdown after configuring an IP address, just as you would open a physical interface.
  • The SVI and VLAN have an association, but they’re not the same thing.
  • The only SVI on the switch by default is the SVI for VLAN 1, intended to allow remote switch administration and configuration.
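
The checklist above comes together in a minimal SVI configuration – VLAN 10 and the 10.1.10.0/24 subnet here are illustrative values, not anything from a real design:

MLS(config)# vlan 10
MLS(config-vlan)# exit
MLS(config)# interface vlan 10
MLS(config-if)# ip address 10.1.10.1 255.255.255.0
MLS(config-if)# no shutdown

Note the order: the VLAN is created first, then the SVI, then the SVI is opened.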

SVIs are a great way to allow inter-VLAN communication, but you must also enable IP routing on the switch – and configure a routing protocol if the switch needs to reach networks beyond its directly connected VLANs.
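
As a sketch, routing between SVIs just needs IP routing turned on globally; the EIGRP AS number and network statement below are example values, needed only if these networks must be advertised beyond the switch:

MLS(config)# ip routing
MLS(config)# router eigrp 100
MLS(config-router)# network 10.0.0.0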

Fallback Bridging – (Uncommon in the real world)

CEF has a limitation: IPX and AppleTalk are not supported by CEF, while SNA and LAT are nonroutable protocols. If you’re running any of these on a CEF-enabled switch, you’ll need fallback bridging to get this traffic from one VLAN to another.

Fallback bridging involves the creation of bridge groups, and the SVIs will have to be added to these bridge groups.

To create a bridge group (fallback bridging runs the VLAN-bridge spanning-tree protocol):

MLS(config)# bridge 1 protocol vlan-bridge

To join an SVI to a bridge group:

MLS(config)#interface vlan 10
MLS(config-if)#bridge-group 1

SVI Advantages

  • No single point of failure
  • Faster than ROAS
  • Don’t need to configure a trunk between the L2 switch and the router

If you have an L3 switch, you’re much better off using SVIs for inter-VLAN communication rather than ROAS.

A routing black hole can result when an SVI stays up even though the only “up/up” ports in that VLAN are connected to network monitors or similar devices – traffic is routed into the VLAN, but there are no real hosts there to receive it.

To avoid this, exclude such ports from the SVI’s “up/up” calculation with the interface-level switchport autostate exclude command. That port will then no longer keep the SVI up on its own.
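
As a sketch, assuming the network monitor hangs off port fa0/24:

MLS(config)# interface fastethernet 0/24
MLS(config-if)# switchport autostate exclude

With this in place, the VLAN’s SVI goes down when the last real host port in the VLAN goes down, and routes to that subnet are withdrawn instead of black-holing traffic.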

Multilayer Switching: MLS Fundamentals

Fundamentals

Multilayer switches can perform packet switching up to ten times as fast as a traditional software-based router.

When it comes to Cisco Catalyst switches, this hardware switching is controlled by a route processor (or L3 engine), which must download routing information to the switching hardware itself. To make this hardware-based packet processing happen, Cat switches will run either the older Multilayer Switching (MLS) or the newer Cisco Express Forwarding (CEF).

With multilayer switching, it’s Application-Specific Integrated Circuits (ASICs) that perform the L2 rewrite on these packets – overwriting the source and destination MAC addresses as the packet is forwarded to the next hop.

In addition to the CAM table, we have a TCAM table – Ternary Content Addressable Memory. Basically, the TCAM table stores everything the CAM table can’t, including info about ACLs and QoS.

Route Caching

Route caching devices have both a routing processor and a switching engine. The routing processor routes a flow’s first packet, the switching engine snoops on that packet and its destination, and then the switching engine takes over and forwards the rest of the packets in that flow. Route caching can be effective, but there’s one slight drawback – the first packet in every flow is switched in software.
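
On IOS platforms that support it, route caching (fast switching) is enabled per interface – a sketch, with VLAN 10 as an example interface:

MLS(config)# interface vlan 10
MLS(config-if)# ip route-cache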

CEF (Cisco Express Forwarding)

Cisco Express Forwarding (CEF) is a highly popular method of multilayer switching. Primarily designed for backbone switches, this topology-based switching method requires special hardware, so it’s not available on all L3 switches. CEF is highly scalable, and is also easier on a switch’s CPU than route caching.

CEF has two major components – the Forwarding Information Base and the Adjacency Table.

The Forwarding Information Base (FIB) contains the usual routing information – the destination networks, their masks, the next-hop IP addresses, etc. – and CEF will use the FIB to make L3 prefix-based decisions. The FIB’s contents mirror those of the IP routing table. (show ip cef)

The routing information in the FIB is updated dynamically as change notifications are received from the L3 engine. Since the FIB is prepopulated with the information from the routing table, the MLS can find the routing information quickly.

*If the TCAM table is ever full, a wildcard entry will redirect traffic to the routing engine.

The Adjacency Table (AT) holds next-hop L2 information. As adjacent hosts are discovered via ARP, their L2 rewrite information is kept in this table for CEF switching.

  • Punting packets from the hardware to the L3 engine for software processing = ‘punt adjacency’
  • Sending packets to nowhere (dropping them) = ‘null adjacency’
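
To inspect the FIB and the adjacency table (including any punt and null adjacencies), the usual verification commands are:

MLS# show ip cef
MLS# show adjacency detail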

The Control Plane And The Data Plane

Control Plane

  • “CEF control plane”
  • “control plane”
  • “Layer 3 engine” or “Layer 3 forwarding engine”

The control plane’s job is to first build the IP routing and ARP tables; the FIB and adjacency table are then derived from them.

Data Plane

  • “data plane”
  • “hardware engine”
  • “ASIC”

The data plane places data in the L3 switch’s memory while the FIB and AT tables are consulted, and then performs any necessary encapsulation before forwarding the data to the next hop.

Exceptions To The Rule

Packets that CANNOT be hardware switched:

  • Packets with IP header options
  • Packets that must be fragmented before transmission (because they exceed the MTU)
  • NAT packets
  • Packets that came to the MLS with an invalid encap type

Switching Speeds

Fastest to slowest, per Cisco:

  • 1. Distributed CEF (dCEF). The name is the recipe – the CEF workload is distributed over multiple CPUs, with each forwarding engine holding its own copy of the FIB.
  • 2. CEF
  • 3. Fast Switching
  • 4. Process Switching (sometimes jokingly referred to as “slow switching” – it’s quite an involved process and is a real CPU hog)
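
CEF is typically on by default on platforms that support it; if it has been disabled, it can be re-enabled globally and verified – VLAN 10 below is just an example interface:

MLS(config)# ip cef
MLS(config)# exit
MLS# show ip cef summary
MLS# show cef interface vlan 10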