Topic: CobraNet & Layer 3 routing/switching
A recent project of mine ran into problems with CobraNet on a network using Layer 3 switches. Scott mentioned that he has encountered similar problems and was interested in our solution. What follows is a description of the system, the problem, and our solution, in hopes that it may be helpful to anyone who runs into similar difficulties.
The network comprises three VLANs: two for CobraNet and a third for control only. The physical layout consisted of typical edge switches in each of the rack rooms, all linked back to frame-based core switches in the data center. All of the control protocols used IP, so the core switches were configured to provide Layer 3 switching (routing) among the three VLANs. This kept the CobraNet multicast traffic isolated from the control hosts while still allowing control traffic to reach devices that use the same LAN port for both CobraNet and control. To provide this Layer 3 switching, the core switch was configured with a Layer 3 interface in each VLAN, which acted as the default gateway for that VLAN. Unlike a traditional, distinct routing device, which would typically have a real physical interface on each network, these interfaces were virtual, existing within the core switch itself.
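To make that forwarding model concrete, here is a minimal Python sketch of the decision each host (and the core switch) effectively makes: same-subnet traffic stays at Layer 2, off-subnet traffic goes to the VLAN's virtual Layer 3 interface. The subnet and gateway addresses are invented for illustration; the real addressing isn't given here.

from ipaddress import ip_address, ip_network

# Hypothetical addressing -- three VLANs as described above.
VLAN_SUBNETS = {
    "cobranet-a": ip_network("10.1.10.0/24"),   # CobraNet VLAN 1
    "cobranet-b": ip_network("10.1.20.0/24"),   # CobraNet VLAN 2
    "control":    ip_network("10.1.30.0/24"),   # control-only VLAN
}

# Each VLAN's default gateway is a virtual Layer 3 interface inside
# the core switch; by convention, the first host address in the subnet.
GATEWAYS = {name: next(net.hosts()) for name, net in VLAN_SUBNETS.items()}

def next_hop(src_vlan: str, dst_ip: str):
    """Return None for same-subnet (pure Layer 2) delivery, or the
    VLAN's gateway interface for traffic that must be routed."""
    dst = ip_address(dst_ip)
    if dst in VLAN_SUBNETS[src_vlan]:
        return None                    # switched in hardware, never routed
    return GATEWAYS[src_vlan]          # handed to the core's routing function

# A control host reaching a CobraNet device crosses VLANs via the gateway:
print(next_hop("control", "10.1.10.50"))   # -> 10.1.30.1 (gateway)
print(next_hop("control", "10.1.30.99"))   # -> None (same subnet)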
As usual, broadcast and unknown multicast/unicast traffic is flooded to every port on a VLAN, including both real physical ports and the virtual interfaces described above. This exposed the Layer 3 interfaces within the core switch to the multicast CobraNet traffic in their respective VLANs, even though that traffic carries no Layer 3 information and therefore cannot be routed. The internal routing processor still had to process each frame, if only to drop it. That routing processor is a software entity with far less throughput than the Layer 2 switching fabric, which is implemented in hardware. We were seeing a consistent 99% CPU utilization in the core switches, causing performance degradation on other networks (this was a single, integrated, facility-wide network) that were running applications requiring the services of the routing processor. The CobraNet traffic itself, however, being native Layer 2 and therefore forwarded in hardware, was not affected.
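The Python sketch below illustrates the distinction, keying on CobraNet's Ethernet type of 0x8819 (the MAC addresses are purely illustrative). A frame with that type carries no IP header at all, so a Layer 3 interface that receives one can do nothing with it except examine and discard it, in software.

import struct

ETH_HDR = struct.Struct("!6s6sH")   # dst MAC, src MAC, EtherType

ETHERTYPE_IPV4     = 0x0800
ETHERTYPE_COBRANET = 0x8819         # CobraNet's Ethernet type

def classify(frame: bytes) -> str:
    """Classify a raw Ethernet frame the way the core switch sees it."""
    _dst, _src, ethertype = ETH_HDR.unpack_from(frame)
    if ethertype == ETHERTYPE_COBRANET:
        # No IP header follows; nothing here is routable. The software
        # routing processor can only inspect the frame and drop it.
        return "punted to routing processor, then dropped"
    if ethertype == ETHERTYPE_IPV4:
        return "routable: handed to Layer 3 forwarding"
    return "other Layer 2 protocol"

# A multicast CobraNet frame, flooded to every port including the
# Layer 3 interface (illustrative addresses, minimum-length payload):
cobranet = ETH_HDR.pack(b"\x01\x60\x2b\xff\xff\xff",
                        b"\x00\x60\x2b\x00\x00\x01",
                        ETHERTYPE_COBRANET) + b"\x00" * 46
print(classify(cobranet))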
Here is the solution we came up with. The Layer 3 interfaces in each VLAN were removed. This immediately solved the CPU utilization problem, but it also cut off control communication to devices on the CobraNet VLANs. So, for each VLAN a 'proxy' VLAN was created in the core switch; these new VLANs do not need to be propagated to the edge switches. The Layer 3 interfaces previously assigned to the original VLANs were moved to their respective proxy VLANs. Each VLAN, both original and proxy, was given a new physical access (non-trunking) port on the core switch, and each original VLAN was then connected to its proxy through these six new ports with patch cables. Finally, each proxy VLAN was configured with a hardware-based traffic filter that dropped all CobraNet traffic, matching on its Ethernet type value of 0x8819.

CobraNet traffic, along with all other intra-subnet traffic, flowed as before within each original VLAN, but anything forwarded into a proxy VLAN was subject to the filter before it could reach the routing processor. Traffic needing to leave its VLAN, which by definition is IP (Ethernet type 0x0800), passed uninhibited through the filter to the Layer 3 interface for normal routing.
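Sketched in Python, the filter's decision rule is a single comparison. One detail worth noting: because it drops only 0x8819 rather than permitting only IP, ARP (Ethernet type 0x0806) also passes, which the hosts need in order to resolve the gateway's MAC address in the first place.

ETHERTYPE_IPV4     = 0x0800
ETHERTYPE_ARP      = 0x0806
ETHERTYPE_COBRANET = 0x8819

def proxy_vlan_filter(ethertype: int) -> bool:
    """Return True if a frame may cross from the original VLAN into
    the proxy VLAN (and thus reach the Layer 3 interface)."""
    # The only rule the filter needs: drop CobraNet outright.
    # Everything else -- IP, plus the ARP needed to find the
    # gateway's MAC address -- passes untouched.
    return ethertype != ETHERTYPE_COBRANET

assert not proxy_vlan_filter(ETHERTYPE_COBRANET)  # audio stays in its VLAN
assert proxy_vlan_filter(ETHERTYPE_IPV4)          # control traffic routes
assert proxy_vlan_filter(ETHERTYPE_ARP)           # gateway resolution works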