SDN Using NV Security Technical Implementation Guide
Profiles
No profile (default benchmark)
An XCCDF Profile
24 rules organized in 24 groups
NET-SDN-002
1 Rule
Northbound API traffic received by the SDN controller must be authenticated using a FIPS-approved message authentication code algorithm.
High Severity
The SDN controller determines how traffic should flow through physical and virtual network devices based on application profiles, network infrastructure resources, security policies, and business requirements that it receives via the northbound API. It also receives network service requests from orchestration and management systems to deploy and configure network elements via this API. In turn, the northbound API presents a network abstraction to these orchestration and management systems. If attackers could leverage a vulnerable northbound API, they would have control over the SDN infrastructure through the controller by inserting policies. If the SDN controller were to receive fictitious information from a rogue application or orchestration system, non-optimized network paths would be produced that could disrupt network operations, resulting in inefficient application and business processes. Hence, it is imperative that all northbound API traffic received by the SDN controller is authenticated.
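As an illustration of the requirement, the sketch below validates an HMAC-SHA256 tag (HMAC with SHA-2 is a FIPS-approved message authentication code per FIPS 198-1) on an incoming northbound request. The key provisioning and helper name are hypothetical, not part of any specific controller's API.

    import hashlib
    import hmac

    SHARED_KEY = b"provisioned-out-of-band"  # hypothetical pre-shared key

    def verify_northbound_request(body: bytes, received_tag_hex: str) -> bool:
        """Recompute the HMAC-SHA256 tag over the request body and compare."""
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        # compare_digest avoids leaking the mismatch position via timing
        return hmac.compare_digest(expected, received_tag_hex)

A request whose tag fails verification should be dropped and logged rather than processed.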
NET-SDN-003
1 Rule
Access to the SDN management and orchestration systems must be authenticated using a FIPS-approved message authentication code algorithm.
Medium Severity
The SDN controller receives network service requests from orchestration and management systems to deploy and configure network elements via the northbound API. In turn, the northbound API presents a network abstraction to these systems. If either the orchestration or management system were breached, a rogue user could make modifications to the business or security policy that could disrupt network operations, resulting in inefficient application and business processes as well as bypassing security controls. In addition, invalid network service requests could be processed that could exhaust compute, storage, and network resources, leaving no resources available for legitimate business requirements.
NET-SDN-004
1 Rule
Southbound API control plane traffic must traverse an out-of-band path or be encrypted using a FIPS-validated cryptographic module.
High Severity
Southbound APIs such as OpenFlow provide the forwarding tables to network devices such as switches and routers, both physical and virtual (hypervisor-based). The SDN controllers use the concept of flows to identify network traffic based on predefined rules that can be statically or dynamically programmed by the SDN control software, thereby determining how traffic should flow through network devices based on usage patterns, applications, and policy, optimizing traffic paths based on business requirements rather than network infrastructure design. If an SDN-aware router or switch received erroneous forwarding information from a rogue controller, traffic could be black-holed or even forwarded to a malicious user to sniff traffic and perform a man-in-the-middle attack. Hence, it is imperative to secure flow table updates by encrypting all southbound API traffic or deploying an out-of-band network for this traffic to traverse.
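For the encryption option, a minimal sketch of a switch-side connection to an OpenFlow controller over mutually authenticated TLS (standard OpenFlow port 6653) using Python's ssl module follows; the hostname and certificate paths are illustrative assumptions.

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_verify_locations("ca.pem")              # trust anchor for the controller
    ctx.load_cert_chain("switch.pem", "switch.key")  # switch identity for mutual TLS

    with socket.create_connection(("controller.example.mil", 6653)) as raw:
        with ctx.wrap_socket(raw, server_hostname="controller.example.mil") as tls:
            print("negotiated:", tls.version(), tls.cipher())

Whether the negotiated cipher suites come from a FIPS-validated module depends on the underlying crypto library (for example, an OpenSSL build operating in FIPS mode), not on this code.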
NET-SDN-005
1 Rule
Northbound API traffic must traverse an out-of-band path or be encrypted using a FIPS-validated cryptographic module.
High Severity
The SDN controller receives network service requests from orchestration and management systems to deploy and configure network elements via the northbound API. In turn, the northbound API presents a network abstraction to these systems. If either the orchestration or management system were breached, a rogue user could make modifications to the business or security policy that could disrupt network operations, resulting in inefficient application and business processes and bypassing security controls. In addition, invalid network service requests could be processed that could exhaust compute, storage, and network resources, leaving no resources available for legitimate business requirements. Hence, it is imperative that all northbound API traffic is secured by encrypting the traffic or deploying an out-of-band network for this traffic to traverse.
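A hedged sketch of the client side: an orchestration system calling the controller's northbound REST API over mutually authenticated TLS with the requests library. The endpoint URL and certificate paths are assumptions for illustration.

    import requests

    resp = requests.get(
        "https://controller.example.mil:8443/restconf/data",  # hypothetical endpoint
        cert=("orchestrator.pem", "orchestrator.key"),        # client identity
        verify="ca.pem",                                      # validate the controller cert
        timeout=10,
    )
    resp.raise_for_status()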
NET-SDN-006
1 Rule
Southbound API management plane traffic for provisioning and configuring virtual network elements within the SDN infrastructure must be authenticated using a FIPS-approved message authentication code algorithm.
Medium Severity
Management and orchestration systems within the SDN framework instantiate, deploy, and configure virtual network elements. These systems also define the virtual network topology by specifying the connectivity between the network elements and the workloads, both virtual and physical. If, in the absence of authentication, a hypervisor host within the SDN infrastructure were to receive fictitious information from a rogue management or orchestration system, the virtual network topology could be altered by deploying rogue network elements to create non-optimized network paths, resulting in inefficient application and business processes. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls.
NET-SDN-007
1 Rule
Southbound API management plane traffic for provisioning and configuring virtual network elements within the SDN infrastructure must traverse an out-of-band path or be encrypted using a FIPS-validated cryptographic module.
Medium Severity
Management and orchestration systems within the SDN framework instantiate, deploy, and configure network elements within the SDN infrastructure. These systems also define the virtual network topology by specifying the connectivity between the network elements and the workloads, both virtual and physical. If a hypervisor host within the SDN infrastructure were to receive fictitious information from a rogue management or orchestration system, the virtual network topology could be altered by deploying rogue network elements to create non-optimized network paths, resulting in inefficient application and business processes. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls. Spoofed management plane traffic generated by a rogue management system could result in a denial-of-service attack on the hypervisor hosts, exhausting the computing resources and disrupting workload processing or even creating a network outage. Hence, it is imperative that all SDN management plane traffic is secured by encrypting the traffic or deploying an out-of-band network for this traffic to traverse.
NET-SDN-008
1 Rule
Southbound API management plane traffic for configuring SDN parameters on physical network elements must be authenticated using DOD PKI certificate-based authentication.
Medium Severity
Physical SDN-enabled switches are dependent on the SDN controller for their forwarding tables as well as their configuration and service parameters. This information is provided to the switches via SDN management plane protocols such as Network Configuration Protocol (NETCONF) and Open vSwitch Database Management Protocol (OVSDB). The latter provides configuration support for OpenFlow-enabled switches such as Open vSwitch, as well as many vendor switches. Without authenticating management packets, physical switches within the SDN infrastructure could receive fictitious information from a rogue management system that could shut down interfaces, thereby altering the physical network topology. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls. Legitimate traffic could be dropped by deploying access control lists to active interfaces. Spoofed management plane traffic generated by a rogue management system could result in a denial-of-service attack on the switches, resulting in a network outage.
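The sketch below uses ncclient to open a NETCONF session (RFC 6241) over SSH with key-based authentication; actual DOD PKI certificate-based authentication is platform-dependent (for example, NETCONF over TLS per RFC 7589). Host name, account, and key path are illustrative assumptions.

    from ncclient import manager

    with manager.connect(
        host="sdn-switch01.example.mil",
        port=830,                                   # IANA-assigned NETCONF-over-SSH port
        username="netops",
        key_filename="/home/netops/.ssh/id_ecdsa",  # key bound to an authorized identity
        hostkey_verify=True,                        # reject unknown device host keys
    ) as m:
        print(m.get_config(source="running").data_xml[:200])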
NET-SDN-009
1 Rule
Southbound API management plane traffic for configuring SDN parameters on physical network elements must be encrypted using a FIPS-validated cryptographic module.
Medium Severity
Physical SDN-enabled switches are dependent on the SDN controller for their forwarding tables, as well as their configuration and service parameters. This information is provided to the switches via SDN management plane protocols such as Network Configuration Protocol (NETCONF) and Open vSwitch Database Management Protocol (OVSDB). The latter provides configuration support for OpenFlow-enabled switches such as Open vSwitch, as well as many vendor switches. If a switch within the SDN infrastructure were to receive fictitious information from a rogue management system, the physical network topology could be altered by shutting down interfaces. Legitimate traffic could be dropped by deploying access control lists to active interfaces. By altering the network topology, the attacker would have the ability to force traffic to bypass security controls. Spoofed management plane traffic generated by a rogue management system could result in a denial-of-service attack on the switches, resulting in a network outage. Hence, it is imperative that all SDN management plane traffic is secured by encrypting the traffic using a FIPS-validated cryptographic module.
NET-SDN-010
1 Rule
Physical SDN controllers and servers hosting SDN applications must reside within the management network with multiple paths that are secured by a firewall to inspect all ingress traffic.
Medium Severity
Management and orchestration systems deploy and configure network devices such as switches and routers, both physical and virtual. SDN controllers are made aware of the deployments and are able to define the network topology through abstraction. The controllers are then able to provide forwarding table information to each router or switch instance within the SDN infrastructure. If an SDN-aware router or switch received erroneous forwarding information from a rogue controller, traffic could be black-holed or even forwarded to a malicious user to sniff traffic and to perform a man-in-the-middle attack. If attackers could leverage a vulnerable northbound API, they would have control over the SDN infrastructure through the controller by creating their own policies. If the SDN controller were to receive fictitious information from a rogue application, non-optimized network paths would be produced that could disrupt network operations, resulting in inefficient application and business processes. If either the orchestration or management system were breached, invalid network service requests could be processed that could exhaust compute, storage, and network resources, leaving no resources available for legitimate business requirements.
NET-SDN-011
1 Rule
SDN-enabled routers and switches must provide link state information to the SDN controller to create new forwarding decisions for the network elements.
Low Severity
Southbound APIs such as OpenFlow provide the forwarding tables to network devices such as switches and routers. SDN controllers have an abstraction of the network topology based on discovery and provisioning information provided by management and orchestration systems. The SDN controllers use the concept of flows to identify network traffic based on predefined rules that can be statically or dynamically programmed by the SDN control software. With the network topology abstraction, they are able to determine how traffic should flow through network devices based on application data, business policy, bandwidth, and path availability. If the SDN-enabled network elements do not provide updated link state information, the SDN controller is not able to reconverge the network to verify there is reachability to all destinations.
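As one concrete illustration (not the only mechanism), the Ryu controller framework delivers OpenFlow PortStatus messages carrying exactly this link state information; the minimal app below simply logs them so path recomputation can be triggered.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls

    class LinkStateMonitor(app_manager.RyuApp):
        @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
        def port_status_handler(self, ev):
            msg = ev.msg
            # reason is OFPPR_ADD, OFPPR_DELETE, or OFPPR_MODIFY
            self.logger.info("port %s changed (reason=%s) on dpid=%s",
                             msg.desc.port_no, msg.reason, msg.datapath.id)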
NET-SDN-012
1 Rule
Quality of service (QoS) must be implemented on the underlying IP network to provide preferred treatment for traffic between the SDN controllers and SDN-enabled switches and hypervisors.
Low Severity
With the network topology abstraction, the SDN controllers are able to determine how traffic should flow through network devices based on application data, business policy, bandwidth, and path availability. When updated link state information is provided by the network elements, the SDN controller must recalculate the optimized paths for network reconvergence and provide the new forwarding tables to the network elements. When network congestion occurs, all traffic has an equal chance of being dropped. QoS provisioning categorizes network traffic, prioritizes it according to its relative importance, and provides preferential treatment using various priority queuing techniques. Prioritization of both link state updates and control plane traffic must be implemented to ensure that the network can converge during periods of severe network congestion.
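A minimal sketch of the marking half of such a policy: a controller-side socket tags its control plane traffic with DSCP CS6 (network control) so underlay queuing can prioritize it. The DSCP choice and addresses are illustrative assumptions, and the underlay switches must be configured to honor the marking.

    import socket

    DSCP_CS6 = 48        # network-control traffic class
    TOS = DSCP_CS6 << 2  # DSCP occupies the upper six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)  # Linux/BSD socket option
    sock.connect(("sdn-switch01.example.mil", 6653))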
NET-SDN-013
1 Rule
SDN controllers must be deployed as clusters and on separate physical hosts to eliminate a single point of failure.
Medium Severity
SDN relies heavily on control messages between a controller and the forwarding devices for network convergence. The controller uses node and link state discovery information to calculate and determine optimum pathing within the SDN network infrastructure based on application, business, and security policies. Operating in the proactive flow instantiation mode, the SDN controller populates forwarding tables to the SDN-aware forwarding devices. At times, the SDN controller must function in reactive flow instantiation mode; that is, when a forwarding device receives a packet for a flow not found in its forwarding table, it must send it to the controller to receive forwarding instructions. With total dependence on the SDN controller for determining forwarding decisions and path optimization within the SDN infrastructure for both proactive and reactive flow modes of operation, having a single point of failure is not acceptable. A controller failure with no failover backup leaves the network in an unmanaged state. Hence, it is imperative that the SDN controllers are deployed as clusters on separate physical hosts to guarantee network high availability.
NET-SDN-014
1 Rule
Physical devices hosting an SDN controller must be connected to two switches for high availability.
Low Severity
SDN relies heavily on control messages between a controller and the forwarding devices for network convergence. The controller uses node and link state discovery information to calculate and determine optimum pathing within the SDN network infrastructure based on application, business, and security policies. Operating in the proactive flow instantiation mode, the SDN controller populates forwarding tables to the SDN-aware forwarding devices. At times, the SDN controller must function in reactive flow instantiation mode; that is, when a forwarding device receives a packet for a flow not found in its forwarding table, it must send it to the controller to receive forwarding instructions. With total dependence on the SDN controller for determining forwarding decisions and path optimization within the SDN infrastructure for both proactive and reactive flow modes of operation, having a single point of failure is not acceptable. Hence, it is imperative that all physical devices hosting an SDN controller are connected to two switches using NIC teaming to guarantee network high availability.
NET-SDN-015
1 Rule
SDN-enabled routers and switches must rate limit the number of unknown data plane packets punted to the SDN controller.
Low Severity
SDN-enabled forwarding devices are dependent on the SDN controller for their forwarding tables as well as their configuration and service parameters. The controller uses node and link state discovery information to calculate and determine optimum pathing within the SDN network infrastructure based on application, business, and security policies. Operating in the proactive flow instantiation mode, the SDN controller pre-populates forwarding tables to the forwarding devices. At times, the SDN controller must function in reactive flow instantiation mode; that is, when a forwarding device receives a packet for a flow not found in its forwarding table, it must send or punt it to the controller to receive forwarding instructions. Upon receiving the punted packet, the controller must determine how to forward the packet, create a rule, and populate a new forwarding table to the forwarding device. High rates of punted packets result in excessive controller CPU and memory utilization. Hence, a denial-of-service attack targeting the SDN controller can be perpetrated, either inadvertently or maliciously, by sending high rates of packets for new flows that must be punted to the controller.
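The rate limit can be as simple as a token bucket applied before the punt path, as in the hedged sketch below; the rate and burst values are illustrative assumptions, and packets over budget are dropped locally instead of being sent to the controller.

    import time

    class TokenBucket:
        def __init__(self, rate_pps: float, burst: int):
            self.rate, self.capacity = rate_pps, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at the burst size
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # over budget: drop rather than punt to the controller

    punt_limiter = TokenBucket(rate_pps=100, burst=200)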
NET-SDN-016
1 Rule
Servers hosting SDN controllers must have logging enabled.
Medium Severity
It is critical for both network and security personnel to be aware of the state of the SDN infrastructure to maintain network stability. Correlating events logged by the SDN controller with network state information provided by the SDN-enabled components is essential to compiling an accurate risk assessment and troubleshooting network outages.
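A minimal sketch of host-level log forwarding to a central collector using Python's standard syslog handler; the collector address and the sample message are illustrative assumptions.

    import logging
    import logging.handlers

    logger = logging.getLogger("sdn-controller-host")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=("logs.example.mil", 514))
    handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.info("northbound API authentication failures exceeded threshold")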
NET-SDN-018
1 Rule
Servers hosting SDN controllers must have an HIDS implemented to detect unauthorized changes.
Medium Severity
The SDN controller is the backbone of the SDN infrastructure. If the server hosting the SDN controller is breached or if unauthorized changes are made to the device, the SDN controller may not have the appropriate resources to function properly or may even be disabled. A host intrusion detection system (HIDS) can monitor and report system configuration changes and prevent malicious or anomalous activity.
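The file-integrity core of an HIDS can be sketched as follows; a production HIDS (OSSEC, AIDE, and similar tools) also watches processes, accounts, and kernel state, and the paths here are illustrative assumptions.

    import hashlib
    from pathlib import Path

    WATCHED = [Path("/etc/sdn/controller.conf"), Path("/usr/local/bin/controller")]

    def snapshot() -> dict:
        """SHA-256 digest of every watched file that currently exists."""
        return {p: hashlib.sha256(p.read_bytes()).hexdigest()
                for p in WATCHED if p.exists()}

    baseline = snapshot()
    # ... later, on a schedule ...
    for path, digest in snapshot().items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} changed since baseline")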
NET-SDN-020
1 Rule
All Virtual Extensible Local Area Network (VXLAN)-enabled switches must be configured with the appropriate VXLAN network identifier (VNI) to ensure VMs can send and receive all associated traffic for their Layer 2 domain.
Medium Severity
VXLAN is a Layer 2 network that overlays a Layer 3 network; that is, it creates a Layer 2 adjacency across a routed IP fabric. Each Layer 2 overlay network is known as a VXLAN segment and is identified by a unique segment ID called a VXLAN Network Identifier (VNI). The VXLAN network enables virtual machines with the same VNI deployed on different hosts to communicate with each other. Virtual machines are identified uniquely by the combination of the MAC addresses of their virtual network interface card (NIC) and VNI. Hence, it is possible to have duplicate MAC addresses within the SDN infrastructure while in different VXLAN segments. Within the VXLAN architecture, virtual tunnel endpoints (VTEPs) perform the encapsulation and de-encapsulation of the Layer 2 traffic. The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. The VTEP must be configured with the appropriate VNIs to enable the VTEP to build forwarding tables for active VXLAN segments (Layer 2 domains) by learning MAC addresses per VNI packet flows.
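To make the VNI's place concrete, the sketch below packs the 8-byte VXLAN header defined in RFC 7348, where the 24-bit VNI sits in bytes 4 through 6; real encapsulation is performed by the VTEP data plane, so this is purely illustrative.

    import struct

    def vxlan_header(vni: int) -> bytes:
        assert 0 <= vni < 2**24, "VNI is a 24-bit identifier"
        # byte 0: flags with the I bit (0x08) set to mark a valid VNI;
        # bytes 1-3 reserved; bytes 4-6 VNI; byte 7 reserved
        return struct.pack("!B3xI", 0x08, vni << 8)

    assert vxlan_header(5001).hex() == "0800000000138900"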
NET-SDN-021
1 Rule
Virtual Extensible Local Area Network (VXLAN) identifiers must be mapped to the appropriate VLAN identifiers.
Medium Severity
VXLAN is a Layer 2 network that overlays a Layer 3 network; that is, it creates a Layer 2 adjacency across a routed IP fabric. Each Layer 2 overlay network is known as a VXLAN segment and is identified by a unique segment ID called a VXLAN Network Identifier (VNI). The VXLAN network enables virtual machines with the same VNI deployed on different hosts to communicate with each other. Virtual machines are identified uniquely by the combination of the MAC addresses of their virtual network interface card (NIC) and VNI. Hence, it is possible to have duplicate MAC addresses within the SDN infrastructure while in different VXLAN segments. Within the VXLAN architecture, virtual tunnel endpoints (VTEPs) perform the encapsulation and de-encapsulation of the Layer 2 traffic. The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. VTEP-enabled switches will determine the VNI to insert into the VXLAN header based on the 802.1Q VLAN tag of each frame received from the hypervisor host connected via trunk link or the VLAN assignment of an access switchport. The mapping of VLAN to VNI is configured on the switch. Since the VNI is used to segregate all Layer 2 domains, the correct mapping is critical to ensure all traffic for each Layer 2 domain within the SDN infrastructure is forwarded correctly and that broadcast and multicast traffic does not leak into the wrong domain.
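A hedged sketch of the mapping and the consistency property it must satisfy; the VLAN and VNI numbers are illustrative assumptions.

    # VLAN-to-VNI map of the kind configured on a VTEP-enabled switch
    VLAN_TO_VNI = {100: 10100, 200: 10200, 300: 10300}

    def vni_for_frame(vlan_tag: int) -> int:
        try:
            return VLAN_TO_VNI[vlan_tag]
        except KeyError:
            raise ValueError(f"VLAN {vlan_tag} has no VNI mapping; do not forward")

    # A duplicate VNI would merge two Layer 2 domains and leak broadcast traffic.
    assert len(set(VLAN_TO_VNI.values())) == len(VLAN_TO_VNI)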
NET-SDN-022
1 Rule
The proper multicast group for each Virtual Extensible Local Area Network (VXLAN) identifier must be mapped to the appropriate virtual tunnel endpoint (VTEP) so the VTEP will join the associated multicast groups.
Medium Severity
VXLAN is a Layer 2 network that overlays a Layer 3 network; that is, it creates a Layer 2 adjacency across a routed IP fabric. Each Layer 2 overlay network is known as a VXLAN segment and is identified by a unique segment ID called a VXLAN Network Identifier (VNI). The VXLAN network enables virtual machines with the same VNI deployed on different hosts to communicate with each other. Virtual machines are identified uniquely by the combination of the MAC addresses of their virtual network interface card (NIC) and VNI. Hence, it is possible to have duplicate MAC addresses within the SDN infrastructure while in different VXLAN segments. Within the VXLAN architecture, VTEPs perform the encapsulation and de-encapsulation of the Layer 2 traffic. The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. Each VXLAN segment is mapped to an IP multicast group in the transport IP network. Hence, VTEPs join IP multicast groups based on VNI membership. This is the method by which VTEPs can discover other VTEPs belonging to the same VXLAN segment. Each VTEP-enabled switch is configured to join the applicable multicast group for each VNI through Internet Group Management Protocol (IGMP). The IGMP joins will trigger Protocol Independent Multicast (PIM) joins, thereby signaling a multicast distribution tree for each group through the transport network based on the locations of participating VTEPs. The multicast group is used to transmit broadcast, unknown unicast, and multicast traffic through the IP network for each VXLAN segment, limiting all Layer 2 flooding to those switches that have end systems participating in the same VXLAN segment. Because the VNI is used to segregate all Layer 2 domains via the VXLAN header encapsulation by the VTEPs, and discovery of each VTEP member is dependent on a specific multicast group, it is imperative that the correct mapping of multicast groups to VNI is configured.
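For a software VTEP, joining the group mapped to a VNI is what emits the IGMP membership report described above; a minimal sketch using the standard socket API follows, with the group addresses as illustrative assumptions.

    import socket
    import struct

    VNI_TO_GROUP = {10100: "239.1.1.100", 10200: "239.1.1.200"}

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # ip_mreq: multicast group address + local interface (0.0.0.0 = default)
    mreq = struct.pack("4s4s",
                       socket.inet_aton(VNI_TO_GROUP[10100]),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)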
NET-SDN-024
1 Rule
The virtual tunnel endpoint (VTEP) must be dual-homed to two physical network nodes.
Low Severity
If uplink connectivity for the VTEP to the Virtual Extensible Local Area Network (VXLAN) transport network fails, traffic to and from the VM servers resident on the affected hypervisor host is dropped. Whether it is a hardware (VXLAN-enabled switch) or software (hypervisor resident) VTEP, dedicating a pair of physical uplinks from the VTEP to two separate network nodes adds high availability and resiliency to the VXLAN implementation. If either an uplink or one of the attached network nodes fails, the VTEP would still have connectivity to the underlying IP network for VXLAN traffic.
NET-SDN-025
1 Rule
A secondary IP address must be specified for the virtual tunnel endpoint (VTEP) loopback interface when Virtual Extensible Local Area Network (VXLAN)-enabled switches are deployed in a multi-chassis configuration.
Low Severity
A multi-chassis configuration (i.e., vPC domain, MLAG, MCLAG, etc.) can be used to attach a hypervisor host to a pair of VXLAN-enabled switches. For example, a vPC consists of two vPC peer switches connected by a vPC peer link. A vPC domain is formed by the two switches; one switch is primary and the other is secondary. A switch can only be part of one vPC domain, and only two switches can make up a vPC domain. A vPC allows links that are physically connected to two different switches to appear as a single port channel to a third device, which can be another switch or a server that supports Link Aggregation Control Protocol (LACP) as defined in IEEE 802.1AX, 802.1aq, and 802.3ad. With vPC deployment, the loopback interface that is acting as the source-interface for the VTEP will use the secondary IP address to function as the anycast IP address if the hypervisor host is dual-attached through the vPC. When a host is single-attached (orphan port), the VXLAN-encapsulated traffic will be sent using the loopback’s primary address.
NET-SDN-027
1 Rule
Two or more edge gateways must be deployed connecting the network virtualization platform (NVP) and the physical network.
Low Severity
An edge gateway is deployed to allow north-south traffic to flow between the virtualized network and the physical network, including destinations outside of the data center or enclave boundaries. The gateway establishes routing adjacencies between the virtual routers and physical routers. The gateway can also filter the north-south traffic to enforce security policies for communication between the physical and virtual workloads. Deploying two or more edge gateways eliminates the risk of a single point of failure, thereby ensuring there is always reachability between virtual machines and the physical network infrastructure and reducing the risk of black-holing north-south traffic.
NET-SDN-028
1 Rule
Virtual edge gateways must be deployed across multiple hypervisor hosts.
Low Severity
An edge gateway is deployed to allow north-south traffic to flow between the virtualized network and the physical network, including destinations outside of the data center or enclave boundaries. The gateway can also filter the north-south traffic to enforce security policies for communication between the physical and virtual workloads. If the edge gateways deployed as virtual machines are resident on the same host, the host becomes a single point of failure for all communication between the virtual workload and the physical network infrastructure. Deploying the edge gateways across multiple hypervisor hosts eliminates the risk of a single point of failure, thereby ensuring there is always reachability between virtual machines and the physical network infrastructure and reducing the risk of black-holing north-south traffic.
NET-SDN-029
1 Rule
The virtual edge gateways must be deployed with routing adjacencies established with two or more physical routers.
Low Severity
An edge gateway is deployed to allow north-south traffic to flow between the virtualized network and the physical network, including destinations outside of the data center or enclave boundaries. The gateway establishes routing adjacencies between the virtual routers and physical routers. The gateway can also filter the north-south traffic to enforce security policies for communication between the physical and virtual workloads. Implementing the edge gateway in either active/standby or equal-cost multipath (ECMP) mode ensures there is always a virtual router to forward north-south traffic, assuming there is always a routing adjacency with a router in the physical network infrastructure. Having an adjacency with only one physical router creates a single point of failure regardless of the number of links deployed; if that node failed, there would be no connectivity between the virtual and physical workloads. Hence, it is imperative that each edge gateway is deployed with connectivity to two physical routers.