OpenStack L3-GRE Network Structure Analysis Notes (Icehouse), Part 2

The previous article analyzed the compute node's network structure for a single tenant under different network scenarios, but it did not describe how packets travel out to the public network once they reach the network node. The network layout is as follows:

[Figure: network topology]

From the previous article we know that, for this tenant's network, the VM instances on the compute node are effectively joined at Layer 2 to the virtual router on the network node through the GRE tunnel. The logical structure looks like this:

[Figure: logical Layer 2 topology between the compute node and the network node]

So how exactly is this Layer 2 domain stitched together?

Let's look at the OVS configuration on the network node:

[Figure: OVS bridges on the network node]
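To reproduce this view on your own deployment, the OVS bridges and their ports can be listed directly on the network node. A minimal sketch follows, assuming the default br-int/br-tun/br-ex bridge names used by the Icehouse OVS agent:

    # List all OVS bridges, their ports and the GRE tunnel endpoints on the network node
    ovs-vsctl show

    # Or just the ports of the integration and tunnel bridges
    ovs-vsctl list-ports br-int
    ovs-vsctl list-ports br-tun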

When a packet arrives at br-tun through the GRE tunnel, it is handed to br-int over the patch pair that links the two bridges (the patch-int port on br-tun connects to the patch-tun port on br-int). Because each tunnel ID is mapped to a local VLAN on br-int, the packet is tagged correctly and delivered to the right VLAN segment. Each of these VLANs has a qr-* interface that connects it to the virtual router, so the packet reaches the router interface, which is in fact that network's gateway. The virtual router then forwards the packet to its qg-* interface (the router's default route points to the gateway of the real external network). The qg-* interface sits in br-ex, which is bridged to the physical NIC eth2, and the packet is sent out.
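The tunnel-ID-to-local-VLAN mapping lives in the OpenFlow tables of br-tun. A hedged way to inspect it is sketched below; the exact table numbers and actions vary between OVS agent versions, so treat it as a starting point rather than a definitive map:

    # Dump the flows on br-tun: look for rules that match an incoming GRE tun_id
    # and rewrite it to a local VLAN tag with mod_vlan_vid (and the reverse on egress)
    ovs-ofctl dump-flows br-tun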


You can match up each of the interfaces described above by looking at the router's interfaces:
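One hedged way to list those interfaces, assuming a router named router1 (substitute your own router name or UUID):

    # The Neutron ports attached to the router; the qr-* and qg-* devices map to these port IDs
    neutron router-list
    neutron router-port-list router1

    # The same interfaces seen from inside the router's network namespace
    ip netns | grep qrouter
    ip netns exec qrouter-<router-uuid> ip addr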

So how does this virtual router actually forward packets?
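The forwarding decision is ordinary Linux routing inside the router's namespace. A minimal sketch (the qrouter UUID is a placeholder):

    # Routing table inside the router namespace: the default route points at the external
    # network's gateway via the qg-* interface, and each tenant subnet is directly
    # connected via a qr-* interface
    ip netns exec qrouter-<router-uuid> ip route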

 

When packets pass through the router on their way to the external network, the iptables rules in the virtual router's namespace perform the relevant NAT operations:
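To see those rules yourself, the NAT table of the router namespace can be dumped as sketched below (the qrouter UUID is again a placeholder):

    # Expect a source-NAT rule for the tenant subnets plus per-floating-IP SNAT/DNAT pairs
    ip netns exec qrouter-<router-uuid> iptables -t nat -S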

From these rules we can see that the system has also created a one-to-one static mapping between 192.168.0.3 and 10.10.20.7. Traffic from 10.10.20.7 is therefore SNATed to 192.168.0.3 after being routed, and anyone reaching 192.168.0.3 from outside is translated straight to 10.10.20.7, which is how an internal VM becomes reachable from the external network.


In the Horizon dashboard, under Access & Security, add an inbound TCP rule with destination port 22.
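The same rule can also be added from the CLI. A hedged sketch, assuming the rule goes into the tenant's default security group:

    # Allow inbound SSH (TCP/22) from anywhere on the default security group
    neutron security-group-rule-create --direction ingress --protocol tcp \
        --port-range-min 22 --port-range-max 22 \
        --remote-ip-prefix 0.0.0.0/0 default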


Read more

OpenStack L3-GRE Network Structure Analysis Notes (Icehouse), Part 1

The experimental network topology is as follows:

[Figure: experiment network topology]

First, create an L3 router and set up three networks for the tenant.
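A hedged sketch of that setup with the Icehouse neutron CLI follows; the router name, network names and CIDRs are illustrative and should be replaced with your own values:

    # Create the tenant router
    neutron router-create router1

    # Create three tenant networks and attach each subnet to the router
    for i in 1 2 3; do
        neutron net-create private$i
        neutron subnet-create --name private$i-subnet private$i 10.10.$((i * 10)).0/24
        neutron router-interface-add router1 private$i-subnet
    done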

On the network node:

In br-int, the tap* ports are the DHCP interfaces and the qr-* ports are the tenant router's interfaces. There is currently only one tenant, and you can see that this L3 router has three networks attached. The actual network topology looks like this:

[Figure: actual network topology on the network node]

At this point the virtual router actually has four interfaces. You can think of br-int as the tenant's internal private switch, carved into three separate VLANs, each of which has one interface connected to the virtual router.

The blue network on the left of the router can be thought of as the provider network; by plugging into br-ex, the router's blue network is bridged to the physical interface eth2:

In fact, the isolation between these networks relies on network namespaces:
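A quick, hedged way to see those namespaces on the network node (the UUIDs in the names will of course differ per deployment):

    # One qdhcp-<network-uuid> namespace per network, plus one qrouter-<router-uuid>
    # namespace for the tenant router
    ip netns

    # Interfaces inside one of the DHCP namespaces
    ip netns exec qdhcp-<network-uuid> ip addr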

Each network has its own namespace, and the router has its own namespace as well. Functionally this is similar to an F5 route domain: each one provides its own forwarding context.

Inside the network node, the relationship between the DHCP interfaces and the namespaces is as follows:

[Figure: DHCP interfaces and namespaces inside the network node]

On the compute node:

At this point the compute node has not launched any instances yet, so there are no VM-related interfaces. The network interfaces on the compute node's hypervisor look like this:

 

Next, launch an instance on the private1 network:
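For reference, a hedged CLI equivalent of launching that instance; the flavor, image and instance names are placeholders, and the net-id is private1's UUID:

    # Find private1's network UUID, then boot a small instance attached to it
    neutron net-list
    nova boot --flavor m1.tiny --image cirros-0.3.2 \
        --nic net-id=<private1-net-uuid> test-vm1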


The instance automatically obtains the IP address 10.10.10.7, and the network topology now looks like this:

[Figure: network topology with the instance attached to private1]

Log in to the instance and take a look:


You can see that the instance obtained its IP address and default gateway automatically via DHCP.


From inside the instance, pinging the gateway and pinging the router's external interface both succeed, which shows the router is working properly.

At this point, the network on the compute node's hypervisor has changed to:

You can see that several new network devices have appeared on the hypervisor: qbr***, qvb***, qvo***, and tap***. So how do these interfaces connect a VM to the physical (provider) network?
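A hedged way to trace that chain on the compute node is sketched below: the tap device plugs into a per-instance Linux bridge (qbr, where security-group iptables rules are applied), which connects through the qvb/qvo veth pair into br-int. The XXXX suffixes are placeholders for the per-port identifiers:

    # Linux bridges on the hypervisor: each qbrXXXX holds one tapXXXX and one qvbXXXX
    brctl show

    # The qvoXXXX end of each veth pair is a port on the OVS integration bridge
    ovs-vsctl list-ports br-int

    # Confirm which bridge a given qvo port belongs to
    ovs-vsctl port-to-br qvoXXXX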

Read more

Neutron Networking: Neutron Routers and the L3 Agent

Neutron L3 Agent / What is it and how does it work?

Neutron has an API extension that allows administrators and tenants to create “routers” that connect to L2 networks. The agent that implements these routers, known as the “neutron-l3-agent”, uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. Like the DHCP namespaces that exist for every network defined in Neutron, each router will have its own namespace with a name based on its UUID.

Network Design / Implementing Neutron Routers

While deploying instances using provider networks is suitable in many cases, there is a limit to the scalability of such environments. Multiple flat networks require corresponding bridge interfaces, and using VLANs may require manual switch and gateway configuration. All routing is handled by an upstream routing device such as a router or firewall, and that device may be responsible for NAT as well. Any benefits are quickly outweighed by manual control and configuration processes.

Using the neutron-l3-agent allows admins and tenants to create routers that handle routing between directly-connected LAN interfaces (usually tenant networks, GRE or VLAN) and a single WAN interface (usually a FLAT or VLAN provider network). While it is possible to leverage a single bridge for this purpose (as is often the documented solution), the ability to use already-existing provider networks is my preferred solution.
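A hedged sketch of attaching a router to an already-existing provider network; the router and subnet names are assumptions, and the provider network must carry router:external=True to be usable as a gateway:

    # Use the existing GATEWAY_NET provider network as the router's WAN side
    neutron router-gateway-set tenant-router GATEWAY_NET

    # Plug a tenant subnet into the router's LAN side
    neutron router-interface-add tenant-router tenant-subnet-1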

Neutron L3 Agent / Floating IPs

One of the limitations of strictly using provider networks to provide connectivity to instances is that a method to provide public connectivity directly to instances must be handled by a device outside of Neutron’s control. In the previous walkthroughs, the Cisco ASA in the environment handled both 1:1 static NAT and a many-to-one PAT for outbound connectivity.

In nova-networking, the concept of a “floating ip” is best understood as a 1:1 NAT translation that could be modified on-the-fly and “float” between instances. The IP address used as the floating IP was an address in the same L2 domain as the bridge of the hypervisors (or something routed to the hypervisor.) Assuming multi_host was true, an iptables SNAT/DNAT rule was created on the hypervisor hosting the instance. If the user wanted to reassociate the floating IP with another instance, the rule was removed and reapplied on the appropriate hypervisor using the same floating IP and the newly-associated instance address. Instance IPs never changed – only NAT rules.

Neutron’s implementation of floating IPs differs greatly from nova-networks, but retains many of the same concepts and functionality. Neutron routers are created that serve as the gateway for instances and are scheduled to a node running neutron-l3-agent. Rather than manipulating iptables on the hypervisors themselves, iptables in the router namespace is modified to perform the appropriate NAT translations. The floating IPs themselves are procured from the provider network that is providing the router with its public connectivity. This means floating IPs are limited to the same L3 network as the router’s WAN IP address.
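A hedged sketch of that workflow with the neutron CLI; GATEWAY_NET is the external network from this layout, and the floating IP and port IDs are placeholders for values returned by the first command and by a port listing:

    # Allocate a floating IP from the external network, then map it to an instance's port;
    # the L3 agent installs the matching SNAT/DNAT rules in the router namespace
    neutron floatingip-create GATEWAY_NET
    neutron floatingip-associate <floatingip-id> <instance-port-id>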

A logical representation of this concept can be seen below:

While logically it appears that floating IPs are associated directly with instances, in reality a floating IP is associated with a Neutron port. Other port associations include:

  • security groups
  • fixed ips
  • mac addresses

A port is associated with the instance indicated by the “device_id” field in the output of the “port-show” command:
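For example, a hedged lookup (the port UUID is a placeholder):

    # Show a port's details; device_id holds the UUID of the instance (or other device,
    # such as a router or DHCP agent) that owns the port
    neutron port-show <port-uuid>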

Networking / Layout

For this installment, a Cisco ASA 5510 will once again serve as the lead gateway device. In fact, I’ll be building upon the configuration already in place from the flat and/or VLAN networking demonstration in the previous installments:

10.240.0.0/24 will continue to serve as the management network for hosts. A single VLAN network will be created to demonstrate the ability to use either a flat or VLAN network.

  • VLAN 10 – MGMT – 10.240.0.0/24
  • VLAN 20 – GATEWAY_NET – 192.168.100.0/22

A single interface on the servers will be used for both management and provider network connectivity. Neutron works with Open vSwitch to build peer-to-peer tunnels between hosts that serve to carry encapsulated tenant network traffic.

Networking / L3 Agent Configuration

Read more

Neutron Networking: Simple Flat Network

In this multi-part walkthrough series, I intend to dive into the various components of the OpenStack Neutron project, and to also provide working examples of multiple networking configurations for clouds built with Rackspace Private Cloud powered by OpenStack on Ubuntu 12.04 LTS. When possible, I’ll provide configuration file examples for those following along on an install from source.

In the previous installment, Neutron Networking: The Building Blocks of an OpenStack Cloud, I laid the foundation of the Neutron networking model that included terminology, concepts, and a brief description of services and capabilities. In this second installment, I’ll describe how to build a simple flat network consisting of a few servers and limited networking gear. Future installments will include VLAN-based provider and tenant networks, GRE-based tenant networks, Open vSwitch troubleshooting, and more.

New to OpenStack? Rackspace offers a complete open-source package, Rackspace Private Cloud Software, that you’re welcome to use at no cost. Download and follow along.

Getting Started / What is a flat network?

For those coming from previous Essex- or Folsom-based Rackspace Private Cloud installations, flat networking in Neutron resembles the Flat DHCP model in Nova networking. For those new to the game, a flat network is one in which all instances reside on the same network (which may also be shared by the hosts). No vlan tagging takes place, and Neutron handles the assignment of IPs to instances using DHCP. Therefore, it’s possible to use unmanaged SOHO network switches to build a simple Neutron-based cloud, since there’s no need to configure switchports.

The diagram above represents a simple Neutron networking configuration that utilizes a flat provider network for connectivity of instances to the Internet.
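To make the example concrete, here is a hedged sketch of defining such a flat provider network; the network name, the physnet1 label and the subnet values are assumptions and must match your plugin configuration and real upstream gateway:

    # A shared flat provider network mapped to physnet1, with Neutron handing out
    # addresses from part of the 10.240.0.0/24 provider range
    neutron net-create flat-net --shared \
        --provider:network_type flat --provider:physical_network physnet1
    neutron subnet-create flat-net 10.240.0.0/24 --name flat-subnet \
        --gateway 10.240.0.1 \
        --allocation-pool start=10.240.0.100,end=10.240.0.200 \
        --dns-nameserver 8.8.8.8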

Networking / Layout

In the following diagram, a Cisco ASA 5510 is serving as the lead gateway device, with a Cisco 2960G access switch connecting the firewall and servers together via VLAN 1. 10.240.0.0/24 was chosen as the management network for hosts, but will also serve as a provider network for instances. We’ll be using a single interface on the servers for both management and provider network connectivity.

Networking / Configuration of Physical Devices

Read more

Neutron Networking: VLAN Provider Networks

In this multi-part blog series I intend to dive into the various components of the OpenStack Neutron project, and to also provide working examples of networking configurations for clouds built with Rackspace Private Cloud powered by OpenStack on Ubuntu 12.04 LTS.

In the previous installment, Neutron Networking: Simple Flat Network, I demonstrated an easy method of providing connectivity to instances using an untagged flat network. In this third installment, I’ll describe how to build multiple provider networks using 802.1q vlan tagging.

Getting Started / VLAN vs Flat Design

One of the negative aspects of a flat network is that it’s one large broadcast domain. Virtual Local Area Networks, or VLANs, aim to solve this problem by creating smaller, more manageable broadcast domains. From a security standpoint, flat networks provide malicious users the potential to see the entire network from a single host.

VLAN segregation is often used in a web hosting environment where there’s one vlan for web servers (DMZ) and another for database servers (INSIDE). Neither network can communicate directly without a routing device to route between them. With proper security mechanisms in place, if a server becomes compromised in the DMZ it does not have the ability to determine or access the resources in the INSIDE vlan.

The diagrams below are examples of traditional flat and vlan-segregated networks:

VLAN Tagging / What is it and how does it work?

At a basic level on a Cisco switch there are two types of switchports: access ports and trunk ports. Switchports configured as access ports are placed into a single vlan and can communicate with other switchports in the same vlan. Switchports configured as trunks allow traffic from multiple vlans to traverse a single interface. The switch adds a tag to the Ethernet frame that contains the corresponding vlan ID as the frame enters the trunk. As the frame exits the trunk on the other side, the vlan tag is stripped and the traffic forwarded to its destination. Common uses of trunk ports include uplinks to other switches and more importantly in our case, hypervisors serving virtual machines from various networks.

VLAN Tagging / How does this apply to Neutron?

In the previous installment I discussed flat networks and their lack of vlan tagging. All hosts in the environment were connected to access ports in the same vlan, thereby allowing hosts and instances to communicate with one another on the same network. VLANs allow us to not only separate host and instance traffic, but to also create multiple networks for instances similar to the DMZ and INSIDE scenarios above.

Neutron allows users to create multiple provider or tenant networks using vlan IDs that correspond to real vlans in the data center. A single OVS bridge can be utilized by multiple provider and tenant networks using different vlan IDs, allowing instances to communicate with other instances across the environment, and also with dedicated servers, firewalls, load balancers and other networking gear on the same Layer 2 vlan.
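As a hedged illustration, a provider network tied to a real data-center VLAN (here VLAN 200, matching the DMZ network defined in the layout that follows) might be created like this; the network name and the physnet1 label are assumptions:

    # Shared provider network mapped to VLAN 200 on physnet1
    neutron net-create DMZ_NET --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 200
    neutron subnet-create DMZ_NET 192.168.100.0/24 --name dmz-subnet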

Networking / Layout

For this installment, a Cisco ASA 5510 will once again serve as the lead gateway device. In fact, I’ll be building upon the configuration already in place from the flat networking demonstration in the previous installment. 10.240.0.0/24 will continue to serve as the management network for hosts and the flat provider network, and two new provider networks will be created:

  • VLAN 100 – MGMT – 10.240.0.0/24 (Existing)
  • VLAN 200 – DMZ – 192.168.100.0/24 (NEW)
  • VLAN 300 – INSIDE – 172.16.0.0/24 (NEW)

A single interface on the servers will be used for both management and provider network connectivity.

Networking / Configuration of Network Devices

Read more