OVN Tutorial
============

This tutorial is intended to give you a tour of the basic OVN features using
`ovs-sandbox` as a simulated test environment.  It’s assumed that you have an
understanding of OVS before going through this tutorial.  OVN is covered in
detail in [ovn-architecture(7)], but this tutorial lets you quickly see it in
action.

Getting Started
---------------

For some general information about `ovs-sandbox`, see the “Getting Started”
section of [Tutorial.md].

`ovs-sandbox` does not include OVN support by default.  To enable OVN, you must
pass the `--ovn` flag.  For example, if running it straight from the ovs git
tree you would run:

    $ make sandbox SANDBOXFLAGS="--ovn"

Running the sandbox with OVN enabled makes the following additional changes to
the environment:

  1. Creates the `OVN_Northbound` and `OVN_Southbound` databases as described in
     [ovn-nb(5)] and [ovn-sb(5)].

  2. Creates the `hardware_vtep` database as described in [vtep(5)].

  3. Runs the [ovn-northd(8)], [ovn-controller(8)], and [ovn-controller-vtep(8)]
     daemons.

  4. Makes OVN and VTEP utilities available for use in the environment,
     including [vtep-ctl(8)], [ovn-nbctl(8)], and [ovn-sbctl(8)].

Note that each of these demos assumes you start with a fresh sandbox
environment.  Re-run `ovs-sandbox` before starting each section.

1) Simple two-port setup
------------------------

This first environment is the simplest OVN example.  It demonstrates using OVN
with a single logical switch that has two logical ports, both residing on the
same hypervisor.

Start by running the setup script for this environment.

[View ovn/env1/setup.sh][env1setup].

    $ ovn/env1/setup.sh
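
The script is just a handful of `ovn-nbctl` and `ovs-vsctl` calls.  A minimal
sketch of the same steps, assuming the `lswitch`/`lport` command set used
throughout this tutorial (see the real script for the exact commands):

    # Create a logical switch with two logical ports.
    ovn-nbctl lswitch-add sw0
    ovn-nbctl lport-add sw0 sw0-port1
    ovn-nbctl lport-add sw0 sw0-port2
    ovn-nbctl lport-set-addresses sw0-port1 00:00:00:00:00:01
    ovn-nbctl lport-set-addresses sw0-port2 00:00:00:00:00:02

    # Bind each logical port to a local OVS port by setting iface-id.
    ovs-vsctl add-port br-int lport1 -- \
        set Interface lport1 external_ids:iface-id=sw0-port1
    ovs-vsctl add-port br-int lport2 -- \
        set Interface lport2 external_ids:iface-id=sw0-port2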
51
52 You can use the `ovn-nbctl` utility to see an overview of the logical topology.
53
54     $ ovn-nbctl show
55     lswitch 78687d53-e037-4555-bcd3-f4f8eaf3f2aa (sw0)
56         lport sw0-port1
57             addresses: 00:00:00:00:00:01
58         lport sw0-port2
59             addresses: 00:00:00:00:00:02
60
61 The `ovn-sbctl` utility can be used to see into the state stored in the
62 `OVN_Southbound` database.  The `show` command shows that there is a single
63 chassis with two logical ports bound to it.  In a more realistic
64 multi-hypervisor environment, this would list all hypervisors and where all
65 logical ports are located.
66
67     $ ovn-sbctl show
68     Chassis “56b18105-5706-46ef-80c4-ff20979ab068”
69         Encap geneve
70             ip: “127.0.0.1”
71         Port_Binding “sw0-port1”
72         Port_Binding “sw0-port2”
73
74 OVN creates logical flows to describe how the network should behave in logical
75 space.  Each chassis then creates OpenFlow flows based on those logical flows
76 that reflect its own local view of the network.  The `ovn-sbctl` command can
77 show the logical flows.
78
79     $ ovn-sbctl lflow-list
80     Datapath: d3466847-2b3a-4f17-8eb2-34f5b0727a70  Pipeline: ingress
81       table=0(port_sec), priority=  100, match=(eth.src[40]), action=(drop;)
82       table=0(port_sec), priority=  100, match=(vlan.present), action=(drop;)
83       table=0(port_sec), priority=   50, match=(inport == "sw0-port1" && eth.src == {00:00:00:00:00:01}), action=(next;)
84       table=0(port_sec), priority=   50, match=(inport == "sw0-port2" && eth.src == {00:00:00:00:00:02}), action=(next;)
85       table=1(     acl), priority=    0, match=(1), action=(next;)
86       table=2( l2_lkup), priority=  100, match=(eth.dst[40]), action=(outport = "_MC_flood"; output;)
87       table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:01), action=(outport = "sw0-port1"; output;)
88       table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:02), action=(outport = "sw0-port2"; output;)
89     Datapath: d3466847-2b3a-4f17-8eb2-34f5b0727a70  Pipeline: egress
90       table=0(     acl), priority=    0, match=(1), action=(next;)
91       table=1(port_sec), priority=  100, match=(eth.dst[40]), action=(output;)
92       table=1(port_sec), priority=   50, match=(outport == "sw0-port1" && eth.dst == {00:00:00:00:00:01}), action=(output;)
93       table=1(port_sec), priority=   50, match=(outport == "sw0-port2" && eth.dst == {00:00:00:00:00:02}), action=(output;)
94
95 Now we can start taking a closer look at how `ovn-controller` has programmed the
96 local switch.  Before looking at the flows, we can use `ovs-ofctl` to verify the
97 OpenFlow port numbers for each of the logical ports on the switch.  The output
98 shows that `lport1`, which corresponds with our logical port `sw0-port1`, has an
99 OpenFlow port number of `1`.  Similarly, `lport2` has an OpenFlow port number of
100 `2`.
101
102     $ ovs-ofctl show br-int
103     OFPT_FEATURES_REPLY (xid=0x2): dpid:00003e1ba878364d
104     n_tables:254, n_buffers:256
105     capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
106     actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
107      1(lport1): addr:aa:55:aa:55:00:07
108          config:     PORT_DOWN
109          state:      LINK_DOWN
110          speed: 0 Mbps now, 0 Mbps max
111      2(lport2): addr:aa:55:aa:55:00:08
112          config:     PORT_DOWN
113          state:      LINK_DOWN
114          speed: 0 Mbps now, 0 Mbps max
115      LOCAL(br-int): addr:3e:1b:a8:78:36:4d
116          config:     PORT_DOWN
117          state:      LINK_DOWN
118          speed: 0 Mbps now, 0 Mbps max
119     OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
120
121 Finally, use `ovs-ofctl` to see the OpenFlow flows for `br-int`.  Note that some
122 fields have been omitted for brevity.
123
124     $ ovs-ofctl -O OpenFlow13 dump-flows br-int
125     OFPST_FLOW reply (OF1.3) (xid=0x2):
126      table=0, priority=100,in_port=1 actions=set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
127      table=0, priority=100,in_port=2 actions=set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)
128      table=16, priority=100,metadata=0x1,dl_src=01:00:00:00:00:00/01:00:00:00:00:00 actions=drop
129      table=16, priority=100,metadata=0x1,vlan_tci=0x1000/0x1000 actions=drop
130      table=16, priority=50,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01 actions=resubmit(,17)
131      table=16, priority=50,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02 actions=resubmit(,17)
132      table=17, priority=0,metadata=0x1 actions=resubmit(,18)
133      table=18, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=set_field:0xffff->reg7,resubmit(,32)
134      table=18, priority=50,metadata=0x1,dl_dst=00:00:00:00:00:01 actions=set_field:0x1->reg7,resubmit(,32)
135      table=18, priority=50,metadata=0x1,dl_dst=00:00:00:00:00:02 actions=set_field:0x2->reg7,resubmit(,32)
136      table=32, priority=0 actions=resubmit(,33)
137      table=33, priority=100,reg7=0x1,metadata=0x1 actions=resubmit(,34)
138      table=33, priority=100,reg7=0xffff,metadata=0x1 actions=set_field:0x2->reg7,resubmit(,34),set_field:0x1->reg7,resubmit(,34)
139      table=33, priority=100,reg7=0x2,metadata=0x1 actions=resubmit(,34)
140      table=34, priority=100,reg6=0x1,reg7=0x1,metadata=0x1 actions=drop
141      table=34, priority=100,reg6=0x2,reg7=0x2,metadata=0x1 actions=drop
142      table=34, priority=0 actions=set_field:0->reg0,set_field:0->reg1,set_field:0->reg2,set_field:0->reg3,set_field:0->reg4,set_field:0->reg5,resubmit(,48)
143      table=48, priority=0,metadata=0x1 actions=resubmit(,49)
144      table=49, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,64)
145      table=49, priority=50,reg7=0x1,metadata=0x1,dl_dst=00:00:00:00:00:01 actions=resubmit(,64)
146      table=49, priority=50,reg7=0x2,metadata=0x1,dl_dst=00:00:00:00:00:02 actions=resubmit(,64)
147      table=64, priority=100,reg7=0x1,metadata=0x1 actions=output:1
148      table=64, priority=100,reg7=0x2,metadata=0x1 actions=output:2
149
150 The `ovs-appctl` command can be used to generate an OpenFlow trace of how a
151 packet would be processed in this configuration.  This first trace shows a
152 packet from `sw0-port1` to `sw0-port2`.  The packet arrives from port `1` and
153 should be output to port `2`.
154
155 [View ovn/env1/packet1.sh][env1packet1].
156
157     $ ovn/env1/packet1.sh
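
The trace scripts wrap a single `ofproto/trace` call.  A hedged sketch of what
`packet1.sh` plausibly runs (the exact flow fields live in the script itself):

    # Trace a unicast packet from sw0-port1 (OpenFlow port 1) to the MAC
    # address of sw0-port2.
    ovs-appctl ofproto/trace br-int \
        in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate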

Trace a broadcast packet from `sw0-port1`.  The packet arrives from port `1`
and should be flooded to every other port on the logical switch, which in this
setup means output to port `2`.

[View ovn/env1/packet2.sh][env1packet2].

    $ ovn/env1/packet2.sh
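
Under the same assumptions, the broadcast trace plausibly differs only in the
destination MAC address:

    # Trace a broadcast packet arriving from sw0-port1.
    ovs-appctl ofproto/trace br-int \
        in_port=1,dl_src=00:00:00:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff -generate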

You can extend this setup by adding additional ports.  For example, to add a
third port, run this command:

[View ovn/env1/add-third-port.sh][env1thirdport].

    $ ovn/env1/add-third-port.sh
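
The script follows the same pattern as the initial setup.  A sketch, again
assuming the same command set:

    # Add a third logical port and bind it to a local OVS port.
    ovn-nbctl lport-add sw0 sw0-port3
    ovn-nbctl lport-set-addresses sw0-port3 00:00:00:00:00:03
    ovs-vsctl add-port br-int lport3 -- \
        set Interface lport3 external_ids:iface-id=sw0-port3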

Now if you do another trace of a broadcast packet from `sw0-port1`, you will see
that it is output to both ports `2` and `3`.

    $ ovn/env1/packet2.sh

2) 2 switches, 4 ports
----------------------

This environment is an extension of the last example.  The previous example
showed two ports on a single logical switch.  In this environment we add a
second logical switch that also has two ports.  This lets you start to see how
`ovn-controller` creates flows that let isolated networks co-exist on the same
switch.

[View ovn/env2/setup.sh][env2setup].

    $ ovn/env2/setup.sh
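
Relative to `env1`, the setup script only needs to repeat the switch and port
creation for a second switch.  A sketch of the additions:

    # A second, isolated logical switch with two ports.
    ovn-nbctl lswitch-add sw1
    ovn-nbctl lport-add sw1 sw1-port1
    ovn-nbctl lport-add sw1 sw1-port2
    ovn-nbctl lport-set-addresses sw1-port1 00:00:00:00:00:03
    ovn-nbctl lport-set-addresses sw1-port2 00:00:00:00:00:04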

View the logical topology with `ovn-nbctl`.

    $ ovn-nbctl show
    lswitch e3190dc2-89d1-44ed-9308-e7077de782b3 (sw0)
        lport sw0-port1
            addresses: 00:00:00:00:00:01
        lport sw0-port2
            addresses: 00:00:00:00:00:02
    lswitch c8ed4c5f-9733-43f6-93da-795b1aabacb1 (sw1)
        lport sw1-port1
            addresses: 00:00:00:00:00:03
        lport sw1-port2
            addresses: 00:00:00:00:00:04

Physically, all ports reside on the same chassis.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw1-port2"
        Port_Binding "sw0-port2"
        Port_Binding "sw0-port1"
        Port_Binding "sw1-port1"

OVN creates separate logical flows for each logical switch.

    $ ovn-sbctl lflow-list
    Datapath: 5aa8be0b-8369-49e2-a878-f68872a8d211  Pipeline: ingress
      table=0(port_sec), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(port_sec), priority=  100, match=(vlan.present), action=(drop;)
      table=0(port_sec), priority=   50, match=(inport == "sw0-port1" && eth.src == {00:00:00:00:00:01}), action=(next;)
      table=0(port_sec), priority=   50, match=(inport == "sw0-port2" && eth.src == {00:00:00:00:00:02}), action=(next;)
      table=1(     acl), priority=    0, match=(1), action=(next;)
      table=2( l2_lkup), priority=  100, match=(eth.dst[40]), action=(outport = "_MC_flood"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:01), action=(outport = "sw0-port1"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:02), action=(outport = "sw0-port2"; output;)
    Datapath: 5aa8be0b-8369-49e2-a878-f68872a8d211  Pipeline: egress
      table=0(     acl), priority=    0, match=(1), action=(next;)
      table=1(port_sec), priority=  100, match=(eth.dst[40]), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw0-port1" && eth.dst == {00:00:00:00:00:01}), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw0-port2" && eth.dst == {00:00:00:00:00:02}), action=(output;)
    Datapath: 631fb3c9-b0a3-4e56-bac3-1717c8cbb826  Pipeline: ingress
      table=0(port_sec), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(port_sec), priority=  100, match=(vlan.present), action=(drop;)
      table=0(port_sec), priority=   50, match=(inport == "sw1-port1" && eth.src == {00:00:00:00:00:03}), action=(next;)
      table=0(port_sec), priority=   50, match=(inport == "sw1-port2" && eth.src == {00:00:00:00:00:04}), action=(next;)
      table=1(     acl), priority=    0, match=(1), action=(next;)
      table=2( l2_lkup), priority=  100, match=(eth.dst[40]), action=(outport = "_MC_flood"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:03), action=(outport = "sw1-port1"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:04), action=(outport = "sw1-port2"; output;)
    Datapath: 631fb3c9-b0a3-4e56-bac3-1717c8cbb826  Pipeline: egress
      table=0(     acl), priority=    0, match=(1), action=(next;)
      table=1(port_sec), priority=  100, match=(eth.dst[40]), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw1-port1" && eth.dst == {00:00:00:00:00:03}), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw1-port2" && eth.dst == {00:00:00:00:00:04}), action=(output;)

In this setup, `sw0-port1` and `sw0-port2` can send packets to each other, but
not to either of the ports on `sw1`.  This first trace shows a packet from
`sw0-port1` to `sw0-port2`.  You should see the packet arrive on OpenFlow port
`1` and output to OpenFlow port `2`.

[View ovn/env2/packet1.sh][env2packet1].

    $ ovn/env2/packet1.sh

This next example shows a packet from `sw0-port1` with a destination MAC address
of `00:00:00:00:00:03`, which is the MAC address for `sw1-port1`.  Since these
ports are not on the same logical switch, the packet should just be dropped.

[View ovn/env2/packet2.sh][env2packet2].

    $ ovn/env2/packet2.sh

3) Two Hypervisors
------------------

The first two examples showed OVN on a single hypervisor.  A more realistic
deployment of OVN would span multiple hypervisors.  This example creates a
single logical switch with 4 logical ports.  It then simulates having two
hypervisors with two of the logical ports bound to each hypervisor.

[View ovn/env3/setup.sh][env3setup].

    $ ovn/env3/setup.sh
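
The interesting part of this setup is the simulated second hypervisor.  A
hedged sketch of how the script plausibly does this, using `ovn-sbctl`’s
`chassis-add` and `lport-bind` commands:

    # Register a fake chassis with a local tunnel endpoint, then bind two
    # of the logical ports to it instead of to the real local chassis.
    ovn-sbctl chassis-add fakechassis geneve 127.0.0.1
    ovn-sbctl lport-bind sw0-port3 fakechassis
    ovn-sbctl lport-bind sw0-port4 fakechassis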

You can start by viewing the logical topology with `ovn-nbctl`.

    $ ovn-nbctl show
    lswitch b977dc03-79a5-41ba-9665-341a80e1abfd (sw0)
        lport sw0-port1
            addresses: 00:00:00:00:00:01
        lport sw0-port2
            addresses: 00:00:00:00:00:02
        lport sw0-port4
            addresses: 00:00:00:00:00:04
        lport sw0-port3
            addresses: 00:00:00:00:00:03

Using `ovn-sbctl` to view the state of the system, we can see that there are two
chassis: the local chassis that we can interact with, and a fake remote chassis.
Two logical ports are bound to each.  Both chassis have an IP address of
localhost, but in a realistic deployment that would be the IP address used for
tunnels to that chassis.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port2"
        Port_Binding "sw0-port1"
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port4"
        Port_Binding "sw0-port3"

Packets between `sw0-port1` and `sw0-port2` behave just like the previous
examples.  Packets to ports on a remote chassis are the interesting part of this
example.  You may have noticed before that OVN’s logical flows are broken up
into ingress and egress tables.  Given a packet from `sw0-port1` on the local
chassis to `sw0-port3` on the remote chassis, the ingress pipeline is executed
on the local switch.  OVN then determines that it must forward the packet over a
geneve tunnel.  When it arrives at the remote chassis, the egress pipeline will
be executed there.

This first packet trace shows the first part of this example.  It’s a packet
from `sw0-port1` to `sw0-port3` from the perspective of the local chassis.
`sw0-port1` is OpenFlow port `1`.  The tunnel to the fake remote chassis is
OpenFlow port `3`.  You should see the ingress pipeline being executed and then
the packet output to port `3`, the geneve tunnel.

[View ovn/env3/packet1.sh][env3packet1].

    $ ovn/env3/packet1.sh

To simulate what would happen when that packet arrives at the remote chassis, we
can flip this example around.  Consider a packet from `sw0-port3` to
`sw0-port1`.  This trace shows what would happen when that packet arrives at the
local chassis.  The packet arrives on OpenFlow port `3` (the tunnel).  You should
then see the egress pipeline get executed and the packet output to OpenFlow port
`1`.

[View ovn/env3/packet2.sh][env3packet2].

    $ ovn/env3/packet2.sh

4) Locally attached networks
----------------------------

While OVN is generally focused on the implementation of logical networks using
overlays, it’s also possible to use OVN as a control plane to manage direct
connectivity to networks that are locally accessible to each chassis.

This example includes two hypervisors.  Both hypervisors have two ports on them.
We want to use OVN to manage the connectivity of these ports to a network
attached to each hypervisor that we will call “physnet1”.

This scenario requires some additional configuration of `ovn-controller`.  We
must configure a mapping between `physnet1` and a local OVS bridge that provides
connectivity to that network.  We call these “bridge mappings”.  For our
example, the following script creates a bridge called `br-eth1` and then
configures `ovn-controller` with a bridge mapping from `physnet1` to `br-eth1`.

[View ovn/env4/setup1.sh][env4setup1].

    $ ovn/env4/setup1.sh
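
The bridge mapping itself is a single `external-ids` key in the local Open
vSwitch configuration.  A minimal sketch of the two steps, assuming the
standard `ovn-bridge-mappings` key:

    # Create the bridge that provides connectivity to physnet1.
    ovs-vsctl add-br br-eth1

    # Tell ovn-controller that "physnet1" is reachable via br-eth1.
    ovs-vsctl set open_vswitch . \
        external-ids:ovn-bridge-mappings=physnet1:br-eth1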

At this point we should be able to see that `ovn-controller` has automatically
created patch ports between `br-int` and `br-eth1`.

    $ ovs-vsctl show
    aea39214-ebec-4210-aa34-1ae7d6921720
        Bridge br-int
            fail_mode: secure
            Port "patch-br-int-to-br-eth1"
                Interface "patch-br-int-to-br-eth1"
                    type: patch
                    options: {peer="patch-br-eth1-to-br-int"}
            Port br-int
                Interface br-int
                    type: internal
        Bridge "br-eth1"
            Port "br-eth1"
                Interface "br-eth1"
                    type: internal
            Port "patch-br-eth1-to-br-int"
                Interface "patch-br-eth1-to-br-int"
                    type: patch
                    options: {peer="patch-br-int-to-br-eth1"}

Now we can move on to the next setup phase for this example.  We want to create
a fake second chassis and then create the topology that tells OVN we want both
ports on both hypervisors connected to `physnet1`.  The way this is modeled in
OVN is by creating a logical switch for each port.  The logical switch has the
regular VIF port and a `localnet` port.

[View ovn/env4/setup2.sh][env4setup2].

    $ ovn/env4/setup2.sh
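
For one of the four ports, the logical topology is plausibly built along these
lines (a sketch using the `lport-set-type` and `lport-set-options` commands
demonstrated later in this section):

    # One logical switch per VIF: the VIF port plus a localnet port that
    # represents the connection to physnet1.
    ovn-nbctl lswitch-add provnet1-1
    ovn-nbctl lport-add provnet1-1 provnet1-1-port1
    ovn-nbctl lport-set-addresses provnet1-1-port1 00:00:00:00:00:01
    ovn-nbctl lport-add provnet1-1 provnet1-1-physnet1
    ovn-nbctl lport-set-addresses provnet1-1-physnet1 unknown
    ovn-nbctl lport-set-type provnet1-1-physnet1 localnet
    ovn-nbctl lport-set-options provnet1-1-physnet1 network_name=physnet1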

The logical topology from `ovn-nbctl` should look like this.

    $ ovn-nbctl show
        lswitch 5a652488-cfba-4f3e-929d-00010cdfde40 (provnet1-2)
            lport provnet1-2-physnet1
                addresses: unknown
            lport provnet1-2-port1
                addresses: 00:00:00:00:00:02
        lswitch 5829b60a-eda8-4d78-94f6-7017ff9efcf0 (provnet1-4)
            lport provnet1-4-port1
                addresses: 00:00:00:00:00:04
            lport provnet1-4-physnet1
                addresses: unknown
        lswitch 06cbbcb6-38e3-418d-a81e-634ec9b54ad6 (provnet1-1)
            lport provnet1-1-port1
                addresses: 00:00:00:00:00:01
            lport provnet1-1-physnet1
                addresses: unknown
        lswitch 9cba3b3b-59ae-4175-95f5-b6f1cd9c2afb (provnet1-3)
            lport provnet1-3-physnet1
                addresses: unknown
            lport provnet1-3-port1
                addresses: 00:00:00:00:00:03

`port1` on each logical switch represents a regular logical port for a VIF on a
hypervisor.  `physnet1` on each logical switch is the special `localnet` port.
You can use `ovn-nbctl` to see that this port has a `type` and `options` set.

    $ ovn-nbctl lport-get-type provnet1-1-physnet1
    localnet

    $ ovn-nbctl lport-get-options provnet1-1-physnet1
    network_name=physnet1

The physical topology should reflect that there are two regular ports on each
chassis.

    $ ovn-sbctl show
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-3-port1"
        Port_Binding "provnet1-4-port1"
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-2-port1"
        Port_Binding "provnet1-1-port1"

All four of our ports should be able to communicate with each other, but they do
so through `physnet1`.  A packet from any of these ports to any destination
should be output to the OpenFlow port number that corresponds to the patch port
to `br-eth1`.

This example assumes the following OpenFlow port number mappings:

* 1 = patch port to `br-eth1`
* 2 = tunnel to the fake second chassis
* 3 = lport1, which is the logical port named `provnet1-1-port1`
* 4 = lport2, which is the logical port named `provnet1-2-port1`

We get those port numbers using `ovs-ofctl`:

    $ ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:0000765054700040
    n_tables:254, n_buffers:256
    capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
    actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
    mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
     1(patch-br-int-to): addr:de:29:14:95:8a:b8
         config:     0
         state:      0
         speed: 0 Mbps now, 0 Mbps max
     2(ovn-fakech-0): addr:aa:55:aa:55:00:08
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     3(lport1): addr:aa:55:aa:55:00:09
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     4(lport2): addr:aa:55:aa:55:00:0a
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     LOCAL(br-int): addr:76:50:54:70:00:40
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
    OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

This first trace shows a packet from `provnet1-1-port1` with a destination MAC
address of `provnet1-2-port1`.  Despite both of these ports being on the same
local switch (`lport1` and `lport2`), we expect all packets to be sent out to
`br-eth1` (OpenFlow port 1).  We then expect the network to handle getting the
packet to its destination.  In practice, this will be optimized at `br-eth1` and
the packet won’t actually go out and back on the network.

[View ovn/env4/packet1.sh][env4packet1].

    $ ovn/env4/packet1.sh

This next trace is a continuation of the previous one.  It shows the packet
coming back into `br-int` from `br-eth1`.  We now expect the packet to be output
to `provnet1-2-port1`, which is OpenFlow port 4.

[View ovn/env4/packet2.sh][env4packet2].

    $ ovn/env4/packet2.sh

This next trace shows an example of a packet being sent to a destination on
another hypervisor.  The source is `provnet1-2-port1`, but the destination is
`provnet1-3-port1`, which is on the other fake chassis.  As usual, we expect the
output to be to OpenFlow port 1, the patch port to `br-eth1`.

[View ovn/env4/packet3.sh][env4packet3].

    $ ovn/env4/packet3.sh

This next test shows a broadcast packet.  The destination should still only be
OpenFlow port 1.

[View ovn/env4/packet4.sh][env4packet4].

    $ ovn/env4/packet4.sh

Finally, this last trace shows what happens when a broadcast packet arrives
from the network.  In this case, it simulates a broadcast that originated from a
port on the remote fake chassis and arrived at the local chassis via `br-eth1`.
We should see it output to both local ports that are attached to this network
(OpenFlow ports 3 and 4).

[View ovn/env4/packet5.sh][env4packet5].

    $ ovn/env4/packet5.sh

5) Locally attached networks with VLANs
---------------------------------------

This example is an extension of the previous one.  We take the same setup and
add two more ports to each hypervisor.  Instead of having the new ports directly
connected to `physnet1` as before, we indicate that we want them on VLAN 101 of
`physnet1`.  This shows how `localnet` ports can be used to provide connectivity
to either a flat network or a VLAN on that network.

[View ovn/env5/setup.sh][env5setup].

    $ ovn/env5/setup.sh
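
The only new twist is the VLAN tag on the `localnet` port.  A hedged sketch for
one of the new switches, assuming the positional parent/tag arguments of the
`lport-add` syntax used in this tutorial:

    # A localnet port on VLAN 101 of physnet1.  The empty string is the
    # (unused) parent port name; 101 becomes the port's tag.
    ovn-nbctl lswitch-add provnet1-5-101
    ovn-nbctl lport-add provnet1-5-101 provnet1-5-101-port1
    ovn-nbctl lport-set-addresses provnet1-5-101-port1 00:00:00:00:00:05
    ovn-nbctl lport-add provnet1-5-101 provnet1-5-physnet1-101 "" 101
    ovn-nbctl lport-set-addresses provnet1-5-physnet1-101 unknown
    ovn-nbctl lport-set-type provnet1-5-physnet1-101 localnet
    ovn-nbctl lport-set-options provnet1-5-physnet1-101 network_name=physnet1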

The logical topology shown by `ovn-nbctl` is similar to `env4`, except we now
have 8 regular VIF ports connected to `physnet1` instead of 4.  The additional 4
ports we have added are all on VLAN 101 of `physnet1`.  Note that the `localnet`
ports representing connectivity to VLAN 101 of `physnet1` have the `tag` field
set to `101`.

    $ ovn-nbctl show
        lswitch 12ea93d0-694b-48e9-adef-d0ddd3ec4ac9 (provnet1-7-101)
            lport provnet1-7-physnet1-101
                parent: , tag:101
                addresses: unknown
            lport provnet1-7-101-port1
                addresses: 00:00:00:00:00:07
        lswitch c9a5ce3a-15ec-48ea-a898-416013463589 (provnet1-4)
            lport provnet1-4-port1
                addresses: 00:00:00:00:00:04
            lport provnet1-4-physnet1
                addresses: unknown
        lswitch e07d4f7a-2085-4fbb-9937-d6192b79a397 (provnet1-1)
            lport provnet1-1-physnet1
                addresses: unknown
            lport provnet1-1-port1
                addresses: 00:00:00:00:00:01
        lswitch 6c098474-0509-4219-bc9b-eb4e28dd1aeb (provnet1-2)
            lport provnet1-2-physnet1
                addresses: unknown
            lport provnet1-2-port1
                addresses: 00:00:00:00:00:02
        lswitch 723c4684-5d58-4202-b8e3-4ba99ad5ed9e (provnet1-8-101)
            lport provnet1-8-101-port1
                addresses: 00:00:00:00:00:08
            lport provnet1-8-physnet1-101
                parent: , tag:101
                addresses: unknown
        lswitch 8444e925-ceb2-4b02-ac20-eb2e4cfb954d (provnet1-6-101)
            lport provnet1-6-physnet1-101
                parent: , tag:101
                addresses: unknown
            lport provnet1-6-101-port1
                addresses: 00:00:00:00:00:06
        lswitch e11e5605-7c46-4395-b28d-cff57451fc7e (provnet1-3)
            lport provnet1-3-port1
                addresses: 00:00:00:00:00:03
            lport provnet1-3-physnet1
                addresses: unknown
        lswitch 0706b697-6c92-4d54-bc0a-db5bababb74a (provnet1-5-101)
            lport provnet1-5-101-port1
                addresses: 00:00:00:00:00:05
            lport provnet1-5-physnet1-101
                parent: , tag:101
                addresses: unknown

The physical topology shows that we have 4 regular VIF ports on each simulated
hypervisor.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-6-101-port1"
        Port_Binding "provnet1-1-port1"
        Port_Binding "provnet1-2-port1"
        Port_Binding "provnet1-5-101-port1"
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-4-port1"
        Port_Binding "provnet1-3-port1"
        Port_Binding "provnet1-8-101-port1"
        Port_Binding "provnet1-7-101-port1"

All of the traces from the previous example, `env4`, should work in this
environment and provide the same results.  Now we can show what happens for the
ports connected to VLAN 101.  This first example shows a packet originating from
`provnet1-5-101-port1`, which is OpenFlow port 5.  We should see VLAN tag 101
pushed onto the packet, which is then output to OpenFlow port 1, the patch port
to `br-eth1` (the bridge providing connectivity to `physnet1`).

[View ovn/env5/packet1.sh][env5packet1].

    $ ovn/env5/packet1.sh

If we look at a broadcast packet arriving on VLAN 101 of `physnet1`, we should
see it output to OpenFlow ports 5 and 6 only.

[View ovn/env5/packet2.sh][env5packet2].

    $ ovn/env5/packet2.sh

6) Stateful ACLs
----------------

ACLs provide a way to do distributed packet filtering for OVN networks.  For
example, OpenStack Neutron uses OVN ACLs to implement security groups.  ACLs
are implemented using conntrack integration with OVS.

Start with a simple logical switch with 2 logical ports.

[View ovn/env6/setup.sh][env6setup].

    $ ovn/env6/setup.sh

A common use case would be the following policy applied for `sw0-port1`:

* Allow outbound IP traffic and associated return traffic.
* Allow incoming ICMP requests and associated return traffic.
* Allow incoming SSH connections and associated return traffic.
* Drop other incoming IP traffic.

The following script applies this policy to our environment.

[View ovn/env6/add-acls.sh][env6acls].

    $ ovn/env6/add-acls.sh
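
The script plausibly boils down to four `ovn-nbctl acl-add` invocations, one
per policy line above; a sketch, with direction, priority, match, and verb
mirroring the `acl-list` output shown next:

    # Hypothetical sketch of ovn/env6/add-acls.sh.
    ovn-nbctl acl-add sw0 from-lport 1002 'inport == "sw0-port1" && ip' allow-related
    ovn-nbctl acl-add sw0 to-lport 1002 'outport == "sw0-port1" && ip && icmp' allow-related
    ovn-nbctl acl-add sw0 to-lport 1002 'outport == "sw0-port1" && ip && tcp && tcp.dst == 22' allow-related
    ovn-nbctl acl-add sw0 to-lport 1001 'outport == "sw0-port1" && ip' drop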

We can view the configured ACLs on this network using the `ovn-nbctl` command.

    $ ovn-nbctl acl-list sw0
    from-lport  1002 (inport == "sw0-port1" && ip) allow-related
      to-lport  1002 (outport == "sw0-port1" && ip && icmp) allow-related
      to-lport  1002 (outport == "sw0-port1" && ip && tcp && tcp.dst == 22) allow-related
      to-lport  1001 (outport == "sw0-port1" && ip) drop

Now that we have ACLs configured, there are new entries in the logical flow
table in the stages `switch_in_pre_acl`, `switch_in_acl`, `switch_out_pre_acl`,
and `switch_out_acl`.

    $ ovn-sbctl lflow-list

Let’s look more closely at `switch_out_pre_acl` and `switch_out_acl`.

In `switch_out_pre_acl`, we match IP traffic and put it through the connection
tracker.  This populates the connection state fields so that we can apply policy
as appropriate.

    table=0(switch_out_pre_acl), priority=  100, match=(ip), action=(ct_next;)
    table=1(switch_out_pre_acl), priority=    0, match=(1), action=(next;)

In `switch_out_acl`, we allow packets associated with existing connections.  We
drop packets that are deemed to be invalid (such as a non-SYN TCP packet that is
not associated with an existing connection).

    table=1(switch_out_acl), priority=65535, match=(!ct.est && ct.rel && !ct.new && !ct.inv), action=(next;)
    table=1(switch_out_acl), priority=65535, match=(ct.est && !ct.rel && !ct.new && !ct.inv), action=(next;)
    table=1(switch_out_acl), priority=65535, match=(ct.inv), action=(drop;)

For new connections, we apply our configured ACL policy to decide whether to
allow the connection or not.  In this case, we’ll allow ICMP or SSH.  Otherwise,
we’ll drop the packet.

    table=1(switch_out_acl), priority= 2002, match=(ct.new && (outport == "sw0-port1" && ip && icmp)), action=(ct_commit; next;)
    table=1(switch_out_acl), priority= 2002, match=(ct.new && (outport == "sw0-port1" && ip && tcp && tcp.dst == 22)), action=(ct_commit; next;)
    table=1(switch_out_acl), priority= 2001, match=(outport == "sw0-port1" && ip), action=(drop;)

When using ACLs, the default policy is to allow and track IP connections.  Based
on our above policy, IP traffic directed at `sw0-port1` will never hit this flow
at priority 1.

    table=1(switch_out_acl), priority=    1, match=(ip), action=(ct_commit; next;)
    table=1(switch_out_acl), priority=    0, match=(1), action=(next;)

Note that conntrack integration is not yet supported in ovs-sandbox, so the
OpenFlow flows will not represent what you’d see in a real environment.  The
logical flows described above give a very good idea of what the flows look like,
though.

[This blog post][openstack-ovn-acl-blog] discusses OVN ACLs from an OpenStack
perspective and also provides an example of what the resulting OpenFlow flows
look like.

[ovn-architecture(7)]:http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
[Tutorial.md]:https://github.com/openvswitch/ovs/blob/master/tutorial/Tutorial.md
[ovn-nb(5)]:http://openvswitch.org/support/dist-docs/ovn-nb.5.html
[ovn-sb(5)]:http://openvswitch.org/support/dist-docs/ovn-sb.5.html
[vtep(5)]:http://openvswitch.org/support/dist-docs/vtep.5.html
[ovn-northd(8)]:http://openvswitch.org/support/dist-docs/ovn-northd.8.html
[ovn-controller(8)]:http://openvswitch.org/support/dist-docs/ovn-controller.8.html
[ovn-controller-vtep(8)]:http://openvswitch.org/support/dist-docs/ovn-controller-vtep.8.html
[vtep-ctl(8)]:http://openvswitch.org/support/dist-docs/vtep-ctl.8.html
[ovn-nbctl(8)]:http://openvswitch.org/support/dist-docs/ovn-nbctl.8.html
[ovn-sbctl(8)]:http://openvswitch.org/support/dist-docs/ovn-sbctl.8.html
[env1setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/setup.sh
[env1packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/packet1.sh
[env1packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/packet2.sh
[env1thirdport]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/add-third-port.sh
[env2setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env2/setup.sh
[env2packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env2/packet1.sh
[env2packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env2/packet2.sh
[env3setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env3/setup.sh
[env3packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env3/packet1.sh
[env3packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env3/packet2.sh
[env4setup1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/setup1.sh
[env4setup2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/setup2.sh
[env4packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet1.sh
[env4packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet2.sh
[env4packet3]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet3.sh
[env4packet4]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet4.sh
[env4packet5]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet5.sh
[env5setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env5/setup.sh
[env5packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env5/packet1.sh
[env5packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env5/packet2.sh
[env6setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env6/setup.sh
[env6acls]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env6/add-acls.sh
[openstack-ovn-acl-blog]:http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/