# Control plane support
Control plane functionality is provided either via SDN controllers or via standard
IP routing protocols. SDN support relies on the NETCONF/YANG protocol
for network management, control and telemetry.
Routing is supported by synchronizing the IP FIB and the IP RIB as maintained
by one of the routing protocols in FRR. Without loss of generality, an example
of IGP routing via OSPF for IPv6 is reported below.
The VPP IP FIB can be controlled and updated by an FRR routing protocol, which
is used for routing over locators as well as over hICN name prefixes.
## NETCONF/YANG
### Getting started
NETCONF/YANG support is provided via several external components: libyang,
sysrepo, libnetconf2 and netopeer2.
The hicn project provides a sysrepo plugin and a YANG model for two devices:
the VPP-based hICN virtual switch and the portable forwarder.
The YANG model for the VPP-based hICN vSwitch is based on the full hICN C API
exported by the VPP plugin, with the addition of a few VPP APIs, such as
interface and FIB management, which are required by the hICN plugin.
To install libyang, sysrepo, libnetconf2 and netopeer2 on Ubuntu 18.04 amd64/arm64
or CentOS 7, an ad-hoc repository is available and maintained on Bintray
at <https://dl.bintray.com/icn-team/apt-hicn-extras>.
For instance, on Ubuntu 18.04 LTS, install the sysrepo YANG data store and a NETCONF server:
```bash
echo "deb [trusted=yes] https://dl.bintray.com/icn-team/apt-hicn-extras bionic main" \
| tee -a /etc/apt/sources.list
apt-get update && apt-get install -y libyang sysrepo libnetconf2 netopeer2-server
```
Install the VPP based hICN virtual switch:
```bash
curl -s https://packagecloud.io/install/repositories/fdio/release/script.deb.sh | bash
apt-get update && apt-get install -y hicn-plugin vpp-plugin-dpdk hicn-sysrepo-plugin
```
The hICN YANG models are installed under `/usr/lib/$(uname -m)-linux-gnu/modules_yang`.
Configure the NETCONF/YANG components:
```bash
bash /usr/bin/setup.sh sysrepoctl /usr/lib/$(uname -m)-linux-gnu/modules_yang root
bash /usr/bin/merge_hostkey.sh sysrepocfg openssl
bash /usr/bin/merge_config.sh sysrepocfg genkey
```
You can manually install the YANG model using the following bash script:
```bash
if ! command -v sysrepoctl > /dev/null; then
  echo "Could not find command \"sysrepoctl\"." >&2
  exit 1
fi
sysrepoctl --install --yang=path_to_hicn_yang_model
```
### YANG model
hicn.yang can be found in the yang-model folder. It consists of two container nodes:
```text
|--+ hicn-conf: holds the configuration data;
| |--+ params: contains all configuration parameters;
|--+ hicn-state: provides the state data
| |--+ state,
| |--+ strategy,
| |--+ strategies,
| |--+ route,
| |--+ face-ip-params
and corresponding leaves.
```
A controller can configure these parameters through the edit-config RPC
call. This node can be used to enable and initialize the hicn-plugin in the VPP
instance. The hicn-state container provides the state data to the
controller. It consists of the state, strategy, strategies, route, and face-ip-params
nodes with the corresponding leaves. The hicn model also provides a variety of RPCs
that allow the controller to communicate with the hicn-plugin and to update the state
data in hicn-state.
### Example
To set up the startup configuration you can use the following script:
```bash
if ! command -v sysrepocfg > /dev/null; then
  echo "Could not find command \"sysrepocfg\"." >&2
  exit 1
fi
sysrepocfg -d startup -i path_to_startup_xml -f xml hicn
```
startup.xml is located in the yang-model folder. Its content is the following:
```xml
<hicn-conf xmlns="urn:sysrepo:hicn">
<params>
<enable_disable>false</enable_disable>
<pit_max_size>-1</pit_max_size>
<cs_max_size>-1</cs_max_size>
<cs_reserved_app>-1</cs_reserved_app>
<pit_dflt_lifetime_sec>-1</pit_dflt_lifetime_sec>
<pit_max_lifetime_sec>-1</pit_max_lifetime_sec>
<pit_min_lifetime_sec>-1</pit_min_lifetime_sec>
</params>
</hicn-conf>
```
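The startup configuration above can also be generated programmatically by a client. The following Python sketch (a hypothetical helper, not part of the project) builds the same `hicn-conf` payload with the standard library; a value of `-1` keeps the plugin's default for that parameter:

```python
# Sketch: build the hicn-conf edit-config payload programmatically.
# build_hicn_conf is a hypothetical helper, not part of the hicn project.
import xml.etree.ElementTree as ET

HICN_NS = "urn:sysrepo:hicn"

def build_hicn_conf(params):
    """Return the hicn-conf XML (bytes) for the given parameter dict."""
    root = ET.Element("hicn-conf", xmlns=HICN_NS)
    p = ET.SubElement(root, "params")
    for name, value in params.items():
        leaf = ET.SubElement(p, name)
        # YANG booleans are lowercase "true"/"false"
        leaf.text = str(value).lower() if isinstance(value, bool) else str(value)
    return ET.tostring(root)

payload = build_hicn_conf({
    "enable_disable": False,
    "pit_max_size": -1,
    "cs_max_size": -1,
    "cs_reserved_app": -1,
    "pit_dflt_lifetime_sec": -1,
    "pit_max_lifetime_sec": -1,
    "pit_min_lifetime_sec": -1,
})
print(payload.decode())
```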
It contains the leaves of the parameters in the hicn-conf node, which is
used as the startup configuration. This configuration can be changed through the
controller by a subscription that switches the target to the running state. The hicn
YANG model provides a list of RPCs which allow the controller to communicate
directly with the hicn-plugin. These RPCs may also modify the state data.
In order to run the different RPCs from the controller, you can use the examples in
controler_rpcs_instances.xml in the yang-model folder. Its content is the following:
```xml
<node-params-get xmlns="urn:sysrepo:hicn"/>
<node-stat-get xmlns="urn:sysrepo:hicn"/>
<strategy-get xmlns="urn:sysrepo:hicn">
<strategy_id>0</strategy_id>
</strategy-get>
<strategies-get xmlns="urn:sysrepo:hicn"/>
<route-get xmlns="urn:sysrepo:hicn">
<prefix0>10</prefix0>
<prefix1>20</prefix1>
<len>30</len>
</route-get>
<route-del xmlns="urn:sysrepo:hicn">
<prefix0>10</prefix0>
<prefix1>20</prefix1>
<len>30</len>
</route-del>
<route-nhops-add xmlns="urn:sysrepo:hicn">
<prefix0>10</prefix0>
<prefix1>20</prefix1>
<len>30</len>
<face_ids0>40</face_ids0>
<face_ids1>50</face_ids1>
<face_ids2>60</face_ids2>
<face_ids3>70</face_ids3>
<face_ids4>80</face_ids4>
<face_ids5>90</face_ids5>
<face_ids6>100</face_ids6>
<n_faces>110</n_faces>
</route-nhops-add>
<route-nhops-del xmlns="urn:sysrepo:hicn">
<prefix0>10</prefix0>
<prefix1>20</prefix1>
<len>30</len>
<faceid>40</faceid>
</route-nhops-del>
<face-ip-params-get xmlns="urn:sysrepo:hicn">
<faceid>10</faceid>
</face-ip-params-get>
<face-ip-add xmlns="urn:sysrepo:hicn">
<nh_addr0>10</nh_addr0>
<nh_addr1>20</nh_addr1>
<swif>30</swif>
</face-ip-add>
<face-ip-del xmlns="urn:sysrepo:hicn">
<faceid>0</faceid>
</face-ip-del>
<punting-add xmlns="urn:sysrepo:hicn">
<prefix0>10</prefix0>
<prefix1>20</prefix1>
<len>30</len>
<swif>40</swif>
</punting-add>
<punting-del xmlns="urn:sysrepo:hicn">
<prefix0>10</prefix0>
<prefix1>20</prefix1>
<len>30</len>
<swif>40</swif>
</punting-del>
```
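The route and punting RPCs above carry an IPv6 prefix as two 64-bit integers (`prefix0`, `prefix1`) plus a length. Assuming `prefix0` holds the high-order 64 bits of the address (this word ordering is an assumption for illustration; check hicn.yang for the authoritative encoding), the values can be derived from a textual prefix like this:

```python
# Sketch: split an IPv6 prefix into the two 64-bit words used by the
# route/punting RPCs. The high/low word ordering is an assumption for
# illustration, not confirmed by the model.
import ipaddress
import struct

def prefix_to_words(prefix: str):
    """Return (prefix0, prefix1, len) for an IPv6 prefix string."""
    net = ipaddress.IPv6Network(prefix, strict=False)
    # network_address.packed is 16 bytes; unpack as two big-endian u64s
    hi, lo = struct.unpack("!QQ", net.network_address.packed)
    return hi, lo, net.prefixlen

print(prefix_to_words("b001::/64"))
```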
#### Run the plugin
First, verify that the plugin and the binary libraries are installed in the
correct locations, then start VPP (`service vpp start`). Next, run the sysrepo
daemon `sysrepod` (for debug mode with high verbosity: `sysrepod -d -l 4`).
Then run the sysrepo plugin daemon `sysrepo-plugind` (for debug mode:
`sysrepo-plugind -d -l 4`); at this point the hicn sysrepo plugin is loaded.
Finally, run `netopeer2-server`, which serves as the NETCONF server.
#### Connect from netopeer2-cli
To connect through the netopeer2 client, run netopeer2-cli and then
follow these steps:
- connect --host XXX --login XXX
- get (retrieves the configuration and operational data)
- get-config (retrieves the configuration data)
- edit-config --target running --config
You can modify the configuration, but it requires an XML configuration input, for example:
```xml
<hicn-conf xmlns="urn:sysrepo:hicn">
<params>
<enable_disable>false</enable_disable>
<pit_max_size>-1</pit_max_size>
<cs_max_size>-1</cs_max_size>
<cs_reserved_app>-1</cs_reserved_app>
<pit_dflt_lifetime_sec>-1</pit_dflt_lifetime_sec>
<pit_max_lifetime_sec>-1</pit_max_lifetime_sec>
<pit_min_lifetime_sec>-1</pit_min_lifetime_sec>
</params>
</hicn-conf>
```
- user-rpc (calls one of the RPCs proposed by the hicn model; it requires an XML input)
#### Connect from OpenDaylight (ODL) controller
To connect through OpenDaylight, follow this procedure:
- run the karaf distribution (./opendaylight_installation_folder/bin/karaf)
- install the required feature list in ODL (feature:install odl-netconf-server
odl-netconf-connector odl-restconf-all odl-netconf-topology or
odl-netconf-clustered-topology)
- run a REST client program (e.g., Postman or RESTClient)
- mount the remote netopeer2-server in OpenDaylight through the following REST API:
```text
PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/hicn-node
```
with the following body:
```xml
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<node-id>hicn-node</node-id>
<host xmlns="urn:opendaylight:netconf-node-topology">Remote_NETCONF_SERVER_IP</host>
<port xmlns="urn:opendaylight:netconf-node-topology">830</port>
<username xmlns="urn:opendaylight:netconf-node-topology">username</username>
<password xmlns="urn:opendaylight:netconf-node-topology">password</password>
<tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
<keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">1</keepalive-delay>
</node>
```
Note that the request headers must be set to `Content-Type: application/xml` and `Accept: application/xml`.
- send the operation through the following REST API:
```text
POST http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/hicn-node/yang-ext:mount/ietf-netconf:edit-config
```
The body can be the same as the edit-config body used in netopeer2-cli.
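As a sketch, these ODL REST calls can be assembled with Python's standard library. The snippet below only builds (without sending) the mount request, using the example URL above and hypothetical admin/admin credentials:

```python
# Sketch: build (without sending) the ODL mount request for the hicn node.
# URL and admin/admin credentials are example/assumed values; adjust as needed.
import base64
import urllib.request

ODL_URL = ("http://localhost:8181/restconf/config/network-topology:"
           "network-topology/topology/topology-netconf/node/hicn-node")

def build_mount_request(body: bytes, user="admin", password="admin"):
    """Return a PUT request carrying the netconf-node-topology XML body."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(ODL_URL, data=body, method="PUT")
    req.add_header("Content-Type", "application/xml")
    req.add_header("Accept", "application/xml")
    req.add_header("Authorization", "Basic " + token)
    # send with urllib.request.urlopen(req) once ODL is running
    return req
```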
#### Connect from Cisco Network Services Orchestrator (NSO)
To connect NSO to the netopeer2-server, you first need to write a NED package
for your device. The procedure to create the NED for hicn is the following:
place the hicn.yang model in a folder called hicn-yang-model, then follow these steps:
- ncs-make-package --netconf-ned ./hicn-yang-model ./hicn-nso
- cd hicn-nso/src; make
- ncs-setup --ned-package ./hicn-nso --dest ./hicn-nso-project
- cd hicn-nso-project
- ncs
- ncs_cli -C -u admin
- configure
- devices authgroups group authhicn default-map remote-name user_name remote-password password
- devices device hicn address IP_device port 830 authgroup authhicn device-type netconf
- state admin-state unlocked
- commit
- ssh fetch-host-keys
At this point, we are able to connect to the remote device.
## Release note
The current version is compatible with the VPP 20.01 stable release and the sysrepo devel branch.
## Routing plugin for VPP and FRRouting for OSPF6
This document describes how to configure VPP with the hicn_router
plugin and FRR to enable the OSPF protocol. VPP and FRR
are configured in a Dockerfile.
### DPDK configuration on host machine
Install and configure DPDK:
```bash
make install T=x86_64-native-linux-gcc && cd x86_64-native-linux-gcc && sudo make install
modprobe uio
modprobe uio_pci_generic
dpdk-devbind --status
# the PCIe address of the desired device ("xxx") can be read from the output above
sudo dpdk-devbind -b uio_pci_generic "xxx"
```
### VPP configuration
Run and configure VPP (the hICN router plugin must be installed in VPP):
```bash
vpp# set int state TenGigabitEtherneta/0/0 up
vpp# set int ip address TenGigabitEtherneta/0/0 a001::1/24
vpp# create loopback interface
vpp# set interface state loop0 up
vpp# set interface ip address loop0 b001::1/128
vpp# enable tap-inject # creates the tap interfaces via the router plugin
vpp# show tap-inject # shows the created tap interfaces
vpp# ip mroute add ff02::/64 via local Forward # ff02:: is a multicast address
vpp# ip mroute add ff02::/64 via TenGigabitEtherneta/0/0 Accept
vpp# ip mroute add ff02::/64 via loop0 Accept
```
Setup the tap interface:
```bash
ip addr add a001::1/24 dev vpp0
ip addr add b001::1/128 dev vpp1
ip link set dev vpp0 up
ip link set dev vpp1 up
```
### FRR configuration
Install FRR on Ubuntu 18.04 LTS:
<http://docs.frrouting.org/projects/dev-guide/en/latest/building-frr-for-ubuntu1804.html>
Run and configure FRRouting (ospf):
```text
/usr/lib/frr/frrinit.sh start &
vtysh
configure terminal
router ospf6
area 0.0.0.0 range a001::1/24
area 0.0.0.0 range b001::1/128
interface vpp0 area 0.0.0.0
interface vpp1 area 0.0.0.0
end
wr
```
Then add `no ipv6 nd suppress-ra` to the interface configuration section of /etc/frr/frr.conf.
After this configuration, the traffic over the tap interfaces can be observed
with `tcpdump -i vpp1`. The OSPF neighbors and routes can be inspected with the
`show ipv6 ospf6 neighbor` and `show ipv6 ospf6 route` commands.