SixXS::Sunset 2017-06-06

Subnet + openvz + venet
[de] Carmen Sandiego on Saturday, 20 June 2009 09:27:45
Hi, I wanted to ask if anyone already has some experience setting up a SixXS subnet with OpenVZ. The tunnel is working perfectly on my hardware node and can be pinged. Now I want to assign the IPs from the subnet to my virtual nodes. Can anyone help? Thanks
Subnet + openvz + venet
[us] Shadow Hawkins on Saturday, 20 June 2009 21:44:47
I'm running my SixXS tunnel on my OpenVZ hardware node, and both the local ethernet and the virtual hosts are on the SixXS IPv6 subnet. It's been at least 6 months since I set it up, so I may be a little rusty and new development may invalidate some of what I say. If I recall correctly, you have to use veth networking instead of venet for IPv6. You definitely have to use veth to use radvd / stateless autoconfiguration, because venet does not have MAC addresses. (Possibly outdated info.) The main trick is getting the node veth devices on a bridge, and that is described in the "veth" link above. My setup has the tunnel and radvd running on the hardware node, which runs Ubuntu Hardy and the 64-bit OpenVZ kernel. I originally wanted it in a virtual host but couldn't make the tunnel work that way. My subnet is 2001:4978:192::/48, and I am using 2001:4978:192::/64 as my local prefix. Snippets of my /etc/network/interfaces file:
auto vzbr0
iface vzbr0 inet static
    bridge_ports eth0 regex veth*
iface vzbr0 inet6 static
    post-up /sbin/ip -6 route add 2001:4978:192::/48 dev lo || true
The bridge_ports statement automatically adds eth0 and all network interfaces starting with "veth" to the bridge when it is created, but it does not automatically add future "veth" devices to the bridge. The post-up route command null routes my /48. This is desirable to prevent anything addressed to my subnet from being sent back to the internet. (I am only using one /64 out of the /48, but if some stray traffic gets addressed to somewhere else in my /48 I want to kill it here rather than send it back to the internet, where it would just route right back to me.) It does not hurt anything because the more specific live routes override this route. I left out the rest of my configuration because it has some confusing remnants of some experimenting I've done.
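As a side note, if the kernel supports IPv6 blackhole routes, the same null route could presumably be written more explicitly; this is an untested alternative, not what I actually run:

/sbin/ip -6 route add blackhole 2001:4978:192::/48

My /etc/vz/vznet.conf: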
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
My /usr/sbin/vznetaddbr:
#!/bin/bash
# /usr/sbin/vznetaddbr
# a script to add virtual network interfaces (veth's) in a CT to a bridge on CT0

CONFIGFILE=/etc/vz/conf/$VEID.conf
. $CONFIGFILE

VZHOSTIF=`echo $NETIF |sed 's/^.*host_ifname=\(.*\),.*$/\1/g'`

if [ ! -n "$VZHOSTIF" ]; then
    echo "According to $CONFIGFILE CT$VEID has no veth interface configured."
    exit 1
fi

if [ ! -n "$VZHOSTBR" ]; then
    echo "According to $CONFIGFILE CT$VEID has no bridge interface configured."
    exit 1
fi

echo "Adding interface $VZHOSTIF to bridge $VZHOSTBR on CT0 for CT$VEID"
/sbin/ifconfig $VZHOSTIF 0
echo 1 > /proc/sys/net/ipv4/conf/$VZHOSTIF/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/$VZHOSTIF/forwarding
/usr/sbin/brctl addif $VZHOSTBR $VZHOSTIF

exit 0
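To test the bridge hook by hand for a running container (assuming, for example, that CT 181 is started and its veth device already exists), something like this should work, since the script only needs VEID in its environment:

sudo env VEID=181 /usr/sbin/vznetaddbr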
I had to change the shebang lines to use /bin/bash instead of /bin/sh. I was getting an error, and it turned out that Ubuntu's default /bin/sh didn't have all the features needed by the vznetaddbr script. The above scripts add a node with a veth interface to the bridge if the node conf file has parameters similar to the following:
NETIF="ifname=veth181,host_ifname=veth181.181,host_mac=02:42:76:0F:62:FB" CONFIG_CUSTOMIZED="yes" VZHOSTBR="vzbr0"
I used some program to randomly generate a MAC for each node (one rough way to do that is sketched below).
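For example, a short bash snippet along these lines (an illustration, not the exact program I used) prints a random locally-administered, unicast MAC suitable for the host_mac field:

# first octet 02 marks the address as locally administered and unicast
printf '02:%02X:%02X:%02X:%02X:%02X\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))

My radvd.conf: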
interface vzbr0
{
    AdvSendAdvert on;
    # My SixXS subnet!
    prefix 2001:4978:192::/64
    {
    };
};
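The empty prefix block just relies on radvd's defaults; as far as I recall, spelling them out would look like this (equivalent, only more verbose):

interface vzbr0
{
    AdvSendAdvert on;
    prefix 2001:4978:192::/64
    {
        AdvOnLink on;       # default: prefix may be used for on-link determination
        AdvAutonomous on;   # default: hosts may autoconfigure addresses from it
    };
};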
Relevant portions of my /etc/sysctl.conf file:
# Uncomment the next line to enable packet forwarding for IPv6
#### this appears to be invalid ###
#net.ipv6.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1

#-- OpenVZ begin --#
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.eth0.proxy_arp = 1
net.ipv4.conf.vzbr0.forwarding=1
net.ipv6.conf.vzbr0.autoconf=0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# TCP Explict Congestion Notification
#net.ipv4.tcp_ecn = 0

# we do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
#-- OpenVZ end --#
Note that proxy arp is on for eth0, not vzbr0. VZ may complain about proxy arp not being enabled, but this works. You need proxy arp on the physical interface, not the bridge.
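If you change this on a running system, the setting can also be applied immediately instead of waiting for sysctl.conf to be reloaded at boot, e.g.:

sudo sysctl -w net.ipv4.conf.eth0.proxy_arp=1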
Subnet + openvz + venet
[de] Carmen Sandiego on Sunday, 21 June 2009 09:45:23
Hi, thanks for your quick reply. I tried your very nice howto; however, I've got a small problem. I wanted to start radvd and it complains about vzbr0 not being up. So I did an "ifup vzbr0" and it tells me:
Don't seem to have all the variables for vzbr0/inet. Failed to bring up vzbr0.
Did I misunderstand something in the configuration?
Subnet + openvz + venet
[de] Carmen Sandiego on Sunday, 21 June 2009 10:15:48
OK, forget what I said. I just got it up by using "ifconfig vzbr0 0". radvd already works and the virtual ethernet device got an IPv6 address assigned. Now my problem is that the vnode can't communicate with the outside via IPv6, nor can anything communicate with the vnode.
Subnet + openvz + venet
[us] Shadow Hawkins on Monday, 29 June 2009 00:37:13
For practice and verification I tried adding a new VE to my setup and can't get the veth device to autoconfigure an IPv6 address even though I have another VE set up where this is working fine. The working one is based on the debian-4.0-i386-minimal template from OpenVZ and the nonworking one is based on an Ubuntu amd64 template I made, but I have yet to figure out what the problem is there. It is able to handle DHCPv4, so I know the veth device itself is working. On the other hand I started one of my idle VEs up and tried 'sudo vzctl set 235 --ipadd "2001:4978:192::235" --save', and what do you know? It works just fine with a statically assigned IPv6 address for venet. I didn't even have to specify a default route but can ping6 www.sixxs.net just fine. It won't autoconfigure via venet, though...something about broadcasts. On the other hand, you have to specify a MAC when creating a veth device, so I'm suddenly not clear on why I wanted to make autoconfigure work for the VE's in the first place. I'm curious about why your bridge wasn't up...it sounds as if it weren't set up in the first place, or perhaps you left out "auto vzbr0" in your interfaces file. What does "sudo brctl show" give you? Here is what mine looks like:
bridge name     bridge id               STP enabled     interfaces
vzbr0           8000.001851a2fdb3       no              eth0
                                                        veth101.0
                                                        veth181.181
                                                        veth300.0
eth0 is my physical LAN interface, and the others are 3 veth devices for three different VE's. Note that my tunnels and venet devices aren't there; the bridge just puts the veth devices on the same broadcast domain as the physical device.
Subnet + openvz + venet
[us] Shadow Hawkins on Monday, 29 June 2009 05:16:19
Well, now that I'm toying around with OpenVZ and IPv6 again, I'm rethinking my setup. It seems that veth networking and the host bridge may not be worth the hassle, and it opens up the possibility of an in-VE misconfiguration causing problems elsewhere on the network. What you gain by using veth is a full network connection (with broadcast ability), if you can get it working right. I think I chose veth networking at first because I wanted to run the tunnel and radvd in a VE, but I never got that to work. And I don't really see an advantage to using stateless autoconfiguration on my VE's, so my original reasons for wanting veth are now moot. At the moment I have a mix of veth and venet devices, all on the same /64 with other physical hosts. I can get the venet VE's on IPv6, but they have trouble speaking to the other physical hosts when they are on the same /64. I expect this has to do with proxy_ndp and suspect my mix of veth, venet and the bridge is complicating the issue. But if I assign a different /64 to the venet VE's, they can speak to everything. So my new plan is to ditch the bridge and veth networking (along with /etc/vz/vznet.conf and /usr/sbin/vznetaddbr), put radvd on eth0 for the physical LAN and use a different /64 prefix to assign static addresses via venet for my VE's. Having the VE's on other prefixes can simplify firewall rules (e.g. a prefix for a DMZ, a prefix for local-access only, etc.). Since my tunnel endpoint is on the OpenVZ host, the routing will be handled with no special configuration on my part. In your case, do you have a physical LAN on which you want to use IPv6, or is everything on the OpenVZ host?
Subnet + openvz + venet
[us] Shadow Hawkins on Monday, 29 June 2009 08:54:29
OK, I made the changes mentioned above and have it working, although if I assign an IPv6 address to a VE in the same prefix as the physical LAN, it won't talk to the physical LAN. There is probably a fix for that, but I decided to put the VE's on a separate prefix anyway. (The v4 addresses work fine even though they are on the same subnet as the physical v4 network. Go figure.) My plan:
- OpenVZ host runs aiccu for the SixXS tunnel
- radvd runs on eth0 for the physical LAN, 2001:4978:192::/64
- v6 public VE's will use 2001:4978:192:1::/64
- v6 private VE's will use 2001:4978:192:2::/64 and be firewalled
- no bridge (vzbr0) or veth networking, just venet
My aiccu.conf is unchanged. My radvd.conf differs only in that vzbr0 is replaced by eth0. The vzbr0 interface is commented out of my interfaces file and not used. Here is the relevant part of my interfaces now:
auto eth0
iface eth0 inet6 static
    address 2001:4978:192::XXX
    netmask 64
    # null route my /48 to prevent nonassigned prefixes from routing back over the tunnel
    post-up /sbin/ip -6 route add 2001:4978:192::/48 dev lo || true
No v6 gateway is needed in the interfaces file because aiccu sets up the tunnel gateway and OpenVZ adds routes for its VE's. /etc/vz/vznet.conf and /usr/sbin/vznetaddbr aren't needed now; they were just for adding the veth devices to the bridge, and I got rid of all that. The NETIF, CONFIG_CUSTOMIZED and VZHOSTBR lines were removed or commented out in all /etc/vz/conf/<veid>.conf files. Relevant portions of sysctl.conf are now a little different:
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
#### this appears to be invalid ###
#net.ipv6.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1

#-- OpenVZ begin --#
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# TCP Explict Congestion Notification
#net.ipv4.tcp_ecn = 0

# we do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
#-- OpenVZ end --#
Then, to add the venet addresses:
sudo vzctl set 181 --ipadd 2001:4978:192:2::181 --save
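A quick way to check connectivity from inside a VE without logging into it (assuming ping6 is installed in the container) is something like:

sudo vzctl exec 181 ping6 -c 3 www.sixxs.net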
Well, I just found a problem. The debian 4.0 i386 minimal template VE's are working fine, but again my homemade Ubuntu amd64 template is not. I just compared the output of sysctl -A of a working VE and a nonworking VE and can't see a significant difference. Another mystery to solve, but I'm pretty sure this one is inside the VE as I have a few other VE's working properly on venet IPv6 now.
Subnet + openvz + venet
[us] Shadow Hawkins on Tuesday, 30 June 2009 03:54:40
My venet IPv6 problem seems to be with the ubuntu-8.04-amd64-minimal.tar.gz template (and my modified version of it). I don't know what the cause is, but a default route is not being created inside the VE. All I had to do was run this inside the VE:
ip -6 route add ::/0 dev venet0
And now my Ubuntu VE can talk to the world, at least until the next VE restart.
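I haven't tried it yet, but one way to make that survive restarts would presumably be to add the route at boot inside the VE, e.g. with an /etc/rc.local along these lines (assuming the template actually runs rc.local at startup):

#!/bin/sh
# /etc/rc.local inside the VE: restore the missing IPv6 default route on boot
ip -6 route add ::/0 dev venet0 || true
exit 0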
Subnet + openvz + venet
[us] Shadow Hawkins on Saturday, 04 July 2009 00:12:05
I created a page on the wiki for IPv6 on OpenVZ. Judging from my replies above I had a bunch to say about it.
Subnet + openvz + venet
[fi] Shadow Hawkins on Saturday, 14 June 2014 20:05:45
Jim Nelson wrote:
I created a page on the wiki for IPv6 on OpenVZ. Judging from my replies above I had a bunch to say about it.
Hi! First, thank you for all this information; it has helped me a lot doing much the same thing. I have a somewhat different system, though:
- Desktop PC connected to IPv6 through a /64 IPv6 subnet from SixXS
- radvd installed on the PC; all devices in my network successfully get an IPv6 address, including a Windows laptop and VirtualBox servers (but not the VirtualBox machines on the laptop, because WLAN cannot use promiscuous mode, and there is no way to fix that).
I decided to test OpenVZ virtualization installed in a VirtualBox. I wanted all the VPS's to have an IPv6 connection to the internet (I only have one IPv4 address). So the difference is that OpenVZ runs inside a VirtualBox, not on the same machine where I have aiccu installed. For that I needed a /48 subnet, which I (thanks to SixXS) got. First I just gave the VPS's IPv6 addresses from a /64 taken from the /48. That gave the VPS's a connection to the VirtualBox OpenVZ host, but not to the internet. All I needed to do (in addition to all that sysctl magic) was add a static route from the desktop PC (which has aiccu and radvd) to the VirtualBox OpenVZ host. So the configuration is now:
- the original /64 subnet: 2001:14b8:100:8xxx::2/64
- new /48 subnet: 2001:14b8:xxx::/48
- Desktop PC (original /64 subnet, endpoint 2001:14b8:100:xxx::2) + aiccu + radvd
- Home network: everything gets an address from the radvd server, inside the original /64 network (2001:14b8:100:8xxx:xxxx:xxxx:xxxx:xxxx)
- VirtualBox OpenVZ server: gets an address from the original IPv6 /64 subnet, just like the other nodes on the home network.
In addition, the desktop PC routes one /64 subnet from the new /48 address space to the VirtualBox. I did this by uncommenting this line in aiccu.conf:

# Script to run after setting up the interfaces (default: none)
setupscript /usr/local/etc/aiccu-subnets.sh

and adding this to aiccu-subnets.sh:

/bin/ip -6 route add 2001:14b8:xxx:100::/64 via 2001:14b8:100:8xxx:yyyy:zzzz:aaaa:bbbb proto kernel

This IPv6 address is the one it got from radvd (should I use a static one for this?). After that I just give every new OpenVZ VPS an address from the subnet 2001:14b8:xxx:100::/64:

sudo vzctl set 101 --ipadd 2001:14b8:xxx:100::101 --save

The VPS's are using venet, not veth. And voila, they are on the internet with unique IPv6 addresses! But the question is: did I do it the right way, or should I have done something differently?


Static Sunset Edition of SixXS
©2001-2017 SixXS - IPv6 Deployment & Tunnel Broker