tag:forum.sixxs.net,2001:atom
SixXS Forum - setup
SixXS Forum (ATOM 1.0)
2017-06-05T18:00:00-00:00
SixXS Forum info@sixxs.net

How can I see raw AYIYA traffic at the IPv4 layer?
tag:forum.sixxs.net,2017-05-09:setup.16027557.16057225 2017-05-09T12:05:30-00:00 2017-05-09T12:05:30-00:00
Pim Zandbergen PZ1453-RIPE@whois.sixxs.net

For the record: it appears all the problems I was seeing with 6in4 tunnels are related to protocol 41 packets being throttled (Ziggo NL) or dropped (Vodafone NL).

My solution was to get myself a VPS, use that as the endpoint for the 6in4 tunnel, and tunnel the routed /48 subnet to home and work using WireGuard. WireGuard uses UDP, supports NAT and dynamic endpoints just like AYIYA, and as a bonus encrypts the traffic. That works just fine: fast and reliable. This will help me while waiting for native IPv6, hopefully before the end of this year.

How can I see raw AYIYA traffic at the IPv4 layer?
tag:forum.sixxs.net,2017-04-20:setup.16027557.16027565.16029193.16029201.16029217.16029221 2017-04-20T13:04:11-00:00 2017-04-20T13:04:11-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">Oh but I have, extensively. I have negotiated native IPv6 within a year, or break an otherwise three-year contract. I/they just need some more time. I need to give them (FTB Nederland) a break; they are a new small ISP that took over a fiber network previously run by Vodafone NL, which they abandoned as unprofitable. FTB are about to invest more in IPv6 than they will ever earn from my contract. And they are actively helping me to find the cause of the current tunneling issues.</div>

Not too unreasonable to give them a wee bit of time in that case. Their shortest path: set up a 6rd box and voila, all customers that want it can do IPv6. Then, the longer path: go native IPv6. Though as it is "fiber", it is very likely Ethernet based and thus enabling IPv6 should not be too complicated.

<div class="quote">Downloading 100GB over HTTP (wget to apache) stalls at around 30 GB.</div>
<div class="quote">rsync over ssh stalls almost immediately.</div>

A "stall" definitely sounds like a Path MTU issue. Though if TCP loses packets, similar weird effects can happen too.

<div class="quote">... consumer cable network (300/30 mbit).</div>

Note that those are MAXimum speeds; they are not what you will always get, especially at peak time.

<div class="quote">With 6in4 it is slow to anywhere.</div>

Ziggo/LibertyGlobal are known to do QoS-style packet prioritization. They have never admitted it publicly, but it is seen all over the place. Also, LibertyGlobal has a nasty peering policy, which means that the transit port you are going over might just be full and they are not bothering to upgrade it, instead letting the remote ISP pay for buying transit from them. That is what you get with monopoly companies, unfortunately.

<div class="quote">5 bol.macroscoop.nl (80.69.71.122) 22.208 ms !X 24.885 ms !X 24.623 ms !X</div>

Note that traceroutes are one-way; you do not see the return path with them, which might take a completely different route. But as there is an !X there, it also shows that some kind of packet filtering is happening, which can also cause problems for tunnels.

<div class="quote">We both are suspecting a change in the Vodafone infrastructure between FTB and AMS-IX.</div>

AMS-IX only provides a switch. But those ports on the switch might be 'full'. That can happen on both sides of the link though.
That said, IXes should not be used for transit; that is what private peering is for.

<div class="quote">I particularly suspect the hop with the private IPv4 address. Would that hop be able to send an ICMP message?</div>

Obviously it is sending an ICMP message (most traceroute implementations use that method) or another packet type. Nevertheless, that packet is being sourced from an RFC1918 address and being returned without issues to your host. RFC1918 should never exist on the public Internet. Which shows that the networks you are on are susceptible to spoofing. Time to teach your ISP some <a href="https://www.routingmanifesto.org/manrs/">MANRS</a>!

Note that that indeed can cause all kinds of weird issues, as packets originating from such a host will be dropped by properly configured networks.

<div class="quote">it would be sufficient that _my_ endpoints handle ICMPv6 PTB.</div>

No. Because that is just the first hop of a traceroute. All nodes between the source and destination need to properly support ICMPv6 PTB, and the rest of the IPv6 Node Requirements (there are quite a few more). But you are forgetting that if a too-large IPv4 packet is sent, the IPv4 network might silently drop that packet too. IPv4 might fragment, but maybe the non-compliant node does not do that properly either; such is the fun with unknown nodes.

How can I see raw AYIYA traffic at the IPv4 layer?
tag:forum.sixxs.net,2017-04-20:setup.16027557.16027565.16029193.16029201 2017-04-20T11:04:46-00:00 2017-04-20T11:04:46-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">In case you have not guessed, this is about migrating tunnels away from SixXS to other tunnelbrokers.</div>

Instead of asking your ISP for Native IPv6.... or moving to an ISP that does provide what you actually want.

<div class="quote">... suddenly is showing stalled transfers when loads are high</div>

You state "load"; what do you mean by "load"?

<div class="quote">Another tunnel that used to run AYIYA to SixXS now runs at 10% of the IPv4 speed using 6in4 to elsewhere.</div>

From where to where? See the <a href="https://www.sixxs.net/faq/connectivity/?faq=slow">FAQ: The tunnel is slow</a>, which mostly also applies to any other tunnel in the world...

<div class="quote">I can replicate the problems I see over a 6in4 tunnel to a colocation Linux server under my control</div>

That only partially excludes the common path, but it might just be an indicator. Remember that your source node is still the same. There are CPEs which have issues with non-TCP/UDP packets, for instance.

<div class="quote">In the case of the "stalled" network, I guess AYIYA is less prone to MTU and fragmentation issues</div>

AYIYA does nothing magical for fragmentation. Just a correctly configured MTU and properly configured endpoints that do proper ICMPv6 PTB sending, nothing else. You will "just" (as it is far from that easy) need to verify that your setup is correct and that the nodes you are talking to properly handle ICMPv6 PTB.

<div class="quote">The (business oriented) ISP for the "stalled" network has promised me native IPv6 within a year, but not before 6-6-2017.</div>

Did you ask them what they have been doing for the last 20 years? IPv6 is very old by now....

<div class="quote">The ISP for the "slow" network offers me either DS Lite or an expensive upgrade from a consumer to a business subscription. I guess I will have to opt for one of them.
Or prove and complain that they are not net-neutral.</div>

Sounds like the standard monopoly that is plaguing most of Europe; better to complain to your government about unfair business practices.

How can I see raw AYIYA traffic at the IPv4 layer?
tag:forum.sixxs.net,2017-04-20:setup.16027557.16027565.16029193 2017-04-20T10:04:27-00:00 2017-04-20T10:04:27-00:00
Pim Zandbergen PZ1453-RIPE@whois.sixxs.net

Jeroen Massar wrote:
<div class="quote">Dump the correct interface: the IPv4 interface, not the tunnel. [...] Wireshark has an AYIYA dissector, and thus will show you Ethernet -> IPv4 -> UDP -> AYIYA -> IPv6.</div>

Wireshark always dissects AYIYA and IPv6, regardless of the interface. Searching for "disable dissector" helped: I needed to disable IPv6 and/or AYIYA in "enabled protocols".

<div class="quote">"slowness, stalled traffic" can be caused by many many factors. Though, most very likely, due to the latter "stalled" part: you are having MTU issues. See the FAQ for more details.</div>

In case you have not guessed, this is about migrating tunnels away from SixXS to other tunnelbrokers. One tunnel that used to work fine as a static 6in4 tunnel, first at SixXS, later elsewhere (also fine), suddenly is showing stalled transfers when loads are high. Another tunnel that used to run AYIYA to SixXS now runs at 10% of the IPv4 speed using 6in4 to elsewhere.

For both networks, I can replicate the problems I see over a 6in4 tunnel to a colocation Linux server under my control, using private IPv6 tunnel addresses. So the other tunnelbroker is not to blame. But notably, for both these IPv4 networks, an AYIYA tunnel works just fine, fast and reliable. In the case of the "stalled" network, I guess AYIYA is less prone to MTU and fragmentation issues. For the "slow" network, it could be that the ISP is throttling proto 41 but not AYIYA's UDP.

The (business oriented) ISP for the "stalled" network has promised me native IPv6 within a year, but not before 6-6-2017. They are willing to help me with my current tunneling issues, but do not appear to be able to find the cause. Tracepath and ping tests (with DF bit and large packet sizes) do not show problems. I will need to dig deep.

The ISP for the "slow" network offers me either DS Lite or an expensive upgrade from a consumer to a business subscription. I guess I will have to opt for one of them. Or prove and complain that they are not net-neutral.

How can I see raw AYIYA traffic at the IPv4 layer?
tag:forum.sixxs.net,2017-04-19:setup.16027557.16027565 2017-04-19T16:04:18-00:00 2017-04-19T16:04:18-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">Wireshark always seems to show the IPv6 payload.</div>

Dump the correct interface: the IPv4 interface, not the tunnel.

<div class="quote">I'd like to see exactly how AYIYA is encapsulating the payload, so I may figure out why I am seeing problems with 6in4 (slowness, stalled traffic) not present with AYIYA.</div>

Wireshark has an AYIYA dissector, and thus will show you Ethernet -> IPv4 -> UDP -> AYIYA -> IPv6.

"slowness, stalled traffic" can be caused by many, many factors. Though, most very likely, due to the latter "stalled" part: you are having MTU issues. See the FAQ for more details.

How can I see raw AYIYA traffic at the IPv4 layer?
tag:forum.sixxs.net,2017-04-19:setup.16027557 2017-04-19T16:04:22-00:00 2017-04-19T16:04:22-00:00
Pim Zandbergen PZ1453-RIPE@whois.sixxs.net

How can I use Wireshark or other tools to see what AYIYA is doing over my IPv4 connection? Wireshark always seems to show the IPv6 payload. I'd like to see exactly how AYIYA is encapsulating the payload, so I may figure out why I am seeing problems with 6in4 (slowness, stalled traffic) not present with AYIYA.

Debugging MTU on "native" 6rd / OpenWRT [Sunset]
tag:forum.sixxs.net,2017-04-19:setup.16027437.16027477 2017-04-19T09:04:48-00:00 2017-04-19T09:04:48-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">Since aiccu handled MTU issues rather well, there hasn't really been any need for me to read ipv6 tcpdumps concerning MTU issues.</div>

aiccu has no special handling of MTU. It just configures the default of 1280 (or what is configured in the web interface) on both sides of the tunnel. The PoP properly sends back ICMPv6 Packet Too Big if an inbound packet does not fit, and so does the client end. That is thus simply a case of a properly configured network and a stack that properly sends out and handles ICMPv6 PTBs.

<div class="quote">Any hints on how to interpret the following dump?</div>
<div class="quote">AFAIK ayiya and pinger1 should handle things correctly.</div>
<div class="quote">Earlier the same openwrt box used aiccu with static tunnel / mtu 1480 without issues,</div>
<div class="quote">but with 6rd it appears to get that default/minimum mtu from somewhere?</div>

It seems you might be configuring multiple address spaces on a host with multiple default routes etc. Do note that you need to have proper source selection for that to work properly.

You need to ask the operator of the tunneling service what their MTU is configured to. There is no guessing there. Note though that the MTU here is only the link MTU. The moment you go a hop further, the MTU might be bigger or smaller (if not the minimum of 1280 already).

Also note that there are a bunch of "hosts", especially CDNs like the ones used by Google, that are actually load-balanced IPs which do not properly handle ICMPv6 Packet Too Big and fake things (Google for instance just fakes the TCP MSS, which indeed means that large packets in UDP do not work; that is fun with QUIC, which is UDP based, and on top of that QUIC does not support non-1500 MTU sizes.... great "protocol"; yes, they have been made aware, no, they do not care...). Hence, when you are testing, test against known hosts; otherwise you might be debugging the unknown, which is not useful.

Also read up on <a href="https://blog.cloudflare.com/path-mtu-discovery-in-practice/">CloudFlare's Path MTU discovery in practice</a> for some more details.

Debugging MTU on "native" 6rd / OpenWRT [Sunset]
tag:forum.sixxs.net,2017-04-19:setup.16027437 2017-04-19T07:04:52-00:00 2017-04-19T07:04:52-00:00
Tuomas Heino TH28-6BONE@whois.sixxs.net

Since aiccu handled MTU issues rather well, there hasn't really been any need for me to read ipv6 tcpdumps concerning MTU issues. Any hints on how to interpret the following dump? AFAIK ayiya and pinger1 should handle things correctly. Earlier the same openwrt box used aiccu with static tunnel / mtu 1480 without issues, but with 6rd it appears to get that default/minimum mtu from somewhere?
<code>openwrt  2001:db8:f:b00::1/56, 2001:db8:f:bc0::1/64 6rd
landest  2001:db8:f:bc0::33/64
ayiya    2001:db8:5:71::1/48
pinger1  2001:db8:5:59::1/64, 2001:db8:5:71::21/64

$ ping6 -c 1 -M do -s 1380 2001:db8:f:bc0::33
PING 2001:db8:f:bc0::33(2001:db8:f:bc0::33) 1380 data bytes
From 2001:db8:f:b00::1 icmp_seq=1 Packet too big: mtu=1280

--- 2001:db8:f:bc0::33 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

$ ping6 -M do -s 1380 2001:db8:5:71::1
PING 2001:db8:5:71::1(2001:db8:5:71::1) 1380 data bytes
1388 bytes from 2001:db8:5:71::1: icmp_seq=2 ttl=54 time=176 ms
1388 bytes from 2001:db8:5:71::1: icmp_seq=3 ttl=54 time=164 ms
^C
--- 2001:db8:185:5d71::1 ping statistics ---
3 packets transmitted, 2 received, 33% packet loss, time 2008ms
rtt min/avg/max/mdev = 164.182/170.336/176.490/6.154 ms

root@OpenWrt:~# tcpdump -nettti 6rd-rd6 icmp6
tcpdump: WARNING: 6rd-rd6: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on 6rd-rd6, link-type RAW (Raw IP), capture size 65535 bytes
00:00:00.000000 ip: 2001:db8:5:59::1 > 2001:db8:f:bc0::33: ICMP6, echo request, seq 1, length 1388
00:00:48.433027 ip: 2001:db8:f:bc0::33 > 2001:db8:5:71::1: ICMP6, echo request, seq 1, length 1388
00:00:00.252362 ip: 2001:db8:5:71::1 > 2001:db8:f:bc0::33: ICMP6, echo reply, seq 1, length 1388
00:00:00.000247 ip: 2001:db8:f:b00::1 > 2001:db8:5:71::1: ICMP6, packet too big, mtu 1280, length 1240
00:00:00.755739 ip: 2001:db8:f:bc0::33 > 2001:db8:5:71::1: ICMP6, echo request, seq 2, length 1388
00:00:00.174998 ip: 2001:db8:5:71::1 > 2001:db8:f:bc0::33: frag (0|1232) ICMP6, echo reply, seq 2, length 1232
00:00:00.001118 ip: 2001:db8:5:71::1 > 2001:db8:f:bc0::33: frag (1232|156)
00:00:00.824260 ip: 2001:db8:f:bc0::33 > 2001:db8:5:71::1: ICMP6, echo request, seq 3, length 1388
00:00:00.162697 ip: 2001:db8:5:71::1 > 2001:db8:f:bc0::33: frag (0|1232) ICMP6, echo reply, seq 3, length 1232
00:00:00.001141 ip: 2001:db8:5:71::1 > 2001:db8:f:bc0::33: frag (1232|156)
</code>

Multiple IPv6 static addresses on one interface?
tag:forum.sixxs.net,2017-02-23:setup.15922681.15922693 2017-02-23T17:02:42-00:00 2017-02-23T17:02:42-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

You might want to read <a href="https://wiki.debian.org/NetworkConfiguration">Debian interfaces</a>; see the bottom section for the details. In summary, just add multiple 'interface' sections with static addresses and skip the gateway definition. Typically defining them as /128 is a good idea btw, even though the real interface is a /64.

Note that you will run into a lot of fun with outbound addresses, as a more or less random one is chosen... (typically it depends on the order in which they were added and/or on the deprecate flag, if the code is good).

Multiple IPv6 static addresses on one interface?
tag:forum.sixxs.net,2017-02-23:setup.15922681 2017-02-23T16:02:39-00:00 2017-02-23T16:02:39-00:00
Andre-John Mas AME4-SIXXS@whois.sixxs.net

On a Linux-based host, how can I assign multiple IPv6 addresses to one interface? I had tried adding two 'address' lines under an interface in the /etc/network/interfaces file, but that did not seem to work. Any ideas?

Find where packet is dropped
tag:forum.sixxs.net,2017-02-16:setup.15908505.15908513 2017-02-16T10:02:27-00:00 2017-02-16T10:02:27-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">When I route to the 4G, nothing comes back</div>

Quite likely they filter certain ports. Call Your ISP...
Or try one of the many "Am I being filtered" kind of websites that are out there.

Find where packet is dropped
tag:forum.sixxs.net,2017-02-16:setup.15908505 2017-02-16T10:02:49-00:00 2017-02-16T10:02:49-00:00
Leif Neland LNR4-SIXXS@whois.sixxs.net

<b>Is my router or my ISP dropping packets to/from the PoP?</b>

I have a 4G router and an ADSL router, the latter only 1 Mbit/s.

When I route to the PoP over ADSL, the connection works fine:
<code>10:36:53.881855 IP 192.168.1.254.44928 > dkcph01.sixxs.net.5072: UDP, length 104
10:36:53.927412 IP dkcph01.sixxs.net.5072 > 192.168.1.254.44928: UDP, length 104

traceroute to dkcph01.sixxs.net (93.158.77.42), 30 hops max, 60 byte packets
 1  192.168.1.3 (192.168.1.3)  17.898 ms  17.604 ms  17.264 ms   <- MY ADSL ROUTER
 2  xe-2-1-1-1103.ronnqe10.dk.ip.tdc.net (87.58.0.130)  30.039 ms  34.716 ms  42.157 ms
 3  xe-1-0-0-0.bgt-peer1.mmx.se.ip.tdc.net (88.131.143.115)  49.252 ms  55.641 ms  60.560 ms
 4  netnod-ix-ge-b-mmo-1500.ip-only.net (195.69.117.92)  68.463 ms  73.828 ms  79.806 ms
 5  83.145.2.46 (83.145.2.46)  119.284 ms  113.858 ms  114.203 ms
 6  212.112.188.66 (212.112.188.66)  103.168 ms  46.887 ms  31.330 ms
 7  83.140.255.70 (83.140.255.70)  44.473 ms  50.255 ms  *
 8  dkcph01.sixxs.net (93.158.77.42)  56.382 ms  62.972 ms  68.178 ms
</code>

When I route over the 4G, nothing comes back:
<code>10:41:54.166050 IP 192.168.1.254.44928 > dkcph01.sixxs.net.5072: UDP, length 105
10:41:57.861203 IP 192.168.1.254.44928 > dkcph01.sixxs.net.5072: UDP, length 105
10:41:58.443125 IP 192.168.1.254.44928 > dkcph01.sixxs.net.5072: UDP, length 105

traceroute to dkcph01.sixxs.net (93.158.77.42), 30 hops max, 60 byte packets
 1  192.168.1.1 (192.168.1.1)  1.431 ms  0.832 ms  1.402 ms   <- MY 4G ROUTER
 2  192.168.225.1 (192.168.225.1)  1.099 ms  1.391 ms  1.778 ms
 3  62.44.164.245 (62.44.164.245)  31.055 ms  31.322 ms  31.740 ms
 4  62.44.166.184 (62.44.166.184)  30.979 ms  30.626 ms  30.724 ms
 5  212.97.200.65 (212.97.200.65)  31.132 ms  31.472 ms  30.727 ms
 6  kbn-b3-link.telia.net (80.239.132.1)  33.178 ms  19.937 ms  20.330 ms
 7  iponly-ic-319310-kbn-b3.c.telia.net (62.115.151.47)  24.211 ms  20.568 ms  21.012 ms
 8  213.80.86.252 (213.80.86.252)  33.367 ms  21.022 ms  31.120 ms
 9  83.145.2.46 (83.145.2.46)  55.067 ms  58.371 ms  57.875 ms
10  212.112.188.66 (212.112.188.66)  31.118 ms  31.405 ms  32.348 ms
11  83.140.255.70 (83.140.255.70)  34.017 ms  33.506 ms  32.858 ms
12  dkcph01.sixxs.net (93.158.77.42)  28.164 ms  30.134 ms  29.727 ms
</code>

Where do I find what is dropping the packets: my router or my 4G provider, Telia.dk? (<i>Which does not yet run IPv6 on 4G; I have asked repeatedly.</i>) The router is a TP-Link Archer MR200.

I can ping the PoP nicely:
<code>10:47:19.760772 IP 192.168.1.254 > dkcph01.sixxs.net: ICMP echo request, id 8775, seq 1, length 64
10:47:19.787280 IP dkcph01.sixxs.net > 192.168.1.254: ICMP echo reply, id 8775, seq 1, length 64
</code>

Can the tunnel be set to use other ports or protocols?

I have a VPS (at OVH France, I think) with true IPv6 connectivity that I could use to debug the connection, if I knew what to look for. I have tried to get a /64 routed to the VPS, so I could VPN this network to my home network, but this is not yet possible. Is it possible to NAT all my machines at home through a single IPv6 address at the VPS? I'd rather not, as it defeats the purpose of having some home automation Raspberry Pis reachable from outside over separate IPv6 addresses.
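Since a VPS is already available, one way to narrow down where the UDP traffic dies is to reproduce the PoP traffic pattern toward a machine where both ends can be observed. This is only a sketch; the VPS address 203.0.113.10 is a placeholder, the angle-bracket value is whatever public address the capture shows, and the ports simply mirror the 44928/5072 pair seen in the dumps above:

<code># On the VPS: watch whether the test packets arrive at all
tcpdump -ni eth0 udp port 5072

# At home, behind the 4G router: send a small UDP probe from the same source port the tunnel uses
echo probe | nc -u -p 44928 -w 1 203.0.113.10 5072

# On the VPS, answer while the NAT mapping is still fresh, to test the return direction
echo reply | nc -u -p 5072 -w 1 <public-4G-address-seen-in-the-capture> 44928
</code>

If the probe never shows up on the VPS, the drop is on the outbound 4G path (or in the MR200); if it arrives but the reply never makes it back home, the return path or the NAT mapping is the problem.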
ping6 - Time exceeded: Hop limit
tag:forum.sixxs.net,2017-02-15:setup.15904569.15904573.15906505.15906561 2017-02-15T06:02:13-00:00 2017-02-15T06:02:13-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">Playing around indicates the issue only happens if wlan0 is up.</div>

Check 'ip -6 ro show'; likely you will see that your wireless is bridged to your Ethernet. Hence eth0 on your device announces connectivity to wlan0 through your Access Point, which bridges Ethernet and Wireless. Thus, bringing up eth0 and wlan0 at the same time is a bad idea in this configuration, unless you apply a lot of extra configuration.

<div class="quote">While only impacting radvd and not the base connectivity I also noticed that 2001:4978:15d:feed::1 wasn't being assigned and instead I needed to do this manually:</div>
<div class="quote"><code>sudo /sbin/ip -6 addr add 2001:4978:15d:feed::1/64 dev eth0</code></div>

That is because forwarding is enabled on the interface; you should put that address in /etc/network/interfaces. Also, you might be missing an 'auto eth0' line there. Instead of 'pre-up modprobe ipv6', put that module in /etc/modules and it will load way before your networking comes up. That might resolve it too. Running a Debian based image will resolve all that, as they have had IPv6 built in by default for a long, long time already.

ping6 - Time exceeded: Hop limit
tag:forum.sixxs.net,2017-02-15:setup.15904569.15904573.15906505 2017-02-15T04:02:23-00:00 2017-02-15T04:02:23-00:00
Andre-John Mas AME4-SIXXS@whois.sixxs.net

The Raspberry Pi is a model 3 and both eth0 and wlan0 are connected to the local network. Output of uname:
<code>Linux balsa 4.4.34-v7+ #930 SMP Wed Nov 23 15:20:41 GMT 2016 armv7l GNU/Linux</code>

Playing around indicates the issue only happens if wlan0 is up.
When both interfaces are up:
<code>eth0      Link encap:Ethernet  HWaddr b8:27:eb:40:7e:e5
          inet addr:192.168.2.28  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: 2001:4978:15d:feed::1/64 Scope:Global
          inet6 addr: fe80::57d6:1d7:8c17:6b2b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:65719 errors:0 dropped:1 overruns:0 frame:0
          TX packets:46200 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:50720760 (48.3 MiB)  TX bytes:45893098 (43.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:204 errors:0 dropped:0 overruns:0 frame:0
          TX packets:204 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:17264 (16.8 KiB)  TX bytes:17264 (16.8 KiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

sixxs     Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::4878:f:48:2/64 Scope:Link
          inet6 addr: 2001:4978:f:48::2/64 Scope:Global
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1280  Metric:1
          RX packets:34810 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10494 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:42924095 (40.9 MiB)  TX bytes:999427 (976.0 KiB)

wlan0     Link encap:Ethernet  HWaddr b8:27:eb:15:2b:b0
          inet addr:192.168.2.29  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::e659:fc06:d4cd:230a/64 Scope:Link
          inet6 addr: 2001:4978:15d:feed:d3b9:b0d3:9598:655/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:631 errors:0 dropped:313 overruns:0 frame:0
          TX packets:18082 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:128977 (125.9 KiB)  TX bytes:4659397 (4.4 MiB)
</code>

/etc/network/interfaces:
<code>source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth0 inet6 static
    pre-up modprobe ipv6
    address 2001:4978:15d:feed::1
    netmask 64

allow-hotplug wlan0
iface wlan0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

allow-hotplug wlan1
iface wlan1 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

# IPv6
iface wlan0 inet6 static
#    pre-up modprobe ipv6
#    address 2001:4978:15d:feed::1
#    netmask 64
</code>

While only impacting radvd and not the base connectivity, I also noticed that 2001:4978:15d:feed::1 wasn't being assigned, and instead I needed to do this manually:
<code>sudo /sbin/ip -6 addr add 2001:4978:15d:feed::1/64 dev eth0</code>

ping6 - Time exceeded: Hop limit
tag:forum.sixxs.net,2017-02-14:setup.15904569.15904573 2017-02-14T16:02:13-00:00 2017-02-14T16:02:13-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">I get a timeout. I then try a ping6 to the same address, but get:</div>

Instead of ping, try traceroute; it will tell you what route is being taken for those packets. Do note that Google is a bad network to test connectivity towards, as they use all kinds of tricks to make their performance the way it is (amongst others ignoring ICMPv6 Packet Too Big, as their load balancers do not support it, and thus guessing the TCP MSS instead; yes, UDP is SoL), thus misdiagnosis, as one does not see their end, is very possible.
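A small sketch of that approach; the tunnel address is taken from the interface listing above, and the assumption that the PoP side of the tunnel is the ::1 of the same prefix may need adjusting to what the tunnel information page actually says:

<code># Where are the packets supposed to go?
ip -6 route get 2001:4978:f:48::1      # assumed PoP side of the tunnel shown above
# Do they actually go there, and where does the path stop?
traceroute6 2001:4978:f:48::1
traceroute6 ipv6.google.com
</code>

If even the first hop already fails, the problem is local routing; if the path dies further along, re-test against a host whose ICMPv6 handling is known to be sane rather than against Google.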
<div class="quote">The settings in my /etc/aiccu.conf (masking sensitive info):</div>

The AICCU configuration has very little to do with it: when the system is misconfigured before AICCU runs, AICCU can't magically fix it. And of course, it can't fix any routing issues either. Hence why there is a "Problems Checklist" on the contact page, which is what the big yellow/orange boxes point to when posting.

ping6 - Time exceeded: Hop limit
tag:forum.sixxs.net,2017-02-14:setup.15904569 2017-02-14T16:02:39-00:00 2017-02-14T16:02:39-00:00
Andre-John Mas AME4-SIXXS@whois.sixxs.net

I am trying to set up my new Raspberry Pi as my IPv6 gateway, using aiccu, but when I try a test connection via:
<code>telnet ipv6.google.com 443</code>
I get a timeout. I then try a ping6 to the same address, but get:
<code>$ ping6 ipv6.google.com
PING ipv6.google.com(yyz08s10-in-x0e.1e100.net) 56 data bytes
From yyz08s10-in-x0e.1e100.net icmp_seq=1 Time exceeded: Hop limit
From yyz08s10-in-x0e.1e100.net icmp_seq=2 Time exceeded: Hop limit
^C
--- ipv6.google.com ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1001ms
</code>

I am not sure whether the issue is at my end or at the PoP side. What should I be examining?

The settings in my /etc/aiccu.conf (masking sensitive info):
<code>username XXXXX
password XXXXX
protocol tic
ipv6_interface sixxs
tunnel_id XXXXX
verbose false
daemonize true
automatic true
requiretls false
behindnat true
</code>

When I set daemonize false and verbose true, I see no errors in the startup sequence. My PoP is uschi02.

Routing problem, once more
tag:forum.sixxs.net,2017-02-14:setup.15842189.15904277 2017-02-14T10:02:52-00:00 2017-02-14T10:02:52-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

Risto Virtanen wrote:
<div class="quote">eth0 Link encap:Ethernet HWaddr 00:e0:00:5a:04:36 inet addr:192.168.144.112 Bcast:192.168.144.255 Mask:255.255.255.0 inet6 addr: 2001:14b8:100:8345:2e0:ff:fe5a:436/64 Scope:Global</div>

You might want to check with "ip -6 addr show" and "ip -6 ro show"; "ifconfig" should be avoided on Linux as much as possible, btw. You are missing an fe80:: style address there. Though likely it is there, and it is the fe80::2e0:ff:fe5a:436 that is mentioned below on the client.

<div class="quote">* route ipv6:
2001:14b8:100:8345::/64 dev eth0  proto kernel  metric 256  expires 86386sec
fe80::/64 dev eth0  proto kernel  metric 256
default via fe80::8eeb:c6ff:fec7:eb0c dev eth0  proto ra  metric 1024  expires 1508sec hoplimit 64
default via fe80::2e0:ff:fe5a:436 dev eth0  proto ra  metric 1024  expires 76sec hoplimit 64</div>

You have two default routes from two different hosts. The last one matches your host above; the other, though, is a magic one coming from elsewhere that is also advertising routes (hence 'proto ra').

<div class="quote">* pinging ipv6.google.com:</div>

Try 'ip -6 ro get <address>' instead; that will show you where the packets are supposed to go. Then use 'traceroute6 <address>' to see if they really go there. Also, do check your firewall and all other properties in the list on the contact page.

Own / Company pop
tag:forum.sixxs.net,2017-02-14:setup.15879345.15879349.15904109 2017-02-14T10:02:12-00:00 2017-02-14T10:02:12-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

Leif Neland wrote:
<div class="quote">Why is this thread marked private?
</div>

Because unfortunately, in times when people want more than they pay for, they demand it in rather nasty ways; thus all posts are screened, instead of us getting indexed on Google with vulgarity.

Own / Company pop
tag:forum.sixxs.net,2017-02-14:setup.15879345.15904097 2017-02-14T09:02:26-00:00 2017-02-14T09:02:26-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

Sounds like you want a VPN. That thus has little to do with SixXS, which is a Tunnel Broker for getting global IPv6 connectivity and playing with IPv6.

How to re-enable?
tag:forum.sixxs.net,2017-02-14:setup.15894873.15904061 2017-02-14T09:02:35-00:00 2017-02-14T09:02:35-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

<div class="quote">My tunnel detail page has the Enable button grayed out, along with the button to change the tunnel type or endpoint</div>

Did you Call Your ISP and ask for Native IPv6?

How to re-enable?
tag:forum.sixxs.net,2017-02-09:setup.15894873 2017-02-09T06:02:03-00:00 2017-02-09T06:02:03-00:00
Pradeep Sanders PSI8-SIXXS@whois.sixxs.net

My server was down for 11 days, and I lost all my ISK since I foolishly hoped it would come up quicker than it did and thus didn't pause it. My tunnel detail page has the Enable button grayed out, along with the button to change the tunnel type or endpoint. I've not found a way to re-enable my tunnel despite checking about 30 pages of forum posts and all the FAQs. Either I missed something, or the instructions don't reflect what I'm seeing. How can I re-enable my tunnel now that my static address is reachable again?

Thanks,
-Pradeep

Own / Company pop
tag:forum.sixxs.net,2017-02-01:setup.15879345 2017-02-01T08:02:21-00:00 2017-02-01T08:02:21-00:00
Leif Neland LNR4-SIXXS@whois.sixxs.net

Home and office do not have IPv6, but our webserver (and intranet webserver) is hosted at a site with IPv6. Because of the ease of setting up aiccu, and so as not to abuse a public, free PoP for business use, I'd like to set up a PoP at our hosting site. Servers are running Linux (of course ;-) ); which software should I use for the PoP?

aiccu on centos7
tag:forum.sixxs.net,2017-01-28:setup.15869489.15870753 2017-01-28T07:01:55-00:00 2017-01-28T07:01:55-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

Triantafyllos Tsakidis wrote:
<div class="quote">Hello. I found out that aiccu is not available in the <a href="https://dl.fedoraproject.org/pub/epel/7/x86_64/a/">EPEL repository for CentOS 7</a>. I also tried to download the RPM for CentOS 6, but it did not install on CentOS 7 due to missing dependencies. So currently the only way to set up a tunnel on CentOS 7 is to use a static 6in4 tunnel? Or is there any other way to use aiccu on CentOS 7?</div>

Contacting your ISP and asking them for Native IPv6 is a good step. It is 2017 after all... As for the package, contact your vendor, and one can always use the source ;) Though, if you are trying to do tunnels now, I suggest really going for the ISP option. See the news pages for more details.

aiccu on centos7
tag:forum.sixxs.net,2017-01-27:setup.15869489 2017-01-27T23:01:36-00:00 2017-01-27T23:01:36-00:00
Triantafyllos Tsakidis TTU3-SIXXS@whois.sixxs.net

Hello. I found out that aiccu is not available in the <a href="https://dl.fedoraproject.org/pub/epel/7/x86_64/a/">EPEL repository for CentOS 7</a>. I also tried to download the RPM for CentOS 6, but it did not install on CentOS 7 due to missing dependencies. So currently the only way to set up a tunnel on CentOS 7 is to use a static 6in4 tunnel? Or is there any other way to use aiccu on CentOS 7?
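For what it is worth, a static 6in4 tunnel needs no aiccu at all. A minimal sketch with iproute2, which works the same on CentOS 7 as elsewhere; all addresses below are placeholders that have to be replaced with the values from the tunnel information page:

<code># 192.0.2.10 = local IPv4 address, 198.51.100.1 = the broker's IPv4 endpoint
ip tunnel add sixxs mode sit local 192.0.2.10 remote 198.51.100.1 ttl 64
ip link set sixxs up mtu 1280
ip -6 addr add 2001:db8:1234::2/64 dev sixxs          # "your" side of the tunnel
ip -6 route add ::/0 via 2001:db8:1234::1 dev sixxs   # default route via the PoP side
</code>

The same can typically be made persistent with an ifcfg file under /etc/sysconfig/network-scripts, but the manual commands are already enough to test whether protocol 41 passes at all.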
Routing problem, once more
tag:forum.sixxs.net,2017-01-13:setup.15842189 2017-01-13T15:01:49-00:00 2017-01-13T15:01:49-00:00
Risto Virtanen RVH9-SIXXS@whois.sixxs.net

Hi all,

I apologize for opening a new thread even though the problem has been discussed in the forum several times over and over; I just didn't find an adequate response...

In my home network I have a server (an old laptop) running Debian 6.0.10 (kernel 2.6.32-5-686) with, among other things, aiccu installed. The tunnel seems to work just fine and I can reach other IPv6 hosts on the Internet from this server. I also have radvd installed; it seems to advertise as configured and my clients get their IPv6 addresses. But for some reason the connections from the clients to the Internet do not work. I'm probably missing some really simple error in the config, but anyway I'm stuck. Any hints on what to do and change are appreciated!

Settings on the server:

* aiccu/sixxs:
Tunnel T119958 - endpoint 2001:14b8:100:345::2 - enabled
Subnet R217502 - prefix 2001:14b8:100:8345::/64 - enabled

* ifconfig:
<code>eth0      Link encap:Ethernet  HWaddr 00:e0:00:5a:04:36
          inet addr:192.168.144.112  Bcast:192.168.144.255  Mask:255.255.255.0
          inet6 addr: 2001:14b8:100:8345:2e0:ff:fe5a:436/64 Scope:Global
          inet6 addr: fe80::2e0:ff:fe5a:436/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

sixxs     Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: 2001:14b8:100:345::2/64 Scope:Global
          inet6 addr: fe80::14b8:100:345:2/64 Scope:Link
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1280  Metric:1

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.144.112.1  P-t-P:10.144.112.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
</code>
(I've omitted the RX/TX lines from the output; the tun0 device is for openvpn, currently not working as my ISP is not giving out a public IPv4 address anymore, nor IPv6.)

* route ipv4:
<code>10.144.112.2 dev tun0  proto kernel  scope link  src 10.144.112.1
10.144.112.0/24 via 10.144.112.2 dev tun0
192.168.144.0/24 dev eth0  proto kernel  scope link  src 192.168.144.112
default via 192.168.144.1 dev eth0
</code>

* route ipv6:
<code>2001:14b8:100:345::/64 dev sixxs  proto kernel  metric 256  mtu 1280 advmss 1220 hoplimit 0
2001:14b8:100:8345::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev sixxs  proto kernel  metric 256  mtu 1280 advmss 1220 hoplimit 0
default via 2001:14b8:100:345::1 dev sixxs  metric 1024  mtu 1280 advmss 1220 hoplimit 0
</code>

* forwarding:
/proc/sys/net/ipv6/conf/eth0/forwarding : 1

* pinging:
<code>PING ipv6.google.com(arn09s05-in-x0e.1e100.net) 56 data bytes
64 bytes from arn09s05-in-x0e.1e100.net: icmp_seq=1 ttl=54 time=37.5 ms

--- ipv6.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
</code>

* /etc/radvd.conf:
<code>interface eth0
{
        AdvSendAdvert on;
        # Advertise at least every 30 seconds
        MaxRtrAdvInterval 30;
        # in order to force non RFC 6106 compliant clients to get a dns address
        AdvOtherConfigFlag on;
        prefix 2001:14b8:100:8345::/64
        {
                AdvOnLink on;
#               RDNSS 2001:14b8:100:345::1 2001:14b8:100:345::2 {
#               };
                AdvAutonomous on;
                AdvRouterAddr on;
        };
};
</code>

Settings on the client #1:

* Xubuntu 14.04 with kernel 3.13.0-107-generic
* no manual configuration done for ipv6

* ifconfig (fresh after recent reboot):
<code>eth0      Link encap:Ethernet  HWaddr 48:5b:39:c6:42:1b
          inet addr:192.168.144.77  Bcast:192.168.144.255  Mask:255.255.255.0
          inet6 addr: 2001:14b8:100:8345:4a5b:39ff:fec6:421b/64 Scope:Global
          inet6 addr: fe80::4a5b:39ff:fec6:421b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
</code>

* route ipv4:
<code>default via 192.168.144.1 dev eth0
169.254.0.0/16 dev eth0  scope link  metric 1000
192.168.144.0/24 dev eth0  proto kernel  scope link  src 192.168.144.77
</code>

* route ipv6:
<code>2001:14b8:100:8345::/64 dev eth0  proto kernel  metric 256  expires 86386sec
fe80::/64 dev eth0  proto kernel  metric 256
default via fe80::8eeb:c6ff:fec7:eb0c dev eth0  proto ra  metric 1024  expires 1508sec hoplimit 64
default via fe80::2e0:ff:fe5a:436 dev eth0  proto ra  metric 1024  expires 76sec hoplimit 64
</code>

* forwarding:
/proc/sys/net/ipv6/conf/eth0/forwarding: 0

* neighbours:
<code>fe80::8eeb:c6ff:fec7:eb0c dev eth0 lladdr 8c:eb:c6:c7:eb:0c router DELAY
fe80::2e0:ff:fe5a:436 dev eth0 lladdr 00:e0:00:5a:04:36 router STALE
</code>

* pinging the server:
<code>PING 2001:14b8:100:8345:2e0:ff:fe5a:436(2001:14b8:100:8345:2e0:ff:fe5a:436) 56 data bytes
64 bytes from 2001:14b8:100:8345:2e0:ff:fe5a:436: icmp_seq=1 ttl=64 time=0.388 ms

--- 2001:14b8:100:8345:2e0:ff:fe5a:436 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
</code>

* pinging ipv6.google.com:
<code>PING ipv6.google.com(arn06s07-in-x0e.1e100.net) 56 data bytes

--- ipv6.google.com ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
</code>

* traceroute6 ipv6.google.com:
<code>traceroute to ipv6.l.google.com (2a00:1450:400f:805::200e) from 2001:14b8:100:8345:4a5b:39ff:fec6:421b, 30 hops max, 24 byte packets
 1  * * *
</code>

Similar results with another client running Debian Jessie.

Most of the problems discussed in the forum seemed to be related to a missing IPv6 address on the server interface from the SixXS-offered subnet, but that is not the problem here. The server eth0 interface has address 2001:14b8:100:8345:2e0:ff:fe5a:436, which belongs to the subnet given, although it is not the recommended 2001:14b8:100:8345::1 or ::2, so this time the problem must be elsewhere. But where?

//rkv Risto Virtanen

systemd too optimistic about network
tag:forum.sixxs.net,2016-12-19:setup.15793629.15793669 2016-12-19T18:12:07-00:00 2016-12-19T18:12:07-00:00
Jeroen Massar JRM1-RIPE@whois.sixxs.net

Please file a bug report with your distribution directly; apparently, as the original coder of the project, one has little say when a package is maintained by some downstream. In the end though, your real solution is to get NATIVE IPv6 from your ISP. You did Call Your ISP, we hope?

systemd too optimistic about network
tag:forum.sixxs.net,2016-12-19:setup.15793629 2016-12-19T16:12:55-00:00 2016-12-19T16:12:55-00:00
Bernd Stramm BST5-SIXXS@whois.sixxs.net

My system won't start aiccu at boot time, because the network is not ready enough. This system uses systemd, with a dependency like so:
<code>[Unit]
Description=AICCU (Automatic IPv6 Connectivity Configuration Utility)
Wants=network.target network-online.target
After=network.target network-online.target time-sync.target
</code>

When it starts aiccu, the host side of the network is good, but the network is not connected. Immediately after boot, starting aiccu manually does work correctly. So I believe that systemd is being optimistic about the network being up, especially if I connect through wifi (which I do usually).
How else can I set this up, to get a reliable, more conservative target to wait for?
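One direction that may help, sketched here on the assumption that the interface is brought up by NetworkManager (substitute the systemd-networkd or ifupdown equivalent as appropriate) and that the unit is named aiccu.service: make network-online.target actually wait for a usable connection, and tie the unit to it with Requires= via a drop-in rather than by editing the packaged unit.

<code># Make network-online.target wait for the network manager in use
systemctl enable NetworkManager-wait-online.service
# (or: systemctl enable systemd-networkd-wait-online.service)

# Strengthen the aiccu dependency with a drop-in
mkdir -p /etc/systemd/system/aiccu.service.d
cat > /etc/systemd/system/aiccu.service.d/wait-online.conf <<'EOF'
[Unit]
Requires=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
</code>

With Wants= alone, aiccu still starts even if the wait-online service is absent or fails; an enabled wait-online service plus Requires= should be closer to the more conservative target asked about above.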