Over the last few days there has been a huge amount of FUD and panic surrounding two as-yet-unpublished CVEs (found here and here) related to Mikrotik's IPv6 implementation. It is my opinion that this entire process has been poorly handled, that the community involved tends to be fairly sensitive to issues such as these, and that the cloak-and-dagger nature of the two issues has only exacerbated things.

Mikrotik, as a company, is well known for being terse in their responses and tight-lipped about their internal workings and their dealings with these kinds of issues. I take that as a given; that's their business, and realistically we're entitled to know exactly none of that information, even if it would be nice to have. The history behind this is discouraging: reports date back as far as 2013, and the person who uncovered the current issues did so upwards of a year ago, bringing them to Mikrotik at that time, as can be seen in this thread.

Now, anyone with a passing knowledge of pen testing or IPv6 device load testing can trivially put together the information needed to decipher the problem and replicate it; neither is exactly complicated or new. Both can be done in literally one line of common, open source toolkits. The issues are not magical, and they are not even esoteric or cryptic. They are very straightforward, and by reading the threads and understanding how things like route caches and neighbor discovery work, they become very clear. Since this is IPv6 related, I am very interested in it: I feel that WISPs and emerging markets are an excellent environment for moving IPv6 forward, and the equipment and mindset involved make that fairly straightforward. Reverse engineering these issues from the information available is pretty easy, and folks other than me have done it too.

I personally do not consider either of these a security vulnerability or a bug, per se. They are both the result of a short-sighted protocol implementation resulting in a very acute, unfortunate event. With the benefit of hindsight, and as an outsider, I can only wonder whether the resulting hysteria could have been quelled if this had been handled differently (i.e. framed not as a critical security vulnerability but as a broken protocol). On a particular forum thread this was likened to the discovery of the "ping of death", and that feels like a good analogy to me. It probably should have been handled that way. So, I will post my $.02 on how this kind of event can be handled in the future, in case there is no better process to work with.

In the meantime, the route cache usage can be watched in real time with the following command:
[buraglio@gw] > /ipv6 route cache print interval=1
        cache-size: 190
    max-cache-size: 1024000
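The neighbor table half of the problem can be watched in much the same way. As a rough example (count-only is a standard RouterOS print flag, and the number shown here is purely illustrative), the current size of the neighbor table can be checked with:

[buraglio@gw] > /ipv6 neighbor print count-only
98

A count climbing rapidly toward the thousands on a segment that should only hold a handful of hosts is a reasonable tell that something is sweeping the prefix.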
The ND issue can be mitigated with the following commands (obviously adjusted for your own environment):
/ipv6 firewall filter add action=drop chain=forward connection-mark=drop connection-state=new
/ipv6 firewall mangle add action=accept chain=prerouting connection-state=new dst-address=2001:db8:3::/64 limit=2,5:packet
/ipv6 firewall mangle add action=mark-connection chain=prerouting connection-state=new dst-address=2001:db8:3::/64 new-connection-mark=drop passthrough=yes
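For clarity on how these rules fit together, as I read them: the first mangle rule accepts a small trickle of new connections toward the target prefix (the limit=2,5:packet matcher), the second marks every new connection over that limit with the drop connection-mark, and the filter rule then discards those marked connections in the forward chain, so the router never has to resolve neighbors for a flood of bogus destinations. A quick way to sanity-check that the mitigation is engaging is to watch the rule counters; this is a hedged sketch, and the output will vary by configuration:

[buraglio@gw] > /ipv6 firewall mangle print stats
[buraglio@gw] > /ipv6 firewall filter print stats

Climbing packet counters on the mark-connection and drop rules during an event are a good sign the excess traffic is being caught.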
And for those more interested in the actual process, here is a video demonstrating the basic route-cache issue (the commands, although very easy to figure out, are obfuscated):
Mikrotik has released a fix as of this morning (4/1/2019), although it is currently in beta: RouterOS 6.45 addresses both the route cache and the neighbor table issues. More details on the discovery will be disclosed at the UKNOF conference.
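For anyone who wants to test the fix now, here is a minimal, hedged sketch of pulling the beta down through the package updater; it assumes the 6.45 beta is published on the development channel, so verify against Mikrotik's release notes before switching channels on a production router:

[buraglio@gw] > /system package update set channel=development
[buraglio@gw] > /system package update check-for-updates
[buraglio@gw] > /system package update install

Keep in mind that the install step reboots the router, so schedule a maintenance window accordingly.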