The “legacy” network security layer and the move into software-defined security/networking

By | December 13, 2015

Lacking a suitable subject for this post, I am using the topic of “legacy” to shine some light on a modern problem in our infrastructure.

Modern firewalls have evolved from devices that simply enforce ACLs into what is now called the next-generation firewall (NGFW), with features like

  • Application firewall & Application awareness
  • In-line deep packet inspection
  • IPS
  • QoS / Bandwidth management
  • Antivirus inspection
  • VPN/Secure Access MGMT

These NGFWs typically sit between the infrastructure and the “rest of the world”, protecting resources from the “bad guys” out there.

Whenever I think about firewalls, I think about the walls of Troy, which were put in place to protect the residents of Troy from enemies coming from the outside.

NGFWs are a bit like the walls of Troy, but with more traps, maybe even pools of sharks equipped with lasers on top to deal with incoming attacks.

Having a huge NGFW with huge bandwidth capabilities basically means you have a huge gate and can let many people go in and out at the same time, where the ACLs are the guards that check whether the traffic is legitimate or not.

So what is the problem with this approach? The guards have done their job for many years now, and it seems to have worked. The issue arises when security incidents are already happening inside the gate: the guards at the gates are immobile and can’t help. Or the guards at the gates may be unable to detect the threats passing through them (such as the Trojan horse).

This is the issue we are facing: more and more attacks on the infrastructure are happening from the inside rather than from external attackers. Another problem is that many new attacks are so new that the signature-based engine on the network firewall is unable to detect them. In some cases we can still detect signature-based attacks, because traffic is split into different subnets and we can force traffic between the different zones to flow through the firewall.

Now there is a new way to approach this: a zero-trust model, where each virtual machine sits within its own zone even if it is part of the same subnet, moving away from the traditional model of subnets and ACLs.

Products such as VMware NSX and vArmour allow us to implement this micro-segmentation.

Another advantage of this approach is that these features are tightly integrated into the virtualization layer, which allows us to control security at the virtual-machine level rather than at the IP level, making it a lot more flexible and dynamic.
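To make the contrast concrete, here is a minimal sketch of the difference between a subnet/ACL model and an identity-based micro-segmentation policy. All names here (the `VM` class, the `security_group` tags, the rule sets) are hypothetical illustrations, not the NSX or vArmour API.

```python
# Hypothetical sketch: IP-based ACL vs. identity-based micro-segmentation.
from dataclasses import dataclass


@dataclass(frozen=True)
class VM:
    name: str
    ip: str
    security_group: str  # e.g. "web", "app", "db"


def acl_allows(src_ip: str, dst_ip: str) -> bool:
    # Traditional perimeter model: once inside the trusted 10.0.0.0/8
    # range, lateral east-west traffic is implicitly allowed.
    return src_ip.startswith("10.") and dst_ip.startswith("10.")


# Zero-trust model: explicit allowed flows between security groups,
# enforced at each VM's virtual NIC regardless of subnet layout.
ALLOWED_FLOWS = {("web", "app"), ("app", "db")}


def microseg_allows(src: VM, dst: VM) -> bool:
    return (src.security_group, dst.security_group) in ALLOWED_FLOWS


web = VM("web01", "10.0.1.10", "web")
db = VM("db01", "10.0.1.20", "db")  # same subnet as web01

# The flat ACL trusts lateral traffic inside the subnet...
assert acl_allows(web.ip, db.ip)
# ...while the per-VM policy blocks web -> db even on the same subnet.
assert not microseg_allows(web, db)
```

The point of the sketch is that the micro-segmentation decision is keyed on what the VM *is* (its security group), not where it happens to sit in the IP plan, which is why a compromised web server on the same subnet as the database still cannot reach it.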

So with this transition into software-defined security, why do we still need regular firewalls? Why do we still need the “big walls” at the edge?

The software-defined datacenter is aimed at pretty much that at the moment: the datacenter. It is not aimed at protecting regular client computers and other peripherals in the business. We still need those guards to protect the regular residents from traffic going in and out. Also, in case of a DDoS attack, I would not want large amounts of traffic being processed through a hypervisor or a virtual appliance; I would much rather have it processed on a physical appliance designed for that kind of traffic.

So before adopting an SDN technology, make sure you have a proper strategy for every type of client inside your infrastructure.
