The battle over Advanced Persistent Threats (APTs)
May 31, 2016
Shah Sheikh

Advanced Persistent Threats are a constant concern for the IT Director, the CIO and, lately, the entire C-Suite. Every day, we learn how a new piece of malware has penetrated a supposedly protected IT environment. How does the APT not only survive but thrive, despite the IT security infrastructure theoretically in place to block it? And how can IT Directors and System Administrators re-evaluate their IT security posture to protect themselves from zero-day threats?

Introduction to APT: Understanding the malware problem

Put simply, the problem with malware is that it’s not being accurately and efficiently detected. Why? The answer lies in the fine line dividing the terms ‘resistant’ and ‘proof’.

A water-resistant wristwatch may be resistant to water, but it is not waterproof. That ‘resistance’ is also usually qualified up to a certain depth. Take the watch down a little too far and it will be ruined.

Padlocks, perhaps more aptly, are tamper-resistant, not tamper-proof. One could try lock-picking, a pocket-sized crowbar or a host of other measures to separate hasp from staple without success. Hit that lock with a 40-pound sledgehammer, however, and it will most definitely yield.

Similarly, traditional anti-virus measures don’t make computers infection-proof, only infection-resistant, and even then only really resistant to ‘known bad’ files. This is due to an over-reliance on blacklisting technology (utilising virus signature databases) to recognise and remove, or otherwise block, malicious files from an environment. In other words, someone, somewhere, had to be patient zero for that file to make it onto a blacklist, and statistics show that there are more often hundreds, if not thousands, of patient zeros before the infection is recognised, a signature created and a database update finally rolled out.
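To put that in concrete terms, the following is a minimal, hypothetical sketch (in Python) of a signature-style blacklist check; the hash set and verdict strings are illustrative placeholders rather than any vendor’s actual implementation. The key point is that anything not already on the known-bad list is simply allowed through.

# Illustrative sketch of signature-based blacklisting (not a real product's logic).
# A file is blocked only if its fingerprint is already on the known-bad list;
# a never-before-seen sample passes straight through: 'default allow'.

import hashlib
from pathlib import Path

KNOWN_BAD_HASHES: set[str] = set()  # hypothetical feed of virus-signature hashes

def scan(path: Path) -> str:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "quarantine"   # someone, somewhere, was already patient zero
    return "allow"            # an unknown file is treated the same as a known-good one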

Also, when $40 exploit kits can spit out thousands of new malware variants a day, how can anyone keep a blacklist updated? It has become practically impossible.

And what if just one ‘patient zero’ had code specifically written for and targeted exclusively at them? Would that code be detected? And what if that code was so ingeniously created by highly skilled hackers that it was completely unrecognisable against the backdrop of the millions of other files on the network? How long would it take to be detected?

Based on so many of the stories in the news these days, the likely answer is: not for a long, long time, if ever.

The knee-jerk reaction to blacklisting is a full 180-degree pivot to whitelisting: only known-good files are allowed into the network. This raises some serious questions. Is it possible to whitelist every file on a network while maintaining any semblance of productivity? And how does one know which new files to whitelist, especially with the growing number of never-before-seen files?

Whitelisting, like blacklisting, is perhaps better described as a process that forms part of the solution, not the solution in and of itself. Whitelisting is designed to increase resistance.

This leaves us with that area between black and white, the grey area of the spectrum, denoting all unknown files. Maybe they’re good, and maybe they’re bad, but for the moment, as unknown files, they could be either.

Enter the APT

Let’s start with the obvious question: what is an APT? As noted above, APT stands for Advanced Persistent Threat.

‘Advanced’ means it is ahead of its peers; ‘persistent’ means it is continuous, tenacious and ongoing in nature; and ‘threat’ means it is intended to cause damage or menace.

The process behind an APT could come from the pages of an Ian Fleming or John le Carré novel. It starts with profiling the target. This is done through a variety of methods, including publicly available information tools and resources, as well as websites where organisations advertise for IT staff and, in doing so, disclose the hardware and software they’re using. Google hacking and any number of other public sources may also be leveraged as part of the profiling process. The organisation’s business partners, suppliers and customers are usually thoroughly researched and noted as well.

Once this information has been gathered, it’s time to make a phone call to the organisation (most likely to the HR department) in order to establish personnel movements. A call or two to the helpdesk may also take place to test the resilience of support staff to simple password reset requests.

During this time the target will likely start receiving emails, perhaps from a ‘supplier’ under the pretext of an attached invoice, perhaps from a ‘customer’ under the pretext of a pricing inquiry. Unsuspecting users may open these attachments and, via an exploit of a yet-to-be-discovered programming flaw, allow a new and malevolent ‘unknown’ into the network.

The intrusion may even be physical: a burglary, perhaps. The organisation may find that a number of items have been stolen. What it might not realise in time is that the ‘burglars’ brought in a new ‘unknown’ piece of software, injected into the environment from a USB thumb drive or memory stick, which then quickly and silently spreads across the network.

Regardless of the specific steps taken, the attack will generally not stop until the hackers’ goals are met, and the organisation is suddenly in crisis mode, losing money, IT resource time, confidential customer data or trade secrets, and brand value.

The legacy approach

Blacklists identifying ‘known-bad’ files are a good starting point for defence but are prone to failure when ‘unknown’ files are encountered. Legacy security software defaults to allowing unknown files into the organisation: if a file doesn’t exist on the blacklist, then it must be OK, so in it comes.

Whitelists or host-based intrusion prevention systems permitting only ‘known good’ files are an excellent enhancement when used in conjunction with blacklists, but they are unwieldy and struggle to deal with those same ‘unknown’ files.

It’s impossible to say how many unknown files are in any given network, because they tend to reveal themselves only at run-time, mid-breach. Many unknown files go undetected for very long periods, quietly collecting and transmitting confidential data to the ‘bad guys’.

The challenge is how to manage unknown files without impeding day-to-day business operations and productivity, or otherwise hindering the user’s experience.

What’s needed is a change to the default method of handling unknown files. Instead of always allowing them onto an endpoint (persistent default allow) and inviting malware in, or blocking every non-whitelisted file outright (persistent default deny), a more effective mechanism is to incorporate both blacklisting and whitelisting while automatically sandboxing unknown files at run-time.
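As a rough illustration of that combined model, the earlier blacklist sketch can be extended with a whitelist lookup and a containment verdict for everything else. Again, the hash sets and function names are hypothetical placeholders, not any specific product’s API.

# Hypothetical sketch of blacklist + whitelist + automatic containment.
# Known bad is blocked, known good runs normally, and the grey area of
# unknown files is only ever executed inside a sandbox/container.

import hashlib
from pathlib import Path

KNOWN_BAD: set[str] = set()    # placeholder blacklist (signature hashes)
KNOWN_GOOD: set[str] = set()   # placeholder whitelist (trusted file hashes)

def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verdict(path: Path) -> str:
    digest = fingerprint(path)
    if digest in KNOWN_BAD:
        return "block"      # blacklisted: never runs
    if digest in KNOWN_GOOD:
        return "allow"      # whitelisted: runs with normal privileges
    return "contain"        # unknown: runs only inside the sandbox

The point is that ‘unknown’ becomes its own, automatically handled verdict rather than being silently collapsed into ‘allow’ or ‘deny’.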

The solution to dealing with all of these ‘unknowns’ must therefore be three-fold.

The next generation approach

One, employ a default-deny architecture to disallow access to the OS kernel while still allowing applications to execute, so productivity isn’t overly compromised.

Two, allow this execution only in an isolated environment where the application has ‘virtual’ permissions to a ‘virtual’ kernel: a sandbox. Next-generation sandboxing, or ‘containment’, provides, in essence, a jailed environment where programs can run but are isolated from the rest of the host (a rough sketch of the idea follows the three points below). Containment is robust but lightweight: CPU resources are preserved, and if the unknown file turns out to be malware, it won’t be able to infect the system and will remain effectively ‘jailed’ until it’s deleted from the environment.

Three, ensure that the sandbox (or container) is the default environment in which these ‘unknowns’ are allowed to run – i.e. automatic containment. In other words, a true, workable default-deny architecture.
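Commercial containment engines achieve this by virtualising kernel objects for the jailed process, which can’t be reproduced in a few lines. As a very rough, assumption-laden stand-in, the sketch below launches a file judged ‘unknown’ inside throwaway Linux namespaces using the util-linux unshare utility, giving it its own process table, its own /proc mount and no access to the real network; run_contained is a hypothetical helper, not part of any real endpoint product.

# Rough approximation of containment on a Linux host (illustrative only):
# the unknown executable is started in fresh user, PID, mount and network
# namespaces via util-linux `unshare`, isolating it from the host's
# processes and network. Real containment engines go much further.

import subprocess
from pathlib import Path

def run_contained(path: Path) -> int:
    cmd = [
        "unshare",
        "--user", "--map-root-user",        # unprivileged user namespace
        "--pid", "--fork", "--mount-proc",  # private PID table and /proc
        "--net",                            # no route to the real network
        str(path),
    ]
    return subprocess.run(cmd, check=False).returncode

# Tying it back to the triage sketch above: anything verdict() labels
# "contain" would be handed to run_contained() instead of being executed
# directly on the host.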

Unlike automatic containment, manual containment relies on a user to make an informed decision about the validity of a file, which slows things down while opening the door to human error. Making automatic sandboxing and containment the default procedure solves the problems of manual and heuristic sandboxing.

Thus, rethinking the standard model and protecting from the inside out rather than the outside in relegates the ‘hard exterior/soft interior’ discussion to the pages of history and introduces the concept of a hard exterior and an even harder interior, removing the human-error element and focusing defence where it matters most: the endpoint.

Source | ITProPortal