Tuesday, May 20, 2008

The Future of Desktop Security - Part 3 - Desktop Policy Management and Administration

In Part 3 of this series, I present enterprise desktop security, which incorporates traditional anti-virus and anti-malware features as well as desktop policy management, forensics, desktop lockdown and license control. I would like to make the argument that IT managers should focus solely on corporate desktop policies - devising them, administering them and enforcing compliance. There is a need to separate policies from execution.

Policy administration should be as simple as defining policies in plain text or in spreadsheets. Security software should enforce these policies automatically, and reports and dashboards that track compliance should be built into the software. In the enterprise business application market, several Business Process Management (BPM) companies have taken this policy management approach and have been successful with it. They call it business rules, and several BPM vendors offer solutions catering to this approach (Evelson, 2008).

Once Bitten, Twice Shy?
Today, corporate IT security managers focus their priorities incorrectly: they concentrate on detailed policy execution rather than on simple policy orchestration. Having been bitten by past anti-virus failures such as exposure to zero-day viruses, these managers are naturally paranoid about enforcing stricter security even if it comes at the cost of decreased flexibility. In many enterprises, IT is centralized with the sole notion of providing robust security. I feel this thinking is fundamentally flawed. In the balance between IT centralization and de-centralization, IT security swings the pendulum unfairly towards centralization. The question then becomes whether IT policies themselves can be decentralized while still keeping security sound. The answer is yes, as new technologies are emerging that cater specifically to this need.

To an IT security manager, the notion of authorizing only “benign programs” is appealing. For example, policies such as the ones below are what IT security managers, in an ideal world, should concern themselves with (a sketch of how such policies might be captured and enforced follows the list). Everything else is a matter of detail.

Example of Corporate Policies:
“Employees should not attach unauthorized external devices to corporate systems”
“Employees should not run personal MP3s and videos from corporate systems”
“Employees should not run unauthorized programs on corporate networks”
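
As a rough illustration of the "separate policy from execution" idea, here is a minimal sketch of how policies like the ones above might be kept in plain text, enforced mechanically and reported on. The policy IDs, categories and inventory events are hypothetical, not taken from any particular product.

```python
# Hypothetical sketch: corporate policies kept as plain text, enforced and
# reported on automatically. Policy IDs, rules and the inventory events are
# illustrative only.
POLICY_TEXT = """
POL-001 | deny | external_device  | Employees should not attach unauthorized external devices
POL-002 | deny | personal_media   | Employees should not run personal MP3s and videos
POL-003 | deny | unauthorized_app | Employees should not run unauthorized programs
"""

def parse_policies(text):
    """Turn the plain-text policy list into simple rule dictionaries."""
    rules = []
    for line in text.strip().splitlines():
        policy_id, action, category, description = [f.strip() for f in line.split("|")]
        rules.append({"id": policy_id, "action": action,
                      "category": category, "description": description})
    return rules

def compliance_report(rules, observed_events):
    """Report which policies were violated by events reported from desktops."""
    report = {rule["id"]: [] for rule in rules}
    for event in observed_events:
        for rule in rules:
            if rule["action"] == "deny" and event["category"] == rule["category"]:
                report[rule["id"]].append(event["host"])
    return report

if __name__ == "__main__":
    rules = parse_policies(POLICY_TEXT)
    # Example events as a desktop agent might report them (made up for illustration).
    events = [{"host": "PC-042", "category": "personal_media"},
              {"host": "PC-017", "category": "unauthorized_app"}]
    for policy_id, hosts in compliance_report(rules, events).items():
        status = "VIOLATED on " + ", ".join(hosts) if hosts else "compliant"
        print(policy_id, "-", status)
```

The point of the sketch is that the IT manager edits only the plain-text policy block; the agent software does the enforcement and reporting.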

You might be wondering whether simple administration such as this is even possible. After all, most IT security managers spend the bulk of their effort doing exactly the opposite. Do technologies exist today that let IT security managers free themselves from mundane administration and prevention work and focus their energies instead on top-level policy and decision making on behalf of the company? In part four of this series, I cover a list of software products that provide policy-based administration and automation.

Granular Policy Management
Obviously, policy management needs to be flexible. One can envision layers of policies - corporate-wide policy management at the top layer, where policies that apply across the entire organization are enforced, with regional, team and individual policies layered beneath it. An example of a regional security policy can be:
"Computers deployed for call center purposes should not allow personal emails and/or Instant Messenger."

One can extend this analogy down to the team level and further to individual policies. The following diagram illustrates this notion.


In the area of IT security management, there is definite overlap between desktop asset management and configuration management tools. In my opinion, this is one area that will be hotly contested by traditional vendors such as Novell ZENworks and Microsoft SMS and new entrants such as Bit9, DriveSentry, etc. A rough sketch of how the layered policies described above might be resolved is shown below.
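
As promised above, here is a small sketch of layered (corporate/regional/team/individual) policy resolution, assuming the most specific layer wins; the layer names and settings are made up for illustration.

```python
# Hypothetical sketch of layered policy resolution: more specific layers
# (individual > team > region > corporate) override broader ones.
CORPORATE = {"allow_usb_storage": False, "allow_instant_messenger": True}
REGION_CALL_CENTER = {"allow_instant_messenger": False, "allow_personal_email": False}
TEAM_SUPPORT = {"allow_usb_storage": True}   # e.g. support staff need USB diagnostic tools
USER_OVERRIDES = {}                          # empty: no individual exceptions

def effective_policy(*layers):
    """Merge policy layers; later (more specific) layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

if __name__ == "__main__":
    policy = effective_policy(CORPORATE, REGION_CALL_CENTER, TEAM_SUPPORT, USER_OVERRIDES)
    print(policy)
    # {'allow_usb_storage': True, 'allow_instant_messenger': False,
    #  'allow_personal_email': False}
```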


Desktop Lockdown

Locking down or hardening a system refers to configuring it so that unauthorized software cannot be installed on the desktop/laptop, while imposing no such restriction on legitimate software. There are a variety of reasons for locking down desktops - improving security and stability, reducing help-desk noise, preventing licensing losses, and meeting compliance and regulatory requirements are some of the chief ones.

As I said earlier, IT security managers are wary of giving admin rights to individual users within the organization. By denying individual users admin rights, IT managers prevent a variety of unauthorized software installations and configuration changes on the desktop/laptop. However, the battle is only half won. Such centralized co-ordination means a loss of flexibility: individual users are forced to raise trouble tickets to make minor modifications or to install legitimate software (that is actually whitelisted by the corporation) and then wait for IT to install it. This translates into frustration for the user and productivity loss for the company.

An alternative solution is to provide a software-based "lockdown". By combining lockdown with granular policy management and whitelisting, corporations can strike a balance between flexibility and control. Thus, a locked-down PC can, for example, allow a user to install whitelisted software (note: this might require admin rights) while denying the installation of products on the gray and/or black lists.
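
A minimal sketch of that lockdown decision, assuming installers are identified by their SHA-256 hash and checked against corporate white, black and gray lists; the hash values are placeholders, not real signatures.

```python
# Hypothetical sketch of a software-based lockdown decision: installation is
# allowed only for packages whose hash is on the corporate whitelist.
import hashlib

WHITELIST = {"<sha256-of-approved-installer>"}   # hashes of approved installers (placeholder)
BLACKLIST = {"<sha256-of-known-bad-installer>"}  # hashes of known-bad installers (placeholder)

def sha256_of(path):
    """Compute the SHA-256 hash of a file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def install_decision(installer_path):
    """Return 'allow', 'deny', or 'gray' for an attempted installation."""
    file_hash = sha256_of(installer_path)
    if file_hash in BLACKLIST:
        return "deny"    # black-listed: always blocked
    if file_hash in WHITELIST:
        return "allow"   # white-listed: installs even on a locked-down PC
    return "gray"        # unknown: defer to policy (block, ask the user, or peer review)
```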

The Future of Desktop Security - Part 2 - Alternative Technologies

This is the second part of a four-part series on the future of desktop security. In the first part, I argued that traditional signature-based anti-virus technologies are passé and presented various factors that are likely to drive their decline. In this article, I present some of the alternative technologies that individuals and security managers will have to become aware of. These technologies, some of which are just emerging, will complement traditional anti-virus technologies. Many mainstream anti-virus companies such as Symantec and McAfee have started bundling some of these features on top of their traditional offerings.

Heuristics - Understanding the static and run-time behavior
Heuristics simply means a rule of thumb. Many anti-virus products employ heuristics to "study" malware and understand its behavior and potency. As discussed in part 1 of this series, polymorphic and metamorphic viruses alter their code structure dynamically and therefore show structural changes between two instances. Anti-virus software sandboxes the malware (sandboxing is described below) and tricks it into executing; it then employs heuristics, applying fuzzy algorithms to study the properties of the polymorphic virus at run-time.

A milder form of heuristics, called passive heuristics, is employed by anti-virus companies that parse the code statically and apply heuristics to deduce whether it is potentially a virus. One such technique is the use of Markov chaining and its variants, which probabilistically indicates whether a given piece of altered code resembles a known family of metamorphic viruses.
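
To make the Markov-chaining idea concrete, here is a toy sketch that scores how closely an opcode sequence resembles a known virus family. The opcode streams, the first-order transition model and the threshold are all invented for illustration and are far simpler than what real engines use.

```python
# Toy sketch of passive heuristics via Markov chaining: estimate how likely an
# opcode sequence is under a transition model built from a known virus family.
from collections import defaultdict
import math

def train_transitions(sequences):
    """Build first-order transition probabilities from known-family opcode streams."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    model = {}
    for current, nexts in counts.items():
        total = sum(nexts.values())
        model[current] = {op: n / total for op, n in nexts.items()}
    return model

def log_likelihood(model, seq, floor=1e-6):
    """Average log-probability of a sequence under the model (higher = more similar)."""
    score, steps = 0.0, 0
    for current, nxt in zip(seq, seq[1:]):
        prob = model.get(current, {}).get(nxt, floor)
        score += math.log(prob)
        steps += 1
    return score / max(steps, 1)

if __name__ == "__main__":
    known_family = [["mov", "xor", "jmp", "mov", "xor", "jmp"],
                    ["mov", "xor", "call", "mov", "xor", "jmp"]]
    model = train_transitions(known_family)
    suspect = ["mov", "xor", "jmp", "mov", "xor", "call"]
    # A crude threshold separates "resembles the family" from "probably unrelated".
    print("suspicious" if log_likelihood(model, suspect) > -2.0 else "likely unrelated")
```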

Heuristics is the anti-virus community's answer to metamorphic viruses, which are the most potent class of virus to exist to date. However, as I argued in Part 1, the effectiveness of heuristics is questionable. Many authors have argued that only 30% of metamorphic virus forms can be identified by this technique. Another side effect of this technique is the rise of false positives.

Behavior Blocking/Intrusion Detection (IDS)
“When meditating over a disease, I never think of finding a remedy for it, but, instead, a means of preventing it."—Louis Pasteur (1822-1895)

Rather than waiting for a new virus to emerge, propagate and destroy and only then coming up with a solution, intrusion detection technology shields a host (be it a computer or an entire network) and actively monitors every single operation. When a virus is about to unleash itself, these intrusion detection systems throttle and quarantine it, effectively defusing the problem before it occurs.

To understand this better, the following illustration shows the difference between how anti-virus technology and intrusion detection technology handle a potential security problem. Imagine a malicious person trying to rob a local bank. In most situations the bank would report the incident after it has occurred to the police, who would then study the forensics, follow the clues the robber left behind and nab him. This is exactly how anti-virus technologies work.

Think of an alternative way. Imagine commissioning a security guard whose keen eyes are trained to catch the thief even before he has acted. For example, such a guard could look for patterns such as: "Is anyone wearing a mask? Does he appear to be carrying a rifle? Does this person appear to be in a hurry?" The idea is that, to a well-trained eye, such activities raise alarms that can prevent the robbery before it has occurred. This is what intrusion detection systems do in principle.

Two subclasses of intrusion detection are available - Host Intrusion Prevention Systems (HIPS) and network-based intrusion prevention. Since this article is about desktop security, I would like to limit the discussion to HIPS. The biggest problem with such intrusion detection systems is that they raise a lot of false alarms. The vexing question is how to guarantee that all genuine transactions are allowed. Intrusion detectors block several genuine operations and ask for validation, which makes these systems less reliable, noisy and cumbersome to use. One solution that alleviates this problem is whitelisting technology, which is explained later.

Technical Note: The terms Behavior Blocking and System Firewall are also applied to HIPS technologies.

Threat Scoring
One of the major limitations of most behavior-blocking software is that when it analyzes the behavior of processes, it does not classify them into high, medium or low risk. Consequently, when the user is alerted to the operation, he/she is clueless as to what action to take. This gives rise to a lot of false positives, and the user incorrectly allows or denies the operation. A better way is to score the behavior according to its risk potential. Going by the bank-theft analogy, it is one thing to say that the potential thief is wearing goggles and a hoodie or that he is reaching into his inner jacket (probably to draw a gun?). It is another thing to say that because the potential thief has done the above and his profile looks such and such, I rate him as highly suspicious or moderately suspicious. Threat-level scoring helps users make much better decisions.
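
A minimal sketch of threat-level scoring, assuming each observed behavior carries a hand-picked weight and the total maps to a low/medium/high rating; the behaviors, weights and thresholds are illustrative only.

```python
# Hypothetical sketch of threat-level scoring: individual behaviors carry
# weights, and the accumulated score maps to a low/medium/high rating that the
# user (or an automated policy) can act on. Weights and thresholds are made up.
BEHAVIOR_WEIGHTS = {
    "writes_to_system_registry": 3,
    "modifies_startup_entries":  4,
    "opens_outbound_connection": 2,
    "reads_user_documents":      1,
}

def threat_level(observed_behaviors):
    """Aggregate behavior weights into a coarse risk rating."""
    score = sum(BEHAVIOR_WEIGHTS.get(behavior, 0) for behavior in observed_behaviors)
    if score >= 7:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

if __name__ == "__main__":
    print(threat_level(["opens_outbound_connection"]))          # low
    print(threat_level(["writes_to_system_registry",
                        "modifies_startup_entries"]))           # high
```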

Black Listing and White Listing Techniques

Anti-virus products maintain a list of malicious programs to compare against. They maintain either the signature of the malware or the hash (MD5, SHA-256) of the actual malware. This list has come to be known as the blacklist.
Most IT security managers worry about preventing malicious programs from entering their territory. But the fundamental question is why IT managers are even bothered about the malicious programs. Let us face it: if you wish to enter a corporation today, you are asked for an ID and are required to prove who you are. Corporate IT security managers do not have the social obligation of, say, a public park manager to allow just about anyone inside.

Consequently, corporate IT security managers should focus on allowing only the known good programs rather than on blocking the bad ones. This is a fundamentally different approach and requires a very different mind-shift. The technology behind it is called whitelisting, the exact opposite of blacklisting: the signature and/or hash (MD5, SHA-256) of the good programs is maintained in a list, and at execution time only those programs that are part of the whitelist are allowed to run.
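
The mind-shift can be summarized in a few lines of code: a blacklist permits anything not known to be bad, while a whitelist permits only what is known to be good. The hashes below are placeholders, not real signatures.

```python
# Sketch contrasting the two mind-sets: a blacklist allows everything not known
# to be bad, while a whitelist denies everything not known to be good.
KNOWN_BAD_HASHES  = {"<sha256-of-known-malware>"}
KNOWN_GOOD_HASHES = {"<sha256-of-approved-program>"}

def blacklist_allows(file_hash):
    """Traditional AV stance: run unless the hash matches known malware."""
    return file_hash not in KNOWN_BAD_HASHES

def whitelist_allows(file_hash):
    """Whitelisting stance: run only if the hash matches an approved program."""
    return file_hash in KNOWN_GOOD_HASHES

if __name__ == "__main__":
    unknown = "<sha256-of-a-brand-new-binary>"
    print(blacklist_allows(unknown))   # True  - a zero-day slips through
    print(whitelist_allows(unknown))   # False - unknown code is blocked by default
```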

Managing the Gray List and the Power of Peer Review
The question at this point is: as an IT security manager, do I know all the good programs ahead of time? Certainly not! This leads to a class of programs that are neither good nor bad, but unknown (sort of in limbo - aka the gray list). These unknown programs fall under the category of "I am bad until proven otherwise". How do you manage programs that fall into this gray-list category? One alternative is to give up and ask the user instead. This assumes that the user is aware of the program and is knowledgeable enough to take a decision. Another side effect of asking the user is the intrusiveness associated with it.

One of the popular techniques is the use of social networking. For example, peer reviews are one way in which the classification of gray programs can be administered: if many users rely on an "unknown" program, it is probably benign enough to be allowed to execute. This is the same approach that popular websites like Google and Wikipedia take. While one might question the effectiveness of peer reviews and social networking, there is no denying that, collectively, such techniques work. According to Surowiecki (2004), the decision making of a group is often better than that of individual users. Thus, managing the gray list via peer review is often better than asking the user for a decision.
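
A toy sketch of how peer review might drive the gray-list decision, assuming each peer submits a safe/unsafe vote and a minimum amount of evidence is required before trusting the crowd; the vote counts and thresholds are invented.

```python
# Toy sketch of peer-review gray-list handling: an unknown program is promoted
# toward the whitelist only when enough peers have run it without reporting harm.
def classify_gray_program(peer_votes, min_votes=50, approval_ratio=0.9):
    """peer_votes is a list of booleans: True means a peer marked the program safe."""
    if len(peer_votes) < min_votes:
        return "ask_user"                  # too little community evidence
    approval = sum(peer_votes) / len(peer_votes)
    if approval >= approval_ratio:
        return "allow"                     # crowd consensus says benign
    return "deny"                          # widely distrusted: treat as bad

if __name__ == "__main__":
    print(classify_gray_program([True] * 95 + [False] * 5))   # allow
    print(classify_gray_program([True] * 10))                 # ask_user
```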

Firewalls
The term firewall refers to a barrier erected to prevent intruders from entering your network or personal computer. Firewalls are classified into network firewalls and process firewalls. A network firewall prevents unauthorized external traffic from accessing internal resources and vice versa. Process firewalls, on the other hand, allow or deny new processes from being created. Network firewalls limit their scope to network connections, whereas process firewalls limit their scope to process creation time. Thus, they differ from the IDS systems (such as HIPS) described above, in that HIPS monitors processes actively: a HIPS program might allow a process to start but control an operation (such as a registry write) at run-time, thus offering fine-grained control.
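
The distinction can be sketched as follows: a process firewall makes one decision at process-creation time, while a HIPS-style monitor also vets sensitive operations during the process's lifetime. The executable names and blocked operations below are hypothetical.

```python
# Rough sketch of the distinction above: a process firewall decides once, at
# creation time, while a HIPS-style monitor also vets individual operations
# (e.g. a registry write) while the process is running.
APPROVED_EXECUTABLES = {"outlook.exe", "excel.exe"}

def process_firewall_allows(executable_name):
    """Single allow/deny decision when the process is created."""
    return executable_name in APPROVED_EXECUTABLES

def hips_allows_operation(executable_name, operation):
    """Finer-grained decision made while the process is already running."""
    if not process_firewall_allows(executable_name):
        return False
    # Even an approved process may be blocked from sensitive operations.
    blocked_operations = {"registry_write", "driver_install"}
    return operation not in blocked_operations

if __name__ == "__main__":
    print(process_firewall_allows("excel.exe"))                  # True
    print(hips_allows_operation("excel.exe", "registry_write"))  # False
```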

Note: Sometimes the term personal firewall is applied to the class of network firewalls that are installed on host (personal) machines. ZoneAlarm (Free Edition) is an example of a personal firewall that protects the host from attacks by outside entities. The term system firewall is also applied to HIPS! There is clearly a lot of clutter around these terminologies; the lack of standards has left room for marketing muddle, with various vendors flexing and bending the definitions to stake their claims.

Sandboxing/Virtualization
Sandboxing is the technique of isolating a process and letting it run within a virtual container. Sandboxing is not a new technique, nor is it used solely for security. Popular technologies such as VMware, Java's applet sandbox and .NET all provide some sandboxing capability. The administrator can specify a security policy so that applications running within these virtual containers can (or cannot) interact with the host system and the network to which they are connected.

Therefore, in theory, the potential damage done by any application is limited to the virtual container alone, which in most cases can be rebuilt with ease. Furthermore, sandboxing is somewhat similar to behavior blocking in the sense that all sandboxes have to monitor behavior and enforce policies at run-time. However, unlike behavior blockers, sandboxes limit the exposure to a virtual container. This introduces an additional layer of resources (memory, filesystem, etc.), which tends to cause process execution delays.
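
As a conceptual sketch only (not a real isolation mechanism), the snippet below captures the sandboxing idea by redirecting an application's file writes into a disposable overlay directory that can simply be thrown away.

```python
# Toy sketch of the sandboxing idea: writes by a contained application are
# redirected to an overlay directory, so the real filesystem stays untouched and
# the "container" can simply be deleted and rebuilt.
import os
import shutil
import tempfile

class FileSandbox:
    def __init__(self):
        self.overlay = tempfile.mkdtemp(prefix="sandbox_overlay_")

    def _redirect(self, path):
        """Map a host path to its location inside the overlay."""
        return os.path.join(self.overlay, path.lstrip("/\\").replace(":", ""))

    def write(self, path, data):
        """All writes land in the overlay instead of the real path."""
        target = self._redirect(path)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "w") as handle:
            handle.write(data)

    def discard(self):
        """Throwing the container away undoes everything the app 'changed'."""
        shutil.rmtree(self.overlay)

if __name__ == "__main__":
    box = FileSandbox()
    box.write("/etc/hosts", "127.0.0.1 evil.example.com")  # real hosts file is untouched
    box.discard()                                           # the "damage" is gone
```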

Note: The term virtualization is often used together with sandboxing technologies. Several emulators (hardware, OS, etc.) fall under this category.

Anti-Virus Technology Comparison Chart:

Signature Based Anti-Virus
Action: Passive
Strength: Omnipresent; effective against static malware
Weakness: Ineffective against polymorphic and metamorphic viruses
Scope of Monitoring: Files, memory
Intrusiveness: Low
Examples: Symantec, AVG, McAfee, Kaspersky, etc.

Heuristics
Action: Proactive
Strength: Can detect known families of viruses
Weakness: Only moderately successful against unknown families of viruses
Scope of Monitoring: Processes
Intrusiveness: Low
Examples: Symantec, AVG, McAfee, Kaspersky, etc.

Behavior Blocking (IDS)/HIPS/System Firewalls
Action: Proactive
Strength: Can detect all intrusive behavior
Weakness: Very noisy; weak decision-making intelligence; cannot detect static viruses
Scope of Monitoring: Processes (full lifecycle)
Intrusiveness: High
Examples: DriveSentry, Prevx, FireStorm, WinPooch

IDS with Threat Level Scoring
Action: Proactive
Strength: Aids user decision making
Weakness: Moderately noisy
Scope of Monitoring: Processes (full lifecycle)
Intrusiveness: Moderate
Examples: DriveSentry

IDS with Threshold Based Threat Level Scoring
Action: Highly proactive
Strength: Very low intrusiveness
Weakness: -
Scope of Monitoring: Processes (full lifecycle)
Intrusiveness: Low
Examples: DriveSentry

Process Firewalls
Action: Proactive
Strength: Monitors processes at startup
Weakness: Not granular enough
Scope of Monitoring: Process startup time
Intrusiveness: High
Examples: -

Personal Firewall
Action: Somewhat proactive
Strength: Monitors network processes and traffic
Weakness: Does not cover all processes
Scope of Monitoring: Network process startup time
Intrusiveness: Moderate
Examples: ZoneAlarm, Norton Personal Firewall

Anti-Spyware
Action: Proactive
Strength: Effective against spyware (tracking cookies, etc.)
Weakness: Does not supplement traditional anti-virus
Scope of Monitoring: Mainly web (browser) based processes
Intrusiveness: Low
Examples: Microsoft Anti-Spyware, ePestControl, AVG

White Listing
Action: Proactive
Strength: Improves system integrity
Weakness: False positives
Scope of Monitoring: Files
Intrusiveness: Moderate
Examples: Bit9, DriveSentry

Gray Listing (With Peer Review)
Action: Proactive
Strength: Improved decision making
Weakness: Reliance on external users' knowledge
Scope of Monitoring: Files
Intrusiveness: Low
Examples: DriveSentry

Sandboxing/Virtualization
Action: Passive
Strength: Robust security
Weakness: Slows down system performance; additional licensing cost
Scope of Monitoring: Virtual memory
Intrusiveness: Low
Examples: VMware

Monday, May 19, 2008

The Future of Desktop Security - Part 1 - The Demise of Anti-Virus

In this four-part series, I would like to present the various technologies that make up desktop security. Arguably, desktop security has become synonymous with anti-virus technology, with perhaps a little desktop administration and user-rights management thrown in.

However, in this article, I would like to present some newly emerging trends in desktop security and show why traditional signature-based anti-virus technologies are on the wane. Besides, I would like to show why the convergence of traditional signature-based anti-virus with newly emerging trends in proactive security such as HIPS, firewalls, etc. is key. This is an area where large anti-virus companies such as Symantec and McAfee battle with alternative technology providers such as Prevx, Bit9 and DriveSentry to offer one unified product.

Moving towards the enterprise side, I see policy-based desktop management as an extension of IT security management. In the coming days, the battle for domination in desktop management by products such as Novell ZENworks and Microsoft SMS will heat up further as newly emerging upstarts such as DriveSentry and Bit9 stretch their wares towards IT configuration, asset management and forensics.

In the first part of this series, I would like to argue that traditional anti-virus technologies (i.e. signature-based virus detection and remediation) are passé. Given the near-100% market penetration that traditional anti-virus companies like McAfee and Symantec enjoy, the death of traditional anti-virus technologies might sound preposterous and untimely. However, I would like to present the various factors that have contributed to the ineffectiveness of traditional signature-based scanning.


Defining Traditional Anti-Virus Technologies

Traditionally, anti-virus solutions have had two major components: a) a virus scanning and detection engine and b) an antigen that nullifies or quarantines the virus and recovers the infected files. Detection is based on signatures that are unique to each virus. Thus, when a new virus surfaces, the anti-virus companies deduce its signature, propagate it via updates and then have the cleaning engine remove the virus from the infected computers.

In most cases the cleaning of the virus is generic enough and is built into the anti-virus engine. However, in some cases the antigen has to be completely recoded and then transmitted to each of the infected hosts. Thus, there is an inherent delay between the time a virus emerges and the time an antigen is developed. Viruses that take advantage of such a delay are called zero-day viruses.
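
For readers who want the mechanism spelled out, here is a minimal sketch of signature-based detection; the signature database is invented, and real engines of course use far larger databases and smarter matching.

```python
# Minimal sketch of signature-based detection: the scanner looks for byte
# patterns (signatures) that uniquely identify known viruses. The database
# below is invented for illustration.
SIGNATURE_DB = {
    "Example.Virus.A": b"\xde\xad\xbe\xef\x13\x37",
    "Example.Worm.B":  b"MZ\x90\x00BADPAYLOAD",
}

def scan_file(path):
    """Return the names of known viruses whose signatures appear in the file."""
    with open(path, "rb") as handle:
        contents = handle.read()
    return [name for name, signature in SIGNATURE_DB.items() if signature in contents]

# A brand-new virus has no entry in SIGNATURE_DB yet, so scan_file() returns an
# empty list for it - exactly the zero-day window of exposure described above.
```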

The Anti-Virus technology is dead. Long Live the Anti-Virus Technology!!
Factor 1- Increasing Sophistication of Viruses
It is a well-known fact that in the ongoing race between viruses and anti-viruses, the viruses always have the edge. In the 25 or so years since the first virus appeared, the level of sophistication that viruses have attained is simply tremendous. From simple beginnings, today's viruses have shown the adaptability well known in their biological cousins. Mutation, polymorphism, genetic makeover - you name it, the viruses have it. The sophistication has grown to the point where each day a new virus emerges exploiting weaknesses in hardware, operating systems, applications, networks, databases and even human behavior (phishing). Furthermore, the motivation of virus coders has shifted from mere personal challenge, hobby or status to commercial gain.



Factor 2 – Increase in Zero Day Threats
With the advent of the Internet and the explosion in other communications technologies, the spread of viruses has been particularly disastrous. The proliferation of new zero-day threats has become a major cause of worry for institutions that are held hostage until an antigen is developed.

Every time a new strain of virus breaks out, the world watches tensely (much like waiting out a SARS or bird flu outbreak) until the anti-virus companies come up with a solution. During this period, the world is held hostage by the newly unleashed virus. Given today's Internet penetration, epidemics such as the ones caused by Slammer (W32), Melissa and CodeRed have been devastating in a short period of time. Such threats have come to be known as zero-day threats, and traditional anti-virus technologies in theory have no solution for them.

Factor 3 – The Decrease in Effectiveness of Traditional Anti-Virus Technologies
With some of the more advanced viruses, such as metamorphic viruses, the chances of an anti-virus technology being able to detect and squelch them are 30% at best. The pace as well as the intensity (sophistication) of such attacks also seems to be on the rise. According to Gartner, by 2007 75% of enterprises will be infected with undetected, financially motivated, targeted malware (MacDonald, 2007). Thus, in the race between the good and the bad, the bad seem to be creeping ahead, leaving the good far behind.


Factor 4 – Anti-Virus Technologies Are Reactive Not Proactive by Nature
The failure of anti-virus is clear. The reason anti-virus products are hopelessly behind in this new race is that AV technologies are reactive by nature. That is, anti-virus software waits for a new malware attack before coming up with the defense. Such technologies have worked reasonably well in a static world dominated by sporadic virus outbreaks.

But not anymore. We are now in a world of cyber-Dollies (Dolly - the cloned sheep) and mutating engines. What IT security managers need is proactive technology that can prevent viruses before they sting - or better still, before they clone, or better yet, before they are born. This concept is akin to the pre-crime division that Steven Spielberg famously portrayed in his movie "Minority Report".

Technical Note: Most anti-virus products come with some level of proactive security in the form of heuristics. However, such heuristics are only effective against known families of viruses; for entirely new viruses they are ineffective. (Refer to part 2 of this series for a discussion of heuristics.)

Anti-Virus - A Technology Past Its Prime
Anti-virus has long served the cyber-world with stellar results, and its contribution should be duly recognized. Without it, many of the tremors that we experienced with viruses would have felt more like major earthquakes or tsunamis. However, according to Computerworld magazine (quoting Symantec), in 2007 alone about 700,000 new viruses (roughly 70% of all viruses known today) were created. By any measure this is alarming. Moreover, new viruses clone themselves into totally new breeds, taking corporate IT networks hostage and turning them into cloning factories. Thus, we need to look at alternatives to traditional anti-virus to secure ourselves.


References:
1) MacDonald, Neil – Host-Based Intrusion Prevention Systems (HIPS) Update: Why Antivirus and Personal Firewall Technologies Aren't Enough – Gartner Teleconference, 25 Jan 2007
2) Chen, Thomas – The Evolution of Viruses and Worms – Working Paper
3) Cohen, Fred – "Computer Viruses: Theory and Experiments" – Computers and Security, vol. 6, pp. 22-35, Feb 1987
4) DriveSentry – To learn more about DriveSentry's products, visit http://www.drivesentry.com/
5) Evelson, Boris; Teubner, Collin – How the Convergence of Business Rules, BPM and BI Will Drive Business Optimization – Forrester Research, 2008
6) Malware Count Blows Past 1 Million Mark – Computerworld, Apr 8, 2008 – http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075518
7) Surowiecki, James (2004) – The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations
8) Vollmer, Ken; Teubner, Collin – "Increase Business Agility with BPM Suites" – Forrester Research – http://www.forrester.com/Research/Document/Excerpt/0,7211,40041,00.html