“Computer viruses are an urban myth.”

Peter Norton, circa 1988

The 1990s

In the 1990s, having a security agent on your computer meant having an antivirus package installed (or, in many cases, pre-installed). The two most popular solutions at the time were McAfee Antivirus and Symantec Antivirus, because they had worked out licensing deals with most Original Equipment Manufacturers (OEMs) to have their software pre-installed on each system the OEM sold. Most malicious software of the era, such as the Morris worm (1988) or the Melissa virus (1999), seemed to be written more as a proof of concept than to deliberately cause harm, even if the damage it did was real. Most businesses were just starting to adopt computer systems and learn the potential they could unlock for their workforce and their bottom line.

The 2000s

By the turn of the century, after the DotCom Era Bubble burst, many companies were left picking up the pieces. Attackers, on the other hand, did not slow down. As more money was transacted across the Internet thanks to companies like eBay and Amazon, attackers saw an opportunity to profit from their nefarious skills. No longer was malicious software written by highly skilled academics as a proof of concept, or unintentionally released into the wild by a lovesick student. It was quickly becoming evident that traditional antivirus software would no longer be adequate. It was time for a new era of security software to step up.

“Hackers are breaking the systems for profit. Before, it was about intellectual curiosity and pursuit of knowledge and thrill, and now hacking is big business.”

Kevin Mitnick

The Mid-2000s

By the mid-2000s, as broadband service providers became ubiquitous across America, Internet commerce began to rise from the ashes of the DotCom Era Bubble and take flight. This also marked the era of spyware and adware. Seemingly overnight, companies such as Gator Corporation created free software to fill in web forms and manage financial information like credit card numbers. This software was almost never open source or built by a community of well-meaning developers. Instead, it was created to collect sites visited, credit card numbers, and other data, all while posing as simple, helpful software. This rise of objectionable software brought us the likes of Spybot Search & Destroy, Malwarebytes Anti-Malware, SUPERAntiSpyware, AdwCleaner, SpywareBlaster, and a whole host of free online scanners as antivirus manufacturers attempted to innovate. But most of these solutions would be uninstalled once the system was cleaned, leaving it highly vulnerable to re-infection. Businesses often operated in the same manner, relying on their trusty fallback of a good antivirus solution. The only real innovations in the antivirus market at the time were real-time (in-memory) scanning, heuristic scanning, and more frequent definition updates.

The 2010s

Around the turn of the decade, Information Security as an industry began to take shape. Many people outside the industry also began to realize this problem was not going to go away and that we could not create a perfect protection mechanism. Security experts knew this in the 1980s, but it took a while to spread as common knowledge.

“Attacks always get better; they never get worse.”

Attributed to NSA by Bruce Schneier

The 2010s quickly escalated things, bringing us nation-state-sponsored attacks like Stuxnet, which spread beyond its intended target; botnets, or networks of zombie computers used collectively for malicious intent; ransomware, which encrypted user data for ransom, further enabled by anonymous payments; fileless malware, which could clean up after itself; polymorphic malware, which mutates with each install, becoming virtually undetectable with traditional scanning techniques; crypto-jacking, or the misuse of computing resources to mine cryptocurrencies; and every combination of the threats above.

Along with these new, emerging threats came companies with innovative solutions, such as SentinelOne, CrowdStrike, FireEye, Cylance, Carbon Black, Forcepoint, and many others. Stopping inside threats had become just as important as stopping outside ones. Seemingly overnight, companies began to track file integrity, network traffic, user behavior, database access, and many other aspects of their environments. Being attacked was no longer a question of “if”, but “when”.

All of these solutions required one or more agents, and every agent consumed resources: 1-3% of CPU cycles here, 100-200 MB of RAM there. Added together, they began to form a formidable obstruction to productivity.
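The arithmetic is easy to sketch. The agent names and per-agent figures below are hypothetical, drawn only from the 1-3% CPU and 100-200 MB RAM ranges above, but they show how quickly a modest-looking stack adds up:

```python
# Hypothetical per-agent footprints; names and numbers are illustrative only,
# taken from the 1-3% CPU / 100-200 MB RAM ranges typical of a single agent.
agents = {
    "antivirus":      {"cpu_pct": 3, "ram_mb": 200},
    "edr":            {"cpu_pct": 2, "ram_mb": 150},
    "dlp":            {"cpu_pct": 1, "ram_mb": 100},
    "file_integrity": {"cpu_pct": 2, "ram_mb": 120},
    "user_behavior":  {"cpu_pct": 1, "ram_mb": 100},
}

# Sum the individual footprints to see the combined cost of the stack.
total_cpu = sum(a["cpu_pct"] for a in agents.values())
total_ram = sum(a["ram_mb"] for a in agents.values())

print(f"{len(agents)} agents: ~{total_cpu}% CPU, ~{total_ram} MB RAM")
```

Five individually modest agents end up claiming nearly a tenth of the CPU and over half a gigabyte of memory before the user opens a single application.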

All of this brings us to today. This is our current state. At the time of this writing, most experts and practitioners believe our best solution is to deploy threat management technologies in layers. This means, if one layer is compromised or vulnerable, there will still be yet another layer of protection. On the endpoint, having multiple solutions leads to one glaring issue: agent bloat.

A Discussion of Solutions

Because businesses still face this issue and the Information Security industry has not yet collectively decided on a path forward, companies must face the challenge independently. Since each selection of deployed threat management solutions is driven by different factors, such as cost, features, unique business requirements, and threat models, a common solution to agent bloat may yet be out of reach for some time. Still, there are some commonalities worth examining.

Feature Overlap

Because every threat management vendor performs a broadly similar task, purchased solutions are often deployed with duplicate features enabled. Identify the features of each deployed solution and, where they conflict in your computing environment, keep the most robust implementation enabled in its intended product and disable the duplicates elsewhere.
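As an illustration (the product and feature names below are hypothetical), a few lines of Python can flag which features are enabled in more than one deployed product, giving you a worklist of duplicates to review:

```python
from itertools import combinations

# Hypothetical deployed products and their currently enabled feature sets.
deployed = {
    "ProductA": {"real_time_scan", "firewall", "usb_control"},
    "ProductB": {"real_time_scan", "edr_telemetry", "firewall"},
}

# Compare every pair of products and report duplicated features, so the
# weaker implementation can be disabled in one of them.
for (name_a, feats_a), (name_b, feats_b) in combinations(deployed.items(), 2):
    overlap = feats_a & feats_b
    if overlap:
        print(f"{name_a} / {name_b} both enable: {sorted(overlap)}")
```

In this sketch the worklist is the firewall and real-time scanning features, which both products have enabled; per the guidance above, the stronger implementation stays on and its twin is switched off.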

Scheduled Scans

Many agents scan in real time, which can itself be problematic, but they also often perform periodic full scans of the file system. These full scans should never be scheduled at the same time as another product's. Full system scans should be completed outside of business hours, naturally, but also within their own scan window. Scan windows should be maintained meticulously and re-evaluated whenever a new solution is purchased or the current product is renewed or upgraded.
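One simple way to keep scan windows honest is to check every configured pair for overlap. The products and times below are hypothetical, and the sketch assumes same-day windows (it does not handle windows that cross midnight):

```python
from datetime import time
from itertools import combinations

# Hypothetical full-scan windows per product: (start, end), all after hours.
windows = {
    "ProductA": (time(1, 0), time(3, 0)),
    "ProductB": (time(3, 30), time(5, 0)),
    "ProductC": (time(2, 30), time(4, 0)),  # collides with both neighbors
}

def overlaps(a, b):
    """True when two same-day (start, end) windows intersect."""
    return a[0] < b[1] and b[0] < a[1]

# Every pair of products whose scan windows collide.
conflicts = [
    (p, q)
    for (p, wa), (q, wb) in combinations(windows.items(), 2)
    if overlaps(wa, wb)
]
print(conflicts)
```

Running a check like this whenever a product is added, renewed, or upgraded turns the "maintain scan windows meticulously" advice into a one-line task instead of a spreadsheet audit.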

Whitelisting Products

Most vendors maintain a list of files and directories they recommend whitelisting in other threat management solutions. These lists should be followed, maintained, and revisited often to ensure the lowest performance impact on a computing environment. If a list is not easily found in the provided documentation, ask for it; Customer Support can often provide it.
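The bookkeeping can be automated. Given each vendor's recommended exclusion paths (the products and paths below are hypothetical), a small script can generate the list every product should whitelist, namely every other product's directories but never its own:

```python
# Hypothetical vendor-recommended exclusion paths, keyed by product.
vendor_exclusions = {
    "ProductA": ["C:/Program Files/ProductA/", "C:/ProgramData/ProductA/"],
    "ProductB": ["C:/Program Files/ProductB/"],
}

def exclusions_for(product):
    """Paths this product should whitelist: every other product's directories."""
    return sorted(
        path
        for other, paths in vendor_exclusions.items()
        if other != product
        for path in paths
    )

for product in vendor_exclusions:
    print(product, "->", exclusions_for(product))
```

Regenerating this mapping whenever a product is added or removed keeps the whitelists current without anyone hand-editing each console, which is exactly where stale, performance-sapping entries tend to accumulate.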

While there is still no silver bullet to fix our current predicament, there are still many good steps to take to help cut back on agent bloat. If you find yourself stuck, ask for help. Reach out to Customer Support or your Technical Account Manager to seek out solutions.

In all of our efforts to stay secure, one thing we must keep in mind is to never become a roadblock to the business. Security is not about saying no; it is about making smart decisions that let a business innovate and find new ways to stay secure.

“Security should help and enable the business.”

Dr. Eric Cole
