Security Agent Bloat: A Growing Concern

“Computer viruses are an urban myth.”

Peter Norton, circa 1988

The 1990s

In the 1990s, having a security agent on your computer meant having an antivirus package installed (or, in many cases, pre-installed). The two most popular solutions at the time were McAfee Antivirus and Symantec Antivirus, because both had worked out licensing deals with most Original Equipment Manufacturers (OEMs) to have their software pre-installed on each system an OEM sold. Much of the malicious software of the era, such as the Morris worm or the Melissa virus, was relatively harmless by modern standards and seemed to be written more as a proof of concept than to deliberately cause damage. Most businesses of the era were just starting to adopt computer systems and learn what they could unlock for their workforce and their bottom line.

The 2000s

By the turn of the century, after the dot-com bubble burst, many companies were left picking up the pieces. Attackers, on the other hand, did not slow down. As more money moved across the Internet thanks to companies like eBay and Amazon, attackers saw an opportunity to profit from their nefarious skills. No longer was malicious software written by highly skilled academics as a proof of concept, or released unintentionally by a graduate student lamenting a lost girlfriend. It was quickly becoming evident that traditional antivirus software would no longer be adequate. It was time for a new era of security software to step up.

“Hackers are breaking the systems for profit. Before, it was about intellectual curiosity and pursuit of knowledge and thrill, and now hacking is big business.”

Kevin Mitnick

The Mid-2000s

By the mid-2000s, as broadband service became ubiquitous across America, Internet commerce began to rise from the ashes of the dot-com bubble and take flight. This also marked the era of spyware and adware. Seemingly overnight, companies such as Gator Corporation created free software to fill web page forms and help manage financial information like credit card numbers. This software was almost never open source or built by a community of well-meaning developers. Instead, it was created to collect sites visited, credit card numbers, and other data, all while posing as simple, helpful software. This rise of objectionable software brought us the likes of Spybot Search & Destroy, Malwarebytes Anti-Malware, SUPERAntiSpyware, AdwCleaner, SpywareBlaster, and a whole host of free online scanners as antivirus manufacturers attempted to innovate. But most of these tools would be uninstalled once the system was cleaned, leaving it highly vulnerable to re-infection. Businesses often operated in the same manner, relying on their trusty fallback of a good antivirus solution. The only real innovations in the antivirus market at the time were real-time (in-memory) scanning, heuristic scanning, and more frequent definition updates.

The 2010s

Around the turn of the decade, Information Security as an industry began to take shape. Many people outside the industry also began to realize that this problem was not going away and that no perfect protection mechanism could be built. Security experts had known this since the 1980s, but it took a while for it to become common knowledge.

“Attacks always get better; they never get worse.”

Attributed to NSA by Bruce Schneier

The 2010s quickly escalated things by bringing us nation-state-sponsored attacks like Stuxnet, which spread beyond its intended target; botnets, networks of zombie computers used collectively for malicious intent; ransomware, which encrypts user data for ransom and was further enabled by anonymous payment systems; fileless malware, which can clean up after itself; polymorphic malware, which mutates with each install, becoming virtually undetectable with traditional scanning techniques; crypto-jacking, the misuse of computing resources to mine cryptocurrencies; and every combination of the threats above.

Along with these new, emerging threats came companies with innovative solutions, such as SentinelOne, CrowdStrike, FireEye, Cylance, Carbon Black, Forcepoint, and many others. Not only was it becoming important to stop outside threats; it was just as important to stop inside threats. Seemingly overnight, companies began to track file integrity, network traffic, user behavior, database access, and many other aspects of their environments. Being attacked was no longer a question of “if”, but “when”.

All of these solutions required an agent, or multiple agents, and all of those agents required resources. Each one typically consumed 1-3% of CPU here or 100-200 MB of RAM there. Added together, they began to form a formidable obstruction to productivity.

All of this brings us to today, our current state. At the time of this writing, most experts and practitioners believe our best approach is to deploy threat management technologies in layers: if one layer is compromised or vulnerable, another layer of protection remains. On the endpoint, however, running multiple solutions leads to one glaring issue: agent bloat.

A Discussion of Solutions

Because businesses still face this issue and the Information Security industry has not collectively decided on a path forward, companies must meet this challenge independently. Since each selection of deployed threat management solutions is driven by different factors, such as cost, features, unique business requirements, and threat models, a common solution to agent bloat may yet be out of reach for some time. Still, there are some commonalities worth examining.

Feature Overlap

Because every threat management vendor is effectively performing a similar task, purchased solutions are often deployed with duplicate features enabled. Each feature should be identified for every deployed solution, and where features conflict in a computing environment, the most robust implementation should remain enabled in its intended product and be disabled in the sister products.
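The audit behind this can be as simple as a feature matrix. The sketch below, using hypothetical product and feature names (not taken from any vendor), flags any feature enabled in more than one deployed agent:

```python
from collections import defaultdict

# Hypothetical inventory: which features are enabled in which product.
deployed = {
    "ProductA": {"real_time_scan", "firewall", "usb_control"},
    "ProductB": {"real_time_scan", "edr_telemetry"},
    "ProductC": {"firewall", "dlp"},
}

def find_overlaps(products):
    """Return each feature that is enabled in more than one product,
    mapped to the sorted list of products that enable it."""
    owners = defaultdict(list)
    for product, features in products.items():
        for feature in features:
            owners[feature].append(product)
    return {f: sorted(p) for f, p in owners.items() if len(p) > 1}
```

Once the overlaps are known, each contested feature can be assigned to the product with the most robust implementation and disabled everywhere else.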

Scheduled Scans

Many agents scan in real time, which can itself be problematic, but they also often perform periodic full scans of the file system. These full scans should never be scheduled at the same time as another product's. Full system scans should be completed outside of business hours, naturally, but also within their own scan window. Scan windows should be maintained meticulously and re-evaluated with the purchase of a new solution or at the renewal or upgrade of a current product.
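A simple interval check can verify that no two scan windows collide before a schedule is rolled out. This is only a sketch, with hypothetical product names and times, and it assumes windows do not cross midnight:

```python
from datetime import time
from itertools import combinations

# Hypothetical full-scan windows as (start, end) in local time.
windows = {
    "ProductA": (time(1, 0), time(3, 0)),
    "ProductB": (time(2, 30), time(4, 30)),  # collides with ProductA
    "ProductC": (time(5, 0), time(6, 0)),
}

def clashing_windows(windows):
    """Return sorted pairs of products whose scan windows overlap."""
    clashes = []
    for (a, (a0, a1)), (b, (b0, b1)) in combinations(windows.items(), 2):
        if a0 < b1 and b0 < a1:  # classic interval-overlap test
            clashes.append(tuple(sorted((a, b))))
    return sorted(clashes)
```

Running a check like this at renewal or upgrade time makes the "own scan window" rule enforceable rather than aspirational.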

Whitelisting Products

Most vendors maintain a list of files and directories they recommend whitelisting in other threat management solutions. These lists should be followed, maintained, and revisited often to ensure the lowest performance impact on a computing environment. If such a list is not easily found in the provided documentation, ask for it; Customer Support can often provide it.
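Keeping every vendor's recommended exclusions in one normalized, reviewed place makes those revisits much easier. A sketch follows, with made-up paths standing in for real vendor documentation; it folds case and trailing separators so duplicates across vendors surface immediately:

```python
from pathlib import PureWindowsPath

# Hypothetical vendor-recommended exclusion lists (illustrative paths only).
vendor_exclusions = {
    "ProductA": ["C:\\Program Files\\ProductA\\",
                 "C:\\ProgramData\\ProductA\\Quarantine\\"],
    "ProductB": ["c:\\program files\\productb\\",
                 "C:\\Program Files\\ProductA"],
}

def merged_exclusions(per_vendor):
    """Normalize and deduplicate paths, recording which vendors
    recommend each one, so the whole set can be reviewed at once."""
    seen = {}
    for vendor, paths in per_vendor.items():
        for p in paths:
            key = str(PureWindowsPath(p)).lower()
            seen.setdefault(key, set()).add(vendor)
    return seen
```

The merged view also shows which exclusions become obsolete when a product is retired, so stale whitelist entries do not linger.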

While there is still no silver bullet to fix our current predicament, there are still many good steps to take to help cut back on agent bloat. If you find yourself stuck, ask for help. Reach out to Customer Support or your Technical Account Manager to seek out solutions.

In all of our efforts to stay secure, one thing we must keep in mind is to never become a roadblock to the business. Security is not about saying no; it is about making smart decisions that let a business innovate and find new ways to stay secure.

“Security should help and enable the business.”

Dr. Eric Cole

10 Steps to Securing The Endpoint


Endpoint security is often not given much thought or planning within an organization, but it can be incredibly important. For most security-minded people, endpoint security is usually associated with technical controls. These controls are rarely perfect, but they are a good first line of defense in building a more secure network. If you, as a network or systems administrator, can put these controls in place, your life will become much easier over time, because many breaches begin with downloaded malware installed by the end user, with or without their consent. Here are 10 technical controls you can implement on your network to help mitigate that risk.

Standard User Account

This control can be implemented in almost every operating system or lightweight directory environment, and it returns results immediately. The vast majority of malware requires administrative rights to install, even malware that does not require user consent. By giving every endpoint user a standard account, you greatly limit the attack surface that the same account would have exposed running with administrative rights.
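One quick way to audit compliance is a login script that flags endpoints where the interactive user still holds administrative rights. A best-effort, cross-platform sketch (the Windows branch assumes the shell32 IsUserAnAdmin call, which Microsoft documents but has deprecated):

```python
import ctypes
import os

def running_as_admin() -> bool:
    """Best-effort check for elevated rights in the current session."""
    if os.name == "nt":
        try:
            # Deprecated but still present on current Windows builds.
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except (AttributeError, OSError):
            return False
    # POSIX: effective uid 0 means root.
    return os.geteuid() == 0
```

A scheduled task could report any True result to a central share, giving the administrator a running list of machines that still need a standard account.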

Account Auto-Lock Policy

It is common for users not to lock their workstation or laptop when they walk away. By enforcing a policy that locks endpoints when they are not in use, you strengthen user accountability and bolster physical security. In a Microsoft Windows environment, this policy can be easily enforced through Group Policy on all machines in a domain. In a Macintosh environment, the same can be achieved with Profile Manager and a Macintosh server. (Microsoft’s System Center Configuration Manager 2012 R2 can manage Apple Macintosh systems as well as Microsoft Windows systems.)

Operating System Updates

Operating system updates should be a top priority in any environment. Almost every major operating system, as of this writing, has a method for automating at least critical, important, and recommended updates (the naming scheme is often platform dependent). These updates can also be automated with various centralized management platforms, paid and free. Usually, the easiest way to centrally manage updates across a network is to use the vendor’s configuration utility (mentioned under ‘Account Auto-Lock Policy’). A popular free solution is Windows Server Update Services (WSUS) from Microsoft. Apple’s Mac OS X Server can also act as an update proxy server.

Third-Party Software Updates

Patching third-party software is usually a little harder to automate, but it is just as important as operating system updates. Third-party patching can be harder to perform due to differing patch schedules, multiple vendors, and varying support (or the lack thereof) in patching clients. One of the author’s favorites at the time of this writing is Ninite; Ninite Pro can even be deployed across multiple domain-joined machines in an Active Directory environment. There are many other options as well, such as Shavlik and GFI’s LanGuard.

Host-Based Anti-Malware

Anti-malware software has gotten a bad rap over the last few years because most users consider it inadequate. Truthfully, by itself, it is; that is why it is only one-tenth of this recommendation list. Anti-malware software should be the first line of security on an endpoint, and considering it comes with most major operating systems, it should be enabled, no questions asked. It should act as the watchdog on the system, and any alert it raises should cause an escalation of the affected system for further analysis. At the time of this writing, the author recommends Cylance for Windows environments.

Host-Based Firewall

Once again, at the time of this writing, almost every operating system comes with a built-in firewall, much like automatic updates and anti-malware software, and it should simply be enabled. It can be centrally managed in most environments, and third-party vendors have offerings that can be centrally managed as well. Use it; it is better than nothing.

Install Secure Browser

This should be everyone’s favorite topic, because of all the great, free choices. Personally, the author recommends Google’s Chrome browser at the time of this writing. Google Chrome uses certificate pinning (to combat inauthentic certificates), runs each tab as a separate process, has great extension support, and its built-in synchronization makes everyday tasks almost operating-system-agnostic. Mozilla’s Firefox is another fine choice, though by most counts somewhat less secure. Internet Explorer honestly should not even be a choice in 2016. Microsoft’s Edge browser is still a little too new and has no extension support. Apple’s Safari is adequate and fairly secure, but lacks the speed and extension support of Chrome.

Full Hard Drive Encryption

Hard drive encryption is available on most operating systems, provided the correct edition is used. Microsoft offers BitLocker and Apple offers FileVault. A good third-party, open-source alternative with Trust-No-One (TNO) security is VeraCrypt. Full-drive encryption is best used where physical theft is a risk to the organization. It is highly recommended for portable computers such as tablets and laptops, but it can also prove useful for workstations. Encrypting external hard drives and thumb drives is also highly recommended.

Remote Logging

After any breach or security event, logs are one of the first ways to track down exactly what happened. The only issue is, if the logs are located on the same system that was compromised, they can no longer be trusted. The best option is to move, or copy, the logs to a remote system. Usually, a server (preferably two) collects the logs from all of the other systems on the network and stores them securely. Loggly has an open-source solution for those looking to get started.
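For homegrown tooling and scripts, Python’s standard library can ship events to a central syslog collector directly. A minimal sketch; the host name is a placeholder for your own log server, and production setups should prefer TCP or TLS transport where the collector supports it:

```python
import logging
import logging.handlers

def build_remote_logger(host="logs.example.internal", port=514):
    """Return a logger that forwards events to a remote syslog
    collector over UDP syslog (a sketch only)."""
    logger = logging.getLogger("endpoint")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(host, port))
    handler.setFormatter(
        logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Because the record leaves the endpoint the moment it is emitted, an attacker who later compromises the machine cannot quietly rewrite its history.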

External Backups

A good backup should be an integral part of every network. Ideally, all endpoint data is stored centrally and then protected by a standalone backup appliance. If endpoint user data must remain decentralized, even a Windows Backup to an external drive is better than nothing. Getting those backups to an off-site location should also be a top priority. If endpoints are joined to a domain and important files are replicated to a file server, Datto makes a good selection of standalone backup appliances that replicate data to highly secure, redundant, bi-coastal off-site datacenters.
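Where no dedicated appliance is available, even a scheduled script producing timestamped archives beats having nothing. A minimal sketch; rotation, verification, and off-site replication are deliberately left out:

```python
import tarfile
import time
from pathlib import Path

def backup_directory(source: str, dest_dir: str) -> Path:
    """Create a timestamped .tar.gz archive of `source` under
    `dest_dir` and return the archive's path."""
    src = Path(source)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    return archive
```

Pointing `dest_dir` at an external drive or a replicated file share gets the archive off the endpoint, which is the property that matters most after ransomware strikes.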

As an administrator of any type, if you can implement half of these recommendations, you will be doing better than most. Even implementing all 10 steps is by no means a guarantee that nothing bad will happen; these are 10 strong recommendations to help mitigate risk. Put these controls in place, and your life will be much easier and your network much safer.