Learning More Through Exercise

In the 1990s, studies confirmed what many people had asserted for decades: our brains have an autopilot mode. To explore this hypothesis, researchers used fMRI brain scans. They had some participants play a well-known card game with a full understanding of the rules, while other participants played an unfamiliar card game for which the researchers did not define the rules. The second group was left to figure out the card game as they played.

The researchers found that the latter group showed signs of using the parts of the brain associated with learning, with much of the brain active as they worked out the game on the fly. The former group, already familiar with the game and its rules, showed decreased brain activity overall, in a pattern known as the default mode network (DMN).

This default mode network is commonly referred to as the autopilot part of our brains. It is also the part of the brain that is active during activities such as meditation, and it is what allows us to multitask efficiently while performing simple tasks such as walking, running, sitting, driving, and even showering. Anything with which we are already familiar, have fully internalized, or can perform through “muscle memory” relies on the default mode network inside our brains.

Many of us get comfortable with what we do every day; therefore, most of our routine is performed using our default mode network. Our brains want to put us into a routine any chance they get in order to conserve energy. They will do everything in their power to simplify anything we do so that the activity can be set on autopilot. This is demonstrated best with small children.

When a child is first learning to walk, it requires a concentrated effort of will. A child will fall repeatedly as its brain struggles to keep everything in balance while maintaining forward momentum. Eventually, over the course of only a few months, the child’s brain will become increasingly efficient at the once impossible task of walking. Soon, the brain becomes so efficient at walking that the child pushes on to explore running, the natural progression from walking.

The key is that the child only grows by stepping out of its comfort zone of sitting up and crawling. Like the child, we too can use these techniques to grow and expand our abilities and ideas. There is a good chance we are all comfortable, our brains have put as many things as possible on autopilot, and we are just flying through life looking for casual enjoyment. The seat belt lights are off and we feel safe to move about in the regimen of our daily routine. That is no place for growth.

If growth only happens outside of our comfort zone, one of the easiest ways to step out of that comfort zone is through exercise. New exercises, especially, will push you to move in ways for which your brain may not have a preprogrammed flight path. You have probably never found yourself doing a burpee while standing in the check-out line at your local grocery store or pressing out a set of triceps extensions on your drive to work. If you have, we may need to talk. (Feel free to leave a comment!)

Beyond removing you from your comfort zone, exercise has many different paths for personal development if done correctly. It requires an almost Zen-like approach when you focus on the exercise being performed. Taking this focused approach will further force your brain to abandon its default mode network circuitry and begin the phase of learning something new. That sense of learning will spill over into other parts of your life.

You will find yourself breaking old habits in life and at work, discovering new solutions to old problems, and becoming comfortable with being uncomfortable. It will inspire a new sense of confidence as you see changes in your body from hard work and renew your sense of accomplishment in the tasks you do every day. The benefits are only limited by the amount of change you allow into your life.

Although we evolved with the default mode network intact, and it helped us survive in times of famine and need, today it often seems to hinder our ability to grow. It has served us well, and we must respect its ability to make us more efficient, but it is also something we must fight, lest we become complacent and stagnant. Step out of your comfort zone and be committed to your reason for doing so.

Autopilot disengaged.

How Much Does a Data Breach Cost?

There was another big data breach in the news today. Did you see it?

All of the reports said hundreds of millions of records were stolen. They all glossed over the details with the same cold, ruthless efficiency they use to tell us how well the stock market performed. They said emails, passwords, social security numbers, and medical records were all taken. Then, just as soon as the report aired, it ended. That was it. We can now all go on about our daily lives again like nothing ever happened.

But, you and I, we know the real story.

We know that thousands of people have their identities stolen each day, even children who have never had a line of credit. That will make an interesting present for their 18th birthday. We know the email an elderly woman received impersonating her doctor was not a random event—it was a targeted attack. We know those precious pictures of a newborn baby that now sit encrypted on a hard drive are priceless, no matter how much a bitcoin costs. We know that picking up the pieces of a life shattered by the negligence of a company that failed to do the right thing is one of the hardest things someone can face alone. We know that behind every data breach lies the potential to further weaken our trust in institutions, a cornerstone on which our economy is built.

That is why we wake up each morning and do what we do. It is our job to help keep any one of these stories from happening to the people we serve, all of whom are our neighbors, friends, family members, and fellow citizens. We defend their sensitive data from criminals who would erode the foundation of our society and watch us all crumble for malicious gain.

We know that a breach is not about how many records were stolen or how many files were encrypted; it is about how many lives were adversely affected and how much we all stand to lose in the end.

We stand together, proud to be Information Security professionals. It is not just a job. It is a calling.

Security Agent Bloat: A Growing Concern

“Computer viruses are an urban myth.”

Peter Norton, circa 1988

The 1990s

In the 1990s, having a security agent on your computer meant having an antivirus software package installed (or pre-installed, in many cases). The two most popular solutions at the time were McAfee Antivirus and Symantec Antivirus, largely because both had worked out licensing deals with most Original Equipment Manufacturers (OEMs) to have their software pre-installed on the systems each OEM sold. Most malicious software of the era, such as the Morris worm or the Melissa virus, seemed to be written more as a proof of concept than to deliberately cause harm. Most businesses were just starting to adopt computer systems and learn of the potential they could unlock for their workforce and their bottom line.

The 2000s

After the DotCom Era Bubble burst at the turn of the century, many companies were left picking up the pieces. Attackers, on the other hand, did not slow down. As more money was transacted across the Internet thanks to companies like eBay and Amazon, attackers started to see an opportunity to profit from their nefarious skills. No longer would malicious software be written by highly skilled academics as a proof of concept or unintentionally released by a graduate student lamenting a lost girlfriend. It was quickly becoming evident that traditional antivirus software would no longer be adequate. It was time for a new era of security software to step up.

“Hackers are breaking the systems for profit. Before, it was about intellectual curiosity and pursuit of knowledge and thrill, and now hacking is big business.”

Kevin Mitnick

The Mid-2000s

By the mid-2000s, as broadband service became ubiquitous across America, Internet commerce began to rise from the ashes of the DotCom Era Bubble and take flight. This also marked the era of spyware and adware. Seemingly overnight, companies such as Gator Corporation created free software to fill web page forms and help manage financial information like credit card numbers. This software was almost never open-source or made by a community of well-meaning developers. Instead, it was created to collect sites visited, credit card numbers, and other data, all while posing as simple and helpful software. This rise of objectionable software brought us the likes of Spybot Search & Destroy, Malwarebytes Anti-Malware, SUPERAntiSpyware, AdwCleaner, SpywareBlaster, and a whole host of free online scanners as antivirus manufacturers attempted to innovate. But most of these solutions would be uninstalled once the system was cleaned, leaving it highly vulnerable to re-infection. Businesses often operated in the same manner, relying on their trusty fallback of a good antivirus solution. The only real innovations in the antivirus market at the time were real-time (in-memory) scanning, heuristic scanning, and a higher frequency of definition updates.

The 2010s

Around the turn of the decade, Information Security as an industry began to take shape. Many people outside of the industry also began to realize this problem was not going to go away and that we could not create the perfect protection mechanism. Security experts knew this in the 1980s, but it took a while for it to become common knowledge.

“Attacks always get better; they never get worse.”

Attributed to NSA by Bruce Schneier

The 2010s quickly escalated things by bringing us nation-state sponsored attacks like Stuxnet, which spread unintentionally; botnets, or zombie computers used collectively for malicious intent; ransomware, which encrypted user data for ransom, further enabled by anonymous payments; fileless malware, which could clean up behind itself; polymorphic malware, which could create a delta of itself with each install, becoming virtually undetectable with traditional scanning techniques; crypto-jacking, or the misuse of computing resources for mining cryptocurrencies; and every combination of the threats above.

Along with these new, emerging threats came companies with innovative solutions, such as SentinelOne, CrowdStrike, FireEye, Cylance, Carbon Black, Forcepoint, and many, many others. Not only was it becoming important to stop outside threats, it was just as important to stop inside threats. Almost overnight, companies began to track file integrity, network traffic, user behavior, database access, and many other aspects of their environments. Being attacked was no longer a question of “if”, but “when”.

All of these solutions required an agent, or multiple agents. All of those agents required resources. Each one generally consumed 1-3% of CPU cycles here or 100-200 MB of RAM there. Added together, they began to form a formidable obstruction to productivity.

All of this brings us to today. This is our current state. At the time of this writing, most experts and practitioners believe our best solution is to deploy threat management technologies in layers. This means that if one layer is compromised or vulnerable, another layer of protection remains. On the endpoint, however, having multiple solutions leads to one glaring issue: agent bloat.


A Discussion of Solutions

Because businesses still face this issue and the Information Security industry has not yet collectively decided on a path forward, companies must meet this challenge independently. Since each selection of deployed threat management solutions is driven by different factors such as cost, features, unique business requirements, and threat models, a common solution to agent bloat may yet be out of reach for some time. Still, there are some commonalities worth examining.

Feature Overlap

Because most threat management vendors are effectively completing a similar task, purchased solutions are often deployed with duplicate features enabled. Each feature should be identified in each deployed solution and disabled in sister products if it causes conflicts in a computing environment. The most robust implementation of a feature should remain enabled in the intended solution and disabled elsewhere.

Scheduled Scans

Many agents scan in real time, which in itself can be problematic, but they also often perform periodic full scans of the file system. These full scans should never be scheduled at the same time as those of another product. Full system scans should be completed outside of business hours, naturally, but also within their own scan window. Scan windows should be maintained meticulously and re-evaluated with the purchase of a new solution or at the time of renewal or upgrade of the current product.

Whitelisting Products

Most vendors maintain a list of files and directories they recommend whitelisting in other threat management solutions. These lists should be followed, maintained, and revisited often to ensure the lowest possible performance impact on a computing environment. If such a list is not easily found in the provided documentation, ask for it. Customer Support can often provide this documentation.


While there is no silver bullet for our current predicament, there are still many good steps to take to help cut back on agent bloat. If you find yourself stuck, ask for help. Reach out to Customer Support or your Technical Account Manager to seek out solutions.

In all of our efforts to stay secure, one thing we must keep in mind is to never become a roadblock to the business. Security is not about saying no; it is about making smart decisions that help a business innovate and find new ways to stay secure.

“Security should help and enable the business.”

Dr. Eric Cole

Favorite Retro-Computing/Gaming YouTube Channels

For the last few years I have really gotten into retro-computing. I find it fascinating and it really tickles my nostalgia bone. I have compiled a short list of my favorite YouTube channels for those who might be interested.

  1. LGR (Lazy Game Reviews)
  2. The 8-Bit Guy
  3. Nostalgia Nerd
  4. 10 Minute Amiga Retro Cast (Fairly New Channel)
  5. RetroManCave
  6. Retro Recipes
  7. Gaming Historian
  8. Classic Gaming Quarterly
  9. SplashWave
  10. Game Sack
  11. My Life in Gaming
  12. Wrestling with Gaming
  13. The Pixel THING
  14. Metal Jesus (New and Old Gaming)
  15. Technology Connections

How to Enable Security

Security is not about saying “no”…

As Information Security professionals, we often find ourselves saying “no” to a lot of ideas and proposals.

  • “The business wants to purchase a new piece of software to house customer data after only the first demo.” No.
  • “Making me a local admin would allow me to do my job with less hassle.” No.
  • “I want to use Dropbox to store my company files so I can access them at home.” No.
  • “Can we open port 3389 so I can access the servers from home?” No.
  • “The CEO wants to store data in the cloud.” No.

There are times when saying “no” is easier and faster, but “no” is not a solution. The problem is that the word “no” is a barrier. Oftentimes the Information Security department gets a bad reputation inside a lot of companies because it is seen as a place where ideas go to die. Once this reputation is earned it is hard to convince the business otherwise, but it is possible.


Say “how” instead

As professionals we should not be looking to say “no” to everything. Instead, we should start a dialog and find a way to say “how”. If an idea is presented, especially from upper management, there is a good chance it is valid and already has momentum. All of the examples listed above are reasonable: initial demos can be very convincing; access to the infrastructure from outside the network can make a Work From Home program a possibility; and putting data in the cloud can be cost effective. Moreover, there are ways to accomplish these ideas and implement them securely.

It must be kept in mind that new ideas are rough around the edges and fragile. It takes a lot of time, energy, and dedication to keep an idea alive, so any negativity could be seen as confrontational and taken personally. When a proposal first arises, an Information Security professional must participate in the early discussions and help the company pursue the idea with security in mind. A new proposal can easily be shaped and molded early on, unlike one that has already been discussed at length and only reaches the Information Security department later in the process. It comes back to the adage that security should be built in from the start. From this perspective, the Information Security department becomes an integral piece of the business and not just a function of its daily operations.

Information Security should enable the business, as Dr. Eric Cole always reminds us. This means getting involved early, finding viable solutions to tough problems, and staying involved. Slowly, the business will gain confidence in the Information Security department, fostered through cooperation, and the department will flourish. Information Security is about much more than configuring firewalls, monitoring logs, and finding new vulnerabilities; it is about helping the business stay secure, grow, and operate.

Data Processing

I have been watching many videos on early-era computing and they have led me to give a lot of thought to the fundamental concepts in computing we now take for granted.

For example, sometimes we say “data processing” when we speak about our field, but the origin of that term is often long forgotten. The first computers were not used for watching videos or browsing websites; instead, they were primarily used to solve large mathematical problems that would previously have taken teams of humans weeks, if not months, to calculate. These computing systems were single-task machines and had about the same general functionality as a modern Central Processing Unit (CPU). “Memory,” as we presently think of it, was in short supply, and data was stored on punch cards. One of the first major uses of a computing system was to help tabulate the United States Census, which sped the entire process up tremendously.

Put simply, computers at that time were used to process data. That is all they did. That is all they were capable of doing. It was not until many years and decades later that we were able to build on this core concept to arrive at the computing systems we have today. All of it is still built on processing data, manipulating it, and storing it again.

Upgrading Ubuntu from 17.04 to 17.10

In June of 2017 I upgraded my Ubuntu 16.10 (Yakkety Yak) install to Ubuntu 17.04 (Zesty Zapus). In October of 2017 Ubuntu released 17.10 (Artful Aardvark). I had not needed to touch my Ubuntu 17.04 installation for a while, but I felt it was best to keep it up to date, and I was also experiencing day-to-day issues with it. I decided an upgrade might fix some of the issues I was having.

The upgrade process is still fairly simple, but performing it over SSH can still be complicated. I ran into a small snag following my old procedure, so I felt it would be good to adapt it into a more generic upgrade procedure. As before, the process is not set-it-and-forget-it automated; it should be attended. This process assumes you normally access the system via SSH, so SSH should already be configured and in use. For those who do not know how, here is how to set up SSH.

Special Concerns

If the system on which the upgrade will be performed is a production system or contains valuable data, please consider performing a backup first. Several methods for accomplishing this task are listed here.

1. Install Screen

First off, make sure your repositories are up-to-date in your current distribution by running:

sudo apt-get update

Now we can install screen:

sudo apt-get install screen

Enter screen for the first time by typing:

screen

and pressing the Space key for the next page or the Enter key to end, in order to accept the license notice. You can learn more about how to use screen here. The commands to run screen for this process are included below.
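
For quick reference (a sketch of the basics, not a full tutorial), these are the screen commands and key bindings used throughout this guide:

screen -S name        # start a new session named "name"
screen -ls            # list existing sessions
screen -r name        # reattach to a named session
screen -d             # detach a session from outside (or press Ctrl+A, then D, from inside it)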

2. Check for Release and Set to Normal Release Distribution

Check if there is even an update available to you by typing the following command:

do-release-upgrade -c

If a newer version of Ubuntu is available, it will be shown in the returned results. If this command does not work, you will need to install the Update Manager by running the following command:

sudo apt-get install update-manager-core

After the package installs, check for the upgrade again:

do-release-upgrade -c

If no newer version is available, we need to make sure the system is set up to upgrade to the latest normal release, which requires a small edit to a system file. To make this edit, type the following:

sudo nano /etc/update-manager/release-upgrades

Find the line in the file that starts with prompt and make sure it says prompt=normal. If it says prompt=lts, please change it.

If changes were made, press ctrl+o to save changes and then ctrl+x to exit. If no changes, just press ctrl+x.
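
If you prefer to make that edit non-interactively, a single sed command like the following should also work (a sketch; double-check the file contents afterwards):

sudo sed -i 's/^prompt=lts/prompt=normal/' /etc/update-manager/release-upgrades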

Check for a new version again:

do-release-upgrade -c

If there is not one, your distribution may be too old and you may have to consider upgrading manually to a newer version.

Beware of PPAs

Run the following command to check your repositories listed on the system:

grep -r --include '*.list' '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/

-or-

grep -r --include '*.list' '^deb ' /etc/apt/ | sed -re 's/^\/etc\/apt\/sources\.list((\.d\/)?|(:)?)//' -e 's/(.*\.list):/\[\1\] /' -e 's/deb http:\/\/ppa.launchpad.net\/(.*?)\/ubuntu .*/ppa:\1/'

If you have PPAs on the system, they may interfere with the upgrade. Consider removing them and returning any affected packages to the defaults from the supported repositories by installing the following package:

sudo apt-get install ppa-purge

After it installs, remove the PPAs manually with the following command:

sudo ppa-purge ppa-name

Replace ppa-name with the name of the PPA repository.

3. Start a New Screen and Upgrade

At this point we need to start a new screen for the upgrade process, because the upgrade will kill the current SSH session. To do so, type the following command:

screen -S upgrade

This will drop you into what seems like a new terminal session. In this screen type:

sudo do-release-upgrade

IMPORTANT: When going through the upgrade process you will be given a new port on which SSH will function during the upgrade. Document this number–in my case it was 1022. CANCEL the upgrade once you find the new port.

We need to edit the firewall in order to allow access to the host on the new port by running the following command:

sudo iptables -I INPUT -p tcp --dport PORT# -j ACCEPT

Replace PORT# with the new port number presented in the initial part of the canceled upgrade process.
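
After the upgrade finishes and you are back on the standard SSH port, this temporary rule can be removed with the matching delete command (using the same PORT# as above):

sudo iptables -D INPUT -p tcp --dport PORT# -j ACCEPT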

To resume the upgrade, run this command:

sudo do-release-upgrade

Go through the process until you lose SSH access to the session and then keep following this guide.

4. Re-Establish SSH Access

Once you lose access to the default SSH port during the upgrade, you will have to use the new port number the upgrade process opened in order to keep attending the upgrade. Reconnect with the following command, replacing PORT#, USERNAME, and HOSTorIP with your values:

ssh -p PORT# USERNAME@HOSTorIP

Once the new SSH connection is established on the specified port, run the following commands to reattach to the upgrade process:

screen -d
screen -r upgrade

5. Attended Upgrade

There will be multiple prompts during this upgrade process, so it is recommended you sit with it and periodically check it. The upgrade took me roughly 30 minutes in total on a 50 Mbps connection. The download will be roughly 1.4 GB in size, so the connection speed can make this process vary in time drastically.
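
Once the upgrade completes and the system reboots, you can confirm the new release with:

lsb_release -a

If everything went smoothly, the output should report Ubuntu 17.10 (Artful Aardvark).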

I wish you the best of luck with the upgrade! Let me know in the comments what your experience was like.

Checking Disks in Linux

To provide a little background: a few months back I accidentally washed a 32 GB flash drive. I waited a few weeks for it to completely dry out and then did not use it for almost four months. I formatted it recently in Windows and it did not seem to exhibit any issues, but I wanted to know with more assurance that it was reliable.
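
For anyone in a similar situation, one common approach on Linux (not necessarily the method described in the full post) is a destructive badblocks pass, assuming the drive shows up as /dev/sdX and contains nothing you need to keep:

sudo badblocks -wsv /dev/sdX

The -w flag performs a write-mode test that overwrites every block, -s shows progress, and -v reports each bad block found.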

How to Speed Up a Slow External Drive

Problem

A while back a user reached out to me describing a problem of slow access to an external, bus-powered hard drive they had purchased only half a year ago. They said it was a USB 3.0 hard drive and they had also made sure to plug the drive into a USB 3.0 compatible port on their recently purchased laptop. The user also mentioned that the anti-virus solution they were using had unusually long scan times, sometimes running for over 10 hours.

They also described an issue of not being able to properly eject this same external hard drive at the end of the day after using it, but that was a separate issue, which is covered below as well.

Troubleshooting Methodology

After gaining remote access to the system, so I could see what they were seeing, I checked the configuration of the laptop. They were right: the laptop was powerful, with a nice quad-core Intel processor, 8 GB of RAM, and an SSD. But all of this had little to do with why the external hard drive enclosure, which housed a spinning disk, was performing poorly. I loaded the contents of the drive in Windows File Explorer and found multiple folders at the root of the drive. The user began navigating into the folders and subfolders to find some files they were having issues working with. We navigated down five and six levels deep, and at each level I saw many other folders within each directory. I thought I had spotted the first issue: an index that was far too large to be accessed quickly.

Navigating back out to the root of the external drive, I checked the properties of the folder we had just explored and found that, while it was not large in size, it contained tens of thousands of files and folders. We checked a few more folders at the root of the drive together and they were the same way, with tens of thousands of files and folders within each one. I explained that the drive was formatted as NTFS and that this file system keeps a Master File Table, which is basically an index of every folder and file on the disk. As the Master File Table grows larger over time, it can also become fragmented. This fragmentation can drastically slow down the load times of folders and of files within folders, because the actuator that controls the read/write heads has to constantly bounce around the disk to enumerate the files and folders, with all their attributes, within a specified directory.
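
As a quick way to gauge how fragmented a volume has become, the built-in Windows defragmenter can run an analysis-only pass from an elevated Command Prompt (the drive letter below is just a placeholder):

defrag E: /A /V

The /A switch analyzes the volume without defragmenting it, and /V prints a verbose fragmentation report.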

Resolution

We set about resolving this issue by lowering the overall number of files and folders on the disk. We used an application called 7-Zip to compress one of the folders at the root of the external drive and then deleted the original folder from the drive. This lowered the number of entries in the Master File Table, increasing performance almost immediately. Since the user had mentioned that they were seeing incredibly long scan times with their anti-virus solution, I also recommended we password protect the compressed archives, which would keep the anti-virus solution from being able to scan their contents.
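
For reference, the same compress-and-protect step can be performed from the command line, assuming the 7-Zip command-line tool (7z.exe) is installed and on the PATH; the archive and folder names below are placeholders:

7z a -p -mhe=on Projects.7z "E:\Projects"

The a command adds the folder to a new archive, -p prompts for a password, and -mhe=on also encrypts the file names so the archive's contents cannot be enumerated. Once the archive is verified, the original folder can be deleted to shrink the Master File Table.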

Over the course of a few days the user managed to compress and password protect many unused folders at the root of the external drive. They reported back much faster performance of the external drive and the anti-virus scans were no longer taking unacceptable periods of time to complete.

Bonus: Cannot Safely Eject External Drive

We had one last issue to tackle. The user was still having trouble ejecting the disk safely after each use. We plugged in the external drive and were immediately able to eject it safely. We then systematically opened files on the external drive with each application they used to perform their work, saved the file, and tried to eject the drive. Everything went smoothly until the user opened an AutoCAD application file, saved the file, and exited the program. The drive would no longer safely eject. We closed a “helper” program for AutoCAD we found in Task Manager and the drive ejected safely. I showed the user this workaround and also mentioned that a reboot would allow them to safely eject the drive, too.