Upgrading Ubuntu from 17.04 to 17.10

In June of 2017 I upgraded my Ubuntu 16.10 (Yakkety Yak) installation to Ubuntu 17.04 (Zesty Zapus). In October of 2017, Ubuntu released 17.10 (Artful Aardvark). I had not needed to upgrade my Ubuntu 17.04 installation for a while, but I felt it was best to keep it up to date. I was also experiencing day-to-day issues with it, and I decided an upgrade might fix some of them.

The upgrade process is still fairly simple, but performing it over SSH adds some complications. I ran into a small snag following my old procedure, so I felt it would be good to adapt it into a more generic upgrade procedure. As before, the process is not set-it-and-forget-it automated; it should be attended. If SSH is your intended method, it should already be configured and in use. This process assumes you normally access the system via SSH, but for those who do not know how, here is how to set up SSH.

Special Concerns

If the intended system on which the upgrade will be performed is a production system or contains valuable data, please consider performing a backup. There are several methods by which to accomplish this task listed here.

1. Install Screen

First off, make sure your repositories are up-to-date in your current distribution by running:

sudo apt-get update

Now we can install screen:

sudo apt-get install screen

Enter screen for the first time by typing:

screen

and pressing the Space key to page through the welcome message, or the Enter key to dismiss it. You can learn more about how to use screen here. The screen commands needed for this process are included below.
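For reference, a minimal sketch of the screen commands this guide relies on (the session name "upgrade" is simply what we will use later):

```shell
screen -S upgrade   # start a new session named "upgrade"
# Press Ctrl+a, then d, to detach and leave the session running
screen -ls          # list running sessions
screen -r upgrade   # reattach to the named session
```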

2. Check for Release and Set to Normal Release Distribution

Check if there is even an update available to you by typing the following command:

do-release-upgrade -c

If a newer version of Ubuntu is available, it will be shown in the returned results. If this command does not work, you will need to install the Update Manager by running the following command:

sudo apt-get install update-manager-core

After the package installs, check for the upgrade again:

do-release-upgrade -c

If no new version is available, we need to make sure the system is set up to upgrade to the latest normal release, which requires a small edit to a system file. To make this edit, type the following:

sudo nano /etc/update-manager/release-upgrades

Find the line that begins with prompt and make sure it reads prompt=normal. If it says prompt=lts, change it.

If changes were made, press Ctrl+O to save them and then Ctrl+X to exit. If no changes were needed, just press Ctrl+X.
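If you prefer a non-interactive edit, sed can make the same change. The sketch below rehearses the substitution on a throwaway copy under /tmp before touching the real file (the sample file contents are illustrative):

```shell
# Rehearse the edit on a sample copy first
printf '[DEFAULT]\nprompt=lts\n' > /tmp/release-upgrades.sample
sed -i 's/^prompt=lts$/prompt=normal/' /tmp/release-upgrades.sample
grep '^prompt=' /tmp/release-upgrades.sample

# The same substitution, applied to the real file:
# sudo sed -i 's/^prompt=lts$/prompt=normal/' /etc/update-manager/release-upgrades
```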

Check for a new version again:

do-release-upgrade -c

If there is still no new version, your distribution may be too old, and you may need to consider upgrading manually to a newer version.

Beware of PPAs

Run the following command to check your repositories listed on the system:

grep -r --include '*.list' '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/

-or-

grep -r --include '*.list' '^deb ' /etc/apt/ | sed -re 's/^\/etc\/apt\/sources\.list((\.d\/)?|(:)?)//' -e 's/(.*\.list):/\[\1\] /' -e 's/deb http:\/\/ppa.launchpad.net\/(.*?)\/ubuntu .*/ppa:\1/'

If you have PPAs on the system, they may interfere with the upgrade. Consider removing them and reverting any affected packages to the versions in the official support repositories, using the ppa-purge package:

sudo apt-get install ppa-purge

After it installs, remove each PPA with the following command:

sudo ppa-purge ppa-name

Replace ppa-name with the name of the PPA repository, in the form ppa:owner/name.

3. Start a New Screen and Upgrade

At this point we need to start a new screen session for the upgrade, because the upgrade will kill the current SSH session. To do so, type the following command:

screen -S upgrade

This will drop you into what seems like a new terminal session. In this screen type:

sudo do-release-upgrade

IMPORTANT: When going through the upgrade process you will be given a new port on which SSH will function during the upgrade. Document this number; in my case it was 1022. CANCEL the upgrade once you have found the new port.

We need to edit the firewall in order to allow access to the host on the new port by running the following command:

sudo iptables -I INPUT -p tcp --dport PORT# -j ACCEPT

Replace PORT# with the new port number presented in the initial part of the canceled upgrade process.
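Using the port from my upgrade (1022) as the example, the rule and a quick check that it is in place might look like this (a sketch; your port may differ):

```shell
# Allow the temporary SSH port the upgrade announced (1022 in my case)
sudo iptables -I INPUT -p tcp --dport 1022 -j ACCEPT

# Confirm the rule now sits at the top of the INPUT chain
sudo iptables -L INPUT -n --line-numbers | head
```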

To resume the upgrade, run this command:

sudo do-release-upgrade

Go through the process until you lose SSH access to the session and then keep following this guide.

4. Re-Establish SSH Access

Once you lose access on the default SSH port during the upgrade, you will have to use the new port the upgrade process opened in order to keep attending it. Reconnect with the following command:

ssh -p PORT# USERNAME@HOSTorIP

Once the new SSH connection is established on the specified port, run the following commands to reattach to the upgrade session:

screen -d
screen -r upgrade

5. Attended Upgrade

There will be multiple prompts during the upgrade, so it is recommended that you sit with it and check it periodically. The upgrade took me roughly 30 minutes in total on a 50 Mbps connection. The download is roughly 1.4 GB in size, so connection speed can change the overall time drastically.

I wish you the best of luck with the upgrade! Let me know in the comments what your experience was like.

Upcoming Features in Windows: Controlled Folder Access and More…

This fall, just a few short months from the time of this writing, Microsoft will release a follow-up to the Windows 10 Creators Update from earlier this year. It will include some new features, including a few that revolve around the built-in Windows Defender suite. With these changes to Windows Defender, Microsoft hopes to make its latest operating system more resistant to the ransomware attacks that have become so prevalent over the last several years.

One of the features coming with the update is called Controlled Folder Access. Microsoft touts the feature as a direct response to ransomware. It will work via a whitelist approach, with Windows Defender only granting certain applications the privilege to access the data of a protected user account; otherwise, the application is not allowed to read, write, or modify any data a user might own such as documents, pictures, or videos.

The default folder list includes Documents, Pictures, Movies, and Desktop; these are hard-coded into the feature with no option for removal, but additional folders can be added manually through the Windows Defender Security Center. There will also be an option to add custom software to the whitelist, although Microsoft states that most software should already be pre-whitelisted. If an application that is not whitelisted attempts to alter data within a protected folder, it will be automatically blacklisted and the user will be notified. Although this feature has many benefits, Microsoft will ship it disabled by default. It can be enabled in the Windows Defender Security Center under Virus & threat protection settings, as seen below.

[Screenshot: Controlled Folder Access settings window in Windows Defender Security Center, courtesy of the Microsoft blog.]

Other features coming with the Fall Creators Update include a Cloud Clipboard which will allow copy and pasting between multiple Windows 10 devices; a Timeline feature, which will be similar to the app switcher found on many mobile phone operating systems; Pick Up Where You Left Off, which will be an application synchronization service that developers can use much like the Cloud Clipboard; and OneDrive Files On-Demand, which will allow access to files, even if they are only stored in the cloud and not locally.

Windows 10 is also getting a design-language refresh. Microsoft is moving away from the Metro UI toward a more consistent, depth-enabled interface with lighting and motion effects, one being likened to Google’s own Material Design. Overall, it is a welcome change, but one that may demand more resources.

Will you be upgrading? What feature do you look forward to most? Leave a comment below!

How to Upgrade Ubuntu Distribution Versions in 5 Simple Steps

A few days ago I was attempting to install the OpenSCAP security compliance evaluation tool on a system running Ubuntu 16.10 (Yakkety Yak). The base package of OpenSCAP was in my distribution’s repositories and I was able to install it easily. But, it turns out the SCAP Workbench package, which I also really wanted, was only available in Ubuntu 17.04 (Zesty Zapus). My choices were to attempt to build from source using directions written for Fedora, or to upgrade my current Ubuntu distribution to 17.04.

10 Steps to Securing The Endpoint


Endpoint security is often not given much thought or planning within an organization, but it can be incredibly important. For most security-minded people, endpoint security is usually associated with technical controls. These controls are rarely perfect, but they are a good first line of defense in building a more secure network. If you, as a network or systems administrator, can put these controls in place, it will make your life much easier over time, because many breaches begin with downloaded malware installed by the end user, with or without their consent. Here are 10 technical controls you can implement on your network to help mitigate that risk.


Standard User Account

This control can be implemented in almost every operating system or lightweight directory environment, and it returns results immediately. The vast majority of malware requires administrative rights to install, even malware that does not require user consent. By using a standard account on all endpoints, you greatly limit the attack surface that the account would otherwise present if it ran with administrative rights.
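On an Ubuntu endpoint, for example, creating a standard day-to-day account and confirming it carries no administrative rights might look like this (the username is illustrative):

```shell
# Create a standard account; adduser does not grant sudo membership by default
sudo adduser jdoe

# Verify the account is not in the sudo (admin) group
groups jdoe
```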

Account Auto-Lock Policy

It is pretty common: most users do not lock their workstation or laptop when they walk away. By enforcing a policy that locks endpoints when they are not in use, you strengthen accountability for user sessions and bolster physical security. In a Microsoft Windows environment, this policy can easily be enforced through Group Policy on all machines in a domain. In a Macintosh environment, the same can be achieved with Profile Manager and a Macintosh server. (Microsoft’s System Center Configuration Manager 2012 R2 can manage Apple Macintosh systems as well as Microsoft Windows systems.)
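On a GNOME-based Linux endpoint, to give one concrete (illustrative) example, a comparable auto-lock policy can be set per machine with gsettings:

```shell
# Blank the screen after 5 minutes of inactivity...
gsettings set org.gnome.desktop.session idle-delay 300

# ...and require a password as soon as it blanks
gsettings set org.gnome.desktop.screensaver lock-enabled true
gsettings set org.gnome.desktop.screensaver lock-delay 0
```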

Operating System Updates

Operating system updates should be a top priority in any environment. Almost every major operating system, as of this writing, has a method for automating at least critical, important, and recommended updates (the naming scheme is often platform-dependent). Updates can also be automated with various centralized management platforms, both paid and free. Usually, the easiest way to centrally manage updates across a network is to use the vendor’s configuration utility (mentioned under ‘Account Auto-Lock Policy’). A popular free solution is Windows Server Update Services (WSUS) from Microsoft. Apple’s Mac OS X Server can also act as an update proxy server.
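On Ubuntu, for instance, security updates can be applied automatically with the unattended-upgrades package (a sketch; the dpkg-reconfigure step turns on the periodic job):

```shell
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# The resulting schedule is written to /etc/apt/apt.conf.d/20auto-upgrades
```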

Third-Party Software Updates

Patching third-party software is usually harder to automate, but it is just as important as operating system updates. It can be harder to perform due to differing patch schedules, multiple vendors, and limited (or nonexistent) support in patching clients. One of the author’s favorites at the time of this writing is Ninite; Ninite Pro can even be deployed across multiple domain-joined machines in an Active Directory environment. There are many other options as well, such as Shavlik and GFI’s LanGuard.

Host-Based Anti-Malware

Anti-malware software has gotten a bad rap over the last few years because most users consider it inadequate. Truthfully, it is inadequate by itself; that is why it is only one of the ten controls in this list. Anti-malware software should be the first line of security on an endpoint, and considering it comes with most major operating systems, it should be enabled, no questions asked. It should act as the watchdog on the system, and any alert it raises should cause escalation of the affected system for further analysis. The author recommends Cylance, at the time of this writing, for Windows environments.

Host-Based Firewall

Once again, at the time of this writing, almost every operating system comes with a built-in firewall, much like automatic updates and anti-malware software, and it should simply be enabled. It can be centrally managed in most environments, and third-party vendors offer products that can be centrally managed as well. Use it; it is better than nothing.
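On Ubuntu, enabling the built-in firewall is a two-command job with ufw; allow SSH first so a remote session does not lock itself out (a sketch):

```shell
sudo ufw allow ssh      # keep remote access open before enabling
sudo ufw enable
sudo ufw status verbose # confirm the default-deny policy and the SSH rule
```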

Install Secure Browser

This should be everyone’s favorite topic, because of all the great free choices. Personally, the author recommends Google’s Chrome browser at the time of this writing. Chrome uses pinned certificates (to combat inauthentic certificates), runs each tab as a separate process, has great extension support, and its built-in synchronization makes everyday tasks almost operating system-agnostic. Mozilla’s Firefox is another fine choice, though relatively less secure by most counts. Internet Explorer honestly should not even be a choice in 2016. Microsoft’s Edge browser is still a little too new and has no extension support. Apple’s Safari is adequate and fairly secure, but lacks the speed and extension support of Chrome.

Full Hard Drive Encryption

Full hard drive encryption is available on most operating systems, provided the correct edition is used. Microsoft offers BitLocker and Apple offers FileVault. A good third-party, open-source alternative with Trust-No-One (TNO) security is VeraCrypt. Encryption is best used where physical theft is a risk to the organization. It is highly recommended for portable computers such as tablets and laptops, but it can also prove useful for workstations. Encrypting external hard drives and thumb drives is also highly recommended.

Remote Logging

After any breach or security event, logs are one of the first ways to track down exactly what happened. The issue is that if the logs are located on the compromised system itself, they can no longer be trusted. The best option is to move the logs, or copy them, to a remote system. Usually, there will be a server (preferably two) that collects the logs from all of the other systems on the network and stores them securely. Loggly has an open-source solution for those looking to get started.
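On Linux hosts, for instance, rsyslog can forward a copy of every log line to a central collector. The hostname below is illustrative; @@ sends over TCP, while a single @ would use UDP:

```shell
# Forward all logs to a central collector over TCP (hostname illustrative)
echo '*.* @@logserver.example.com:514' | sudo tee /etc/rsyslog.d/90-remote.conf
sudo systemctl restart rsyslog
```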

External Backups

Good backups should be an integral part of every network. Ideally, all endpoint data is stored centrally and then protected by a standalone backup appliance. If endpoint user data must remain decentralized, even a Windows Backup to an external drive is better than nothing. Getting those backups to an off-site location should also be a top priority. If endpoints are joined to a domain and important files are replicated to a file server, then Datto makes a good selection of standalone backup appliances that replicate data to highly secure, redundant, bi-coastal off-site datacenters.


As an administrator of any type, if you can implement half of these recommendations, you will be doing better than most. But even implementing all 10 steps is by no means a guarantee that nothing bad will happen; these are 10 strong recommendations to help mitigate risk. If you put these controls in place, your life will be much easier and your network much safer.