February 25th is coming soon.
Most people in South Louisiana celebrate Mardi Gras.
For the last few years I have really gotten into retro-computing. I find it fascinating and it really tickles my nostalgia bone. I have compiled a short list of my favorite YouTube channels, for those that might be interested.
Laws are beginning to change around what Internet Service Providers can do with the data they collect on the browsing habits of their customers. This has raised concerns with some customers regarding their privacy. People are now looking for ways to keep their browsing habits private and away from their Internet Service Providers. While there is a lot of information on the Internet already on exactly how to achieve the results desired, not all of it is equal. We would like to take the time to clear things up and raise awareness for what is now becoming a concern for many.
Disclaimer: There is no such thing as “perfect security.” None of these methods provide a perfect way by which to protect your browsing habits from being seen by everyone. The main purpose of the article is to merely make it harder for your Internet Service Provider to track your browsing habits, so they cannot use it for monetary gain.
Secure Sockets Layer, commonly referred to as “SSL,” is a method by which two computers can communicate with one another in a private, authenticated manner. Most people encounter SSL when they browse the web with their favorite web browser. It is the underlying technology that allows us to login, bank, shop, and do so much more, all securely. It also does not allow for any eavesdropping of the communications between the two computers involved. At this point in time, this means even your own Internet Service Provider (or anyone else in-between) cannot peer into the information that is being sent back and forth–it is encrypted.
Even though this is a great solution that works natively and automatically, it does not stop some browsing data from “leaking.” The leaked data to which we refer is the initial request for the website. An Internet Service Provider and everyone else can see this initial request for the website. Once the SSL connection is established, all other browsing data transferred is protected. An SSL connection can also be verified in most browsers by looking at or near the address bar for a padlock symbol or the word “Secure,” in the case of Google Chrome, at the time of this writing. One may also look for the term “https” in the URL of a webpage.
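To make the distinction concrete, here is a short Python sketch (the URL is a made-up example) showing which part of an HTTPS request an Internet Service Provider can still observe and which parts travel inside the encrypted connection:

```python
from urllib.parse import urlparse

# A hypothetical page a user visits over HTTPS:
url = "https://example.com/account/settings?session=secret123"
parts = urlparse(url)

# The hostname is still visible to the ISP, via the DNS lookup and the
# plaintext SNI field sent during the initial request:
visible_to_isp = parts.hostname

# The path and query string are only sent after encryption is established:
encrypted = parts.path + "?" + parts.query

print("Visible:", visible_to_isp)
print("Encrypted:", encrypted)
```

In other words, the ISP learns that you visited example.com, but not which page you read or what data you submitted there.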
Since SSL also provides authentication, many websites are moving in that direction already, so it is a very seamless process for those browsing. There are also popular browser plugins like “HTTPS Everywhere” that attempt to establish an SSL connection to a website whenever possible.
One way to combat the initial data leak issue of SSL is to attempt to anonymize browsing habits. A good way to do so is to use a Virtual Private Network, or VPN. There are many different uses for VPNs, but for the purpose of anonymizing browsing habits we will involve a VPN provider. There are many reputable VPN providers, so our intention is not to provide a list. Instead, feel free to contact us if you have concerns about reputable VPN providers.
This technology allows a subscriber to form a secure, virtual private tunnel to the servers of a VPN provider in order to encapsulate all of the data being sent between the subscriber and the VPN provider. This includes the initial request for a webpage that using SSL alone does not protect.
The anonymization of the data comes from how browsing data goes out onto the Internet after it reaches the VPN provider. It goes out along with the data of everyone else, from the same point of origin. Finding the browsing data of one specific person once it emerges onto the Internet from a VPN provider has been likened to trying to find one specific snowflake during a blizzard in the middle of winter. While it is not impossible, it is highly improbable.
A free alternative to using a paid VPN provider is to utilize the Tor Network. We understand there are a lot of negative connotations associated with the Tor Network and a lot of people avoid it for this reason, but it has legitimate uses. The Tor Network allows one to anonymize their browsing habits in much the same way as that of a VPN provider. The way it differs is that it encapsulates all of the transmitted data in the same way Russian Nesting Dolls encapsulate smaller dolls within ever larger ones. In this analogy the browsing data is the innermost doll and as the data is passed along the network each “node” removes one encapsulating doll until the browsing data emerges at the end of its path, at the “exit node,” and is sent to the intended website.
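The nesting-doll analogy can be sketched in a few lines of Python. The XOR “cipher” below is a toy stand-in for the real encryption Tor uses, and the key names are invented; the point is only to show each node peeling away exactly one layer:

```python
from itertools import cycle

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "encryption" (XOR with a repeating key); applying the
    # same key twice removes the layer. Real Tor uses AES, not this.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

request = b"GET https://example.com/"
keys = [b"exit-node-key", b"middle-node-key", b"guard-node-key"]

# The client wraps the request in three layers, innermost layer first:
cell = request
for key in keys:
    cell = xor_layer(cell, key)

# Each node along the path removes the one layer it holds the key for:
for key in reversed(keys):
    cell = xor_layer(cell, key)

print(cell)  # the exit node recovers the original request
```

Only the exit node ever sees the innermost doll, which is why pairing Tor with SSL matters, as discussed next.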
Much like the VPN provider, an “exit node” in the Tor Network does have access to all of the browsing data in an unencrypted form as it exits the Tor Network and emerges onto the Internet, but the best way to deal with this is to also use SSL.
The best solution at the time of this writing is to use a combination of SSL and either a VPN provider or the Tor Network. In this way, the initial request for a website is anonymized and all of the data transmitted between the website and the computer used for browsing is encrypted and authenticated, end-to-end. While this is not a perfect solution, it does serve its ultimate purpose of stopping Internet Service Providers from seeing what requests and data is being transferred across their network, when privacy is the main concern.
This blog post also appeared on the InfoTECH Solutions’ Blog.
There are many people that have taken what is considered the Trinity of CompTIA: A+, Network+, and Security+. But, what comes after Security+? Some people in the security industry suggest tests like the CompTIA Advanced Security Practitioner (CASP), Systems Security Certified Practitioner (SSCP), or even the Certified Ethical Hacker (CEH) exam, but those seem like a jump away from the fundamentals that Security+ offers. That is why CompTIA has introduced an exam that fits in between Security+ and CompTIA Advanced Security Practitioner (CASP) called CompTIA Cybersecurity Analyst+ (CSA+).
Like any other exam from the people over at CompTIA, the Cybersecurity Analyst+ test is vendor neutral. It aims to test applicable knowledge of tools used by security professionals for tasks such as threat detection, data analysis, and vulnerability assessment. The ideal candidate should also know how to detect and combat malware and Advanced Persistent Threats (APTs) through skills such as user and network behavior analytics. They should be able to demonstrate proficiency with tools such as Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), Security Information and Event Management (SIEM) platforms, and packet sniffers.
It is recommended that a candidate looking to take this exam have 3-4 years of experience in a security role. But, if a candidate feels they do not have the experience required to take Cybersecurity Analyst+, Security+ is a good exam to take first. For candidates with almost no security experience or knowledge, it is recommended they sit for Network+ to demonstrate the core competencies required for Security+ and then Cybersecurity Analyst+.
Domain                                     % of Exam
1.0 Threat Management                      27%
2.0 Vulnerability Management               26%
3.0 Cyber Incident Response                23%
4.0 Security Architecture and Tool Sets    24%
The Cybersecurity Analyst+ exam consists of both multiple choice and performance-based questions. Overall, the exam has 85 questions and allows one to sit for 165 minutes (2 hours and 45 minutes). The performance-based questions are simulations of tasks performed by security analysts in their daily roles. Candidates may be asked to look over log files, tool output to determine false positives, or Event Viewer logs to determine which systems have malware. In order to prepare for these questions, CompTIA recommends experience with open-source analytics tools, teamwork, and cyberwarfare exercises with red teams (pen testing) and blue teams (incident responders).
In order to prepare for the Cybersecurity Analyst+ exam, some of the mentioned open-source platforms include Wireshark, a popular packet sniffer; Bro and Snort, two popular Intrusion Detection Systems; and AlienVault OSSIM, a popular open-source Security Information and Event Management (SIEM) platform. The exam is not limited to these tools, nor is their mention an indicator that they will appear on the exam.
At the time of this writing, the CompTIA website states that the exam will be available on February 15, 2017 and will cost $320 for one exam voucher. Potential candidates can also sign up for more details at the bottom of the new page and receive sample questions and exam objectives.
You can also download CompTIA Cybersecurity Analyst+ Exam Objectives.
Wiley also has a book, CompTIA Cybersecurity Analyst (CSA+) Study Guide: Exam CS0-001, written by Michael J. Chapple and David Seidl.
After having to piece together a VirtualBox Guest Additions installation method from some old articles, I decided to put together one for Security Onion 14.04.
Once you have Security Onion 14.04 installed and running, start the Terminal Emulator and walk through the following commands, one-by-one:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo shutdown -r now
After the reboot, launch the Terminal Emulator and run the following commands:
sudo apt-get install build-essential module-assistant dkms
sudo add-apt-repository universe
sudo apt update
sudo m-a prepare
In the virtual machine window, select Devices > Insert Guest Additions CD image…
cd /media/%username%/VBOXADDITIONS (tab complete)
sudo ./VBoxLinuxAdditions.run
sudo shutdown -r now
After the reboot, the VirtualBox Guest Additions should be installed and all features should be available.
As a last step, select Devices > Optical Drives > Remove disk from virtual drive…
No Man’s Sky is a game recently developed by an independent studio called Hello Games. It was launched in August of 2016 for PlayStation 4 and PC.
The PlayStation 4 edition was released just three days ahead of the PC version and it received a lot of high praise. The game had been highly anticipated since 2014, when Sony first showed interest in it, and many gamers and publishers were eager to get their hands on a demo. The game itself is a novel, remarkable breakaway from traditional space adventure games. There is no pre-determined enemy, no campaign one must follow, and no set storyline that drives the game. There is only one true objective: reach the center of the galaxy. Once at the center of the galaxy (a point no gamer is anticipated to experience within a single lifetime) the game starts over again in another galaxy where the player is free to explore all over again. All of this is achieved through rich, complex algorithms that build each planet (and the entire galaxy) procedurally. Put simply, the game is completely computer generated. The game is a computer simulation and the backstory plays into this.
With this much anticipation, and having been so well-received on PlayStation 4, it was surprising when three days later the PC edition came out and things started to turn south quickly for Hello Games. Suddenly, there were horror stories of people paying $60 for the game, only to have it crash before ever fully launching. Many people experienced incompatibilities with graphics cards and different chipsets. Some determined people were able to work around these issues and finally play the game, only to still experience terribly low frame rates and other issues with game play. Few people were actually able to play the game without an issue, and there seemed to be no correlation as to what mix of hardware and software would allow one to experience the game the way it was meant to be played. Hello Games immediately released an “experimental,” untested patch for Steam users, but that left out people that had not purchased the game through Steam. Things could not have looked more bleak for Hello Games. Then Steam began offering refunds for the game, no matter how long the user had played or owned the game, due to the technical issues.
That leads us to the purpose of this article. Why did Hello Games not consider risk management before launching this game on PC?
A common concept in the security industry, risk management allows an organization to achieve its objectives in the safest manner possible by understanding and evaluating risk factors and then putting the proper steps in place to mitigate those risks. Put simply, risk is the possibility of a loss that has not yet been realized.
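A common way to put numbers on this is a likelihood-times-impact score. The scales and ratings below are illustrative assumptions for this article, not figures from Hello Games:

```python
# Qualitative risk scoring: likelihood x impact, each rated on a 1-5 scale.
# Higher scores demand mitigation before proceeding.
def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

# Assumed ratings for the PC launch: incompatibilities very likely,
# consequences (refunds, brand damage) severe.
pc_launch_risk = risk_score(likelihood=5, impact=5)
print(pc_launch_risk)  # 25, the maximum on this scale
```

A score at the top of the scale is exactly the kind of result that should trigger a delayed launch or extra testing cycles.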
The risk, in the case of Hello Games, was a soft risk, but a risk nonetheless. The possibility of occurrence was high, as were the consequences. The consequences, as we see now, were terrible for their sales and financial goals. People are upset about the game not functioning properly on PC and the lack of a timely patch to fix the issues, and are returning the game in high numbers. Gamers have also taken to Reddit and other forums to vent their frustrations. Entire subreddits are now dedicated to this fiasco and are strongly aligned against the studio.
The factor that could have been determined and controlled by Hello Games was possibility. Development could have been delayed on the PC edition (or both editions for a simultaneous launch) until the game was more fully developed and tested. Admittedly, the PC platform is incredibly hard to develop for since there are hundreds of thousands of hardware and software configurations to consider, compared to console development where a developer or studio knows exactly the hardware and software for which they are developing. All of those different configurations increase the possibility of an incompatibility risk, but this is something Hello Games could have mitigated during their development cycles.
Had Hello Games paid more attention to the effect a bad launch would have on their brand (unrealized loss), they probably would have taken control of the risk possibility for incompatibility and released a more fully developed game. This would have mitigated their risk, but it seems that Hello Games was more concerned with meeting a launch date with a simultaneous launch on PlayStation and PC.
Risk mitigation should be considered by every organization, even game studios, because everyone has risk. Hello Games should have considered it when deciding when to launch their game to the general public. It could have mitigated a lot of the issues they are experiencing today (at the time of this writing). Even with all of the advancements in game play and technology, the launch of No Man’s Sky will now forever be remembered as a failure on PC, where the gaming community often remembers faults and forgives slowly.
Malware is a common term heard throughout the security industry, but it is also heard a lot outside the industry, because it has become so ubiquitous in the computing landscape. Many users have to deal with it often, in its many different forms, on their personal computers, their computers at work, and more recently, even their smartphones. Malware, commonly defined as “malicious software,” is any software that was developed with nefarious intentions. It can be as harmless as a prank or as serious as a complete takeover of one’s computer. Malware can take many forms, but rest assured, none of them are pleasant.
A virus is software that is self-replicating and designed to spread from host to host. In the dawn of the computing era, every malicious piece of software was referred to as a virus, but as time has progressed, the term malware has taken its place. A virus will often attach to a host file, replacing or modifying it, so it can then be transported to another host, almost always by user intervention. Early viruses spread through floppy disks, but then transitioned to Internet downloads, jump drives, and emails. One of the first viruses found in the wild was called Elk Cloner. It spread through floppy disks, and upon being run for the 50th time, would take over the victim’s computer and display a short poem dedicated to itself. A less common form of a virus is a boot sector virus. This type of virus copies itself to the boot sector of a hard drive or floppy disk, allowing it to load itself into memory before the operating system or typical anti-virus software has a chance to run. This allows the virus to be persistent and much harder to remove.
Adware, alone, tends to carry the lowest risk of all of the different types of malware. It is often installed with user consent in a bundle with other software. A lot of download sites bundle adware with the free software they offer for download, in order to generate more revenue. On a side note, that is why software should be downloaded from the original author’s website, instead of download sites. The intention behind adware is to monetize software without consent of the original author. A lot of adware injects ads into webpages they do not own or into popular software they did not author. More aggressive adware has also been known to create popups that cannot be closed and generate an overwhelming number of popups with graphic advertisements. It is usually more annoying than harmful, but that does not exempt it from being malware.
Spyware, while very similar to adware, is written for the specific purpose of capturing information generated by a user. It can also be installed in the same way adware is installed–with consent–or as part of a trojan, which we will discuss later. Some spyware can use a keylogger to steal credentials or just capture browsing habits in order to monetize that user’s browsing habits. While it seems closely related to adware, it takes things a step further by tracking usage habits, capturing keystrokes, and monetizing that information, among other things.
A worm, much like a virus, is self-replicating and designed to spread from host to host, but does not require user intervention. All that is required to be infected by a worm is to be on the Internet (or local network) and be vulnerable. Worms are designed to spread at an exponential rate, because as more systems are infected, they go on to infect more machines themselves. A worm combined with a virus can be even more devastating, because the virus then has a method of transportation that is highly efficient, which it did not have by itself. An example of a well-known worm was one named Code Red. It attacked vulnerable IIS web servers back in 2001, spreading across the entire world in less than 24 hours.
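The exponential character of a worm outbreak can be sketched with a toy model. The infection rate below is an assumption chosen for illustration, not data from Code Red:

```python
# Toy spread model: every infected host compromises `rate` new hosts per
# hour, so the infected population multiplies by (1 + rate) each hour.
def infected_after(hours: int, initial: int = 1, rate: float = 2.0) -> int:
    count = initial
    for _ in range(hours):
        count = int(count * (1 + rate))
    return count

print(infected_after(5))   # 243 hosts after 5 hours
print(infected_after(10))  # 59049 hosts after 10 hours, from one infection
```

Compounding like this is why a worm can circle the globe in under a day while a virus that waits on user action cannot.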
A trojan, much like the Trojan Horse of antiquity, is a piece of software that is deceptive in nature. A trojan is often an executable file that looks legitimate, but is carrying a hidden payload of malware. Trojans are essentially malware delivery vessels, in other words. An example of a trojan is a piece of software for which one would normally pay being offered for free on a website that is commonly associated with pirated software. Always proceed with caution with any software that seems “too good to be true,” and always download software from the original author’s website when possible to avoid this type of situation.
A rootkit is one of the more dangerous forms of malware. Most rootkits go undetected, because of the way the malicious software manipulates the underlying file system and presents it to the operating system. Basically, it hides itself. One of the more famous uses of a rootkit was by Sony BMG. When a Sony copy-protected disc was inserted into a computer, a piece of software was automatically installed without user consent and hid itself from the operating system. After much public outcry, Sony released an “uninstaller” that merely un-hid the files, but also installed more software, unbeknownst to the user. These rootkits also introduced vulnerabilities for other malware and eventually led to Sony being hit with multiple class-action lawsuits.
A keylogger, as discussed earlier, logs all keystrokes on a computer. They will often dump all information into a log file which can then be retrieved by an attacker or can be automatically uploaded to a central server that is controlled by an attacker. Some keyloggers are more advanced than others, but they all serve a specific purpose, to log all data input through the keyboard of a computer so an attacker can look for usernames and passwords, credit card information, social security numbers, and other highly valuable information.
Ransomware is by far the most destructive form of malware, and also one of the newest. Ransomware took off around 2013 and has changed the face of malware in a large way. Ransomware, in its many forms, runs in the background encrypting certain file types, and then when it is complete, displays a warning that all of a user’s files are encrypted and holds them for a monetary ransom. That ransom is usually $300 or more per infected machine, at the time of this writing, and is only payable through anonymous payment methods. If remediation is attempted and the malware removed, all files will remain encrypted. Security researchers have discovered decryption methods and keys for some forms of ransomware, but many forms still require payment. When it comes to ransomware, the best defense is prevention and backups.
Although there are many different forms of malware, they all have the same intent: causing issues for users. While malware was more demonstrative and harmless in the beginning, as time has progressed, it has become more destructive and monetized. Malware may be a catch-all phrase in the information technology industry, but now the differences are clear and well defined, and that is important in the information security industry.
Endpoint security is often not given a lot of thought or planning within an organization, but it can be incredibly important. For most security-minded people, endpoint security is usually associated with technical controls. These controls are often not perfect, but are a good first line of defense to help build a more secure network. If you, as a Network or Systems Administrator, can put these controls in place, it will make your life much easier over time, because a lot of breaches begin with downloaded malware that is installed by the end user, with or without their consent. Here are 10 technical controls you can implement on your network to help mitigate that risk.
This control can be implemented in almost every operating system or lightweight directory environment, and it returns results immediately. The vast majority of malware requires administrative rights to install, even the kinds that do not require user consent. By using a standard account on all endpoints, you, as an administrator, greatly limit the attack surface that account would otherwise provide running with administrative rights.
It is pretty common: most users do not lock their workstation or laptop when they walk away. But, by having a policy of locking endpoints when they are not in use, you increase user authenticity and bolster physical security. In a Microsoft Windows environment or network, this policy can be easily enforced through Group Policy on all machines in a domain. In a Macintosh environment and network, this can also be achieved with Profile Manager and a Macintosh Server. (Microsoft’s System Center Configuration Manager 2012 R2 can also manage Apple Macintosh systems, as well as Microsoft Windows systems.)
Operating system updates should be top priority in any environment. Almost every major operating system, as of this writing, has a method for automating updates for at least critical, important, and recommended updates (naming scheme is often platform dependent). These updates can also be automated with different centralized management platforms, paid and free. Usually, the easiest way to centrally manage updates across a network is to use the vendor’s configuration utility (mentioned under ‘Account Auto-Lock Policy’). A popular, free solution is Windows Server Update Services (WSUS) by Microsoft. Apple’s Mac OS X Server can also act as an update proxy server.
Patching third-party software is usually a little harder to automate, but is just as important as operating system updates. Third-party patching can be harder to perform due to different patching schedules, multiple vendors, and support, or the lack thereof, in patching clients. One of the author’s favorites at the time of this writing is Ninite. Ninite Pro can even be implemented across multiple domain-joined machines in an Active Directory environment. But, there are many other options like Shavlik and GFI’s LanGuard.
Anti-malware software has gotten a bad rap over the last few years, because most users consider it inadequate. Truthfully, it is by itself; that is why it is only one of the ten controls in this recommendation list. Anti-malware software should be used on an endpoint as the first line of security, and considering it comes with most major operating systems, it should be enabled, no questions asked. It should act as the watchdog on the system, and any alert it raises should cause an escalation of the affected system for further analysis. The author recommends Cylance, at the time of this writing, for Windows environments.
Once again, at the time of this writing, almost every operating system comes with a built-in firewall, much like automatic updates and anti-malware software. It should just be enabled. Again, it can be centrally managed in most environments and third-party vendors have offerings that can be centrally managed, as well. Use it. It is better than nothing.
This should be everyone’s favorite topic, because of all the great, free choices. Personally, the author recommends Google’s Chrome browser at the time of this writing. Google Chrome uses pinned certificates (to combat inauthentic certificates), each tab is a separate process, there is great extension support, and the built-in synchronization settings make everyday tasks almost operating system-agnostic. Mozilla’s Firefox is another great choice, but is relatively less secure by most counts. Internet Explorer honestly should not even be a choice in 2016. Microsoft’s Edge browser is still a little too new and has no extension support. Apple’s Safari is adequate, fairly secure, but lacks the speed and extension support of Chrome.
Hard drive encryption is available on most operating systems, if the correct edition is used. Microsoft offers BitLocker and Apple offers FileVault. A good third-party, open-source alternative with Trust-No-One (TNO) security is VeraCrypt. It is best used in situations where physical theft is a risk in the organization. It is highly recommended for portable computers such as tablets and laptops, but it can also prove useful for workstations. Encrypting external hard drives and thumb drives is also highly recommended.
After any breach or security event, logs are always one of the first ways to track down exactly what happened. The only issue is, if logs are located on the same system that was compromised, the logs can no longer be trusted. The best option is to move the logs, or copy them, to a remote system. Usually, there will be a server (preferably two) that collects the logs from all of the other systems on the network and stores them securely. Loggly has an open-source solution, for those looking to get started.
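A minimal sketch of remote log collection using Python's standard library: the endpoint forwards records over syslog/UDP, and a plain socket bound to 127.0.0.1:5514 stands in for the central log server (in production the collector would be a separate, hardened machine):

```python
import logging
import logging.handlers
import socket

# Stand-in for the remote collector:
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(("127.0.0.1", 5514))

# Endpoint side: forward log records to the collector via syslog over UDP.
logger = logging.getLogger("endpoint")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("127.0.0.1", 5514)))

logger.info("user login from 10.0.0.5")

# The collector now holds a copy an attacker on the endpoint cannot alter:
record, _ = collector.recvfrom(1024)
print(record.decode())
```

The key property is that once the datagram leaves the endpoint, compromising that endpoint no longer lets an attacker rewrite history.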
Having a good backup should be an integral part of every network. All endpoint data should be located centrally and then protected by a standalone backup appliance. If endpoint user data must be decentralized, even a Windows Backup on an external drive is better than nothing. Getting those backups to an off-site location should also be a top priority. If endpoints are joined to a domain and important files are replicated to a file server, then Datto makes a good selection of standalone backup appliances that replicate data to highly-secure, off-site redundant bi-coastal datacenters.
As an administrator of any type, if you can implement half of these recommendations, you will be doing better than most. But, even if you implement all 10 steps, this is by no means a guarantee that nothing bad will happen. These are 10 strong recommendations to help mitigate risk. If you put these controls in place it will make your life much easier and your network much safer.
There are a lot of different security certifications one can achieve in order to increase their knowledge in the cybersecurity field. Since cybersecurity is a young, still-growing field, there are a lot of new certifications still being created. One of the more recent is the Cybersecurity Practitioner (CSXP) by ISACA. It is different from many of the other exams in the security field, because it is a real-time, hands-on exam where the candidate has to prove their knowledge by dealing with different incident response scenarios in a virtual environment.
The Cybersecurity Practitioner (CSXP) exam is a vendor-neutral, performance-based certification from ISACA. It is a real-world, live, virtual lab environment designed to test the practical knowledge of a candidate. There are no multiple choice questions. The time limit for the exam is 3.5 hours. There will be few instructions and the candidate will be expected to switch between multiple virtual machines while multitasking. ISACA also offers a lab preview for demonstration purposes.
The exam will test for knowledge in domains of prevention, detection, and response in relation to a cybersecurity incident. It is focused around the role of a first responder, and is designed to demonstrate experience with firewalls, patching, anti-virus, vulnerability scans with basic analysis, and the ability to implement common security controls. It assumes knowledge of Kali Linux, Microsoft Windows Server and Workstation, as well as Ubuntu Linux. It is the second new exam in the ISACA library of certificates, immediately following the Cybersecurity Fundamentals Certificate, which is designed to demonstrate fundamental knowledge of cybersecurity.
Although training will not be required to sit for this exam, ISACA does offer their own training labs, which cost $500 per lab with six months of total access. ISACA also offers bootcamps and training at conferences, for a fee. The author could find no other training outside of ISACA at the time of this writing.
Testing locations are proctored by Prometric. Once a candidate purchases an exam voucher, the candidate is allowed to sit for the exam within the next seven months. Candidates can find testing locations at prometric.com/isaca. Exams are only available in English at the time of this writing.
ISACA provides a PDF with more information, for those serious about sitting the CSXP exam. Candidates can also view the ISACA website for more detailed information than this article provides. One can also find more information on maintaining the CSXP certification there, as well. This certification shows that a candidate has hands-on, practical experience and knows how to protect a network and resolve security-related issues. Since this certification is a real-time, practical exam it stands out from most of the other multiple choice exams in this space. It is definitely a certification worth considering.
In the security realm, two-factor authentication is often seen as a godsend. It more thoroughly proves authenticity of the person making a request, because it requires more than just something a person knows, but also something a person has or is.
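The “something a person has” is usually an authenticator app generating one-time codes with the TOTP algorithm (RFC 6238). Here is a minimal sketch using only Python's standard library; the secret is the RFC's published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret ("12345678901234567890" encoded in base32):
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at_time=59))  # 287082, matching the RFC test vector
```

Because the code depends on a shared secret and the current time, a stolen password alone is no longer enough to log in.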
We should discuss the “password problem,” to better understand the need for two-factor authentication in the first place. Passwords technically prove authenticity because they are “something only you know.” Passwords require usernames–which seems like two unknown things at first–until you realize most usernames are public knowledge which eliminates them from the “secret” part of the equation. Now, the only part of the equation that is a secret is the password. Passwords could be adequate in a perfect world, where everyone creates strong, long, complex passwords, and never writes them down, but this rarely happens. Oh yeah, they shouldn’t be memorable, either. Are you starting to see the issue with passwords now?
Most passwords are weak, simple, and short. According to a recent article by the Telegraph (a UK-based news site), the top five most common passwords are: “123456,” “password,” “12345678,” “qwerty,” and “12345.” (If any of these is your password, go change it. Now!) This represents a huge problem in the world of security: with dictionary and brute-force attacks, these passwords can be guessed in fractions of a second.
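To see just how fast a common password falls, here is a small, hypothetical sketch of a dictionary attack against a leaked password hash. The "stolen" hash and the tiny wordlist are invented for the example (real attackers use wordlists with millions of entries):

```python
# Sketch of a dictionary attack: hash each candidate word and compare it
# to a leaked, unsalted SHA-256 hash. Entirely illustrative data below.
import hashlib
import time

# The top five passwords mentioned above.
wordlist = ["123456", "password", "12345678", "qwerty", "12345"]

# Pretend this hash leaked from a breached database.
stolen_hash = hashlib.sha256("qwerty".encode()).hexdigest()

start = time.perf_counter()
cracked = next(
    (w for w in wordlist if hashlib.sha256(w.encode()).hexdigest() == stolen_hash),
    None,
)
elapsed = time.perf_counter() - start

print(f"cracked: {cracked!r} in {elapsed:.6f}s")
```

On any modern machine this finishes in well under a millisecond, which is exactly why a password on a common-passwords list offers essentially no protection.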
To increase security across the web, many service providers, especially for email, shopping, and banking, have been implementing two-factor authentication. These services typically require a higher level of authentication for their users, and for good reason.
But even with this push toward better security, most people do not know about two-factor authentication, and even those who do often run into roadblocks that keep them from enabling it on their accounts. These roadblocks are legitimate and stop people from protecting their accounts all the time. To show that these roadblocks still exist, we will walk through enabling two-factor authentication on three major service providers’ platforms, including how to create the all-important app-passwords that most client software requires once two-factor authentication is enabled for an account.
Google is “the Internet” for many people. Google allows you to use its services without an account, but having an account enables so many extra features that users want, such as email, documents, photos, personalized search, and messaging, not to mention YouTube. If a person owns an Android phone, a Google account is also required, which is a double-edged sword in and of itself: if someone gains access to a Google account through a weak password, they also gain a great deal of control over that person’s phone and entire digital life. That is why we begin with Google. It is the one account most people have that controls the largest share of their digital life.
Google already has a pretty good support page on how to generate an app-password, so I will just re-print that below. No need to reinvent the wheel.
Yahoo! is still a hold-out for many people. Often, if someone does not use Google, the odds are good they use Yahoo! for their search and email needs. Yahoo! accounts are attacked frequently because of the service’s popularity in the early days of the Internet and all of the dormant accounts it still maintains. Yahoo! also does not seem to take security as seriously as Google does, but it does have a help article on how to configure two-factor authentication, albeit a little vague.
I chose Apple as the third platform to discuss because of its sprawling digital footprint combined with its services. Many people use only Apple products, which make up a large part of their digital lives, just as we discussed with Google earlier. Apple stores contacts, emails, photos, documents, and a lot more nowadays. Enabling two-factor authentication for Apple’s services is just as important as it is for Google’s, if not more so, depending on whether you live in Apple’s walled garden.
Prerequisite: Apple’s two-factor authentication seems to require at least one iOS or OS X device.
A lot of people are unaware of the help pages and landing pages I mentioned as the first step of most of these procedures. When I first started using two-factor authentication on these services, I went in unaware of these pages as well, and it was hard to find where to change two-factor authentication settings or where to create app-passwords. Following the help articles greatly simplifies the process, but that is not the experience most people have. To get more people to enable two-factor authentication, services will have to make these features more prominent. As of this writing, they are buried compared to other security settings, like changing a password. App-passwords tend to be even more hidden, although they are used more often than resetting a password or enabling two-factor authentication in the first place. This also speaks to marketability, which we discuss below.
For a lot of people, one of the reasons they do not want to enable two-factor authentication is that they do not fully understand it. This falls back to user education. The services that offer this protection need to do a better job of informing the people who use them of how it works, why they should use it, and even the drawbacks of using two-factor authentication. For someone to make the right decision, they need to be fully informed. Then, if they choose not to enable two-factor authentication, it is not for lack of trying to persuade and educate them. Once the risk has been conveyed clearly, it transfers to the user should they choose not to enable two-factor authentication. That may sound harsh, but it is the truth.
App-passwords fall partly into the realm of misunderstanding, but they are also a problem all their own. A person who never uses client software, for example someone who only banks through a website, might never encounter an app-password at all. The main issue with app-passwords is that they have to exist in the first place. Smart software developers can mitigate this by building a two-factor authentication verification step into the way their software adds accounts, as Apple does when adding most accounts to its native Mail apps. But, for now, when using older software, app-passwords are a necessary evil.
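For illustration only, here is a sketch of what an app-password is under the hood: a long, random token the provider issues once per client application. The 16-lowercase-letter format below mimics the style Google displays; other providers use different formats, and this is an assumption for the example, not any provider's actual implementation:

```python
# Sketch of app-password generation: a random per-client token that stands
# in for your real password in software that cannot do two-factor prompts.
import secrets
import string

def make_app_password(length=16):
    """Return a random lowercase token in a Google-style 16-letter format."""
    # secrets (not random) so the token is cryptographically unpredictable.
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

pw = make_app_password()
print(pw)
```

Because the token is random and revocable, losing one app-password does not expose your real password, and the provider can invalidate it without forcing a full password reset.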
Security can be part of marketing, and when it is marketed correctly it can also raise awareness of two-factor authentication. A rising tide lifts all boats. Even if every competitor offers it, two-factor authentication can still be presented to the public as a selling point. Word of mouth is not enough when we are talking about services with hundreds of millions of users, if not billions, as in the case of Facebook. Were you aware that Facebook also offers a form of two-factor authentication? If not, do not blame yourself. Blame their marketing. You cannot enable something you are not aware of.
Two-factor authentication is something that I believe everyone should have enabled. Even though it is a “headache” for some to use, the benefits far outweigh the hurdles that have to be overcome. I also believe that smart software developers can help relieve the pain of using two-factor authentication with client software such as email. With time and innovation, combined with proper marketing, we can make accounts more secure across many different platforms. Most major providers already offer a form of two-factor authentication; you may just have to search for it. It is worth it! The most recent account on which I enabled two-factor authentication is Amazon.com. I did not know Amazon offered it until just a few months ago. It just goes to show that two-factor authentication, while burdensome and hard to find, is a necessity for the safety of our always-online, digital lifestyles.