I don’t want you to be a victim of cyber crime and have to pay a ransom.
In theory, backing up your critical systems and having clear rebuild procedures for everything else should make you cyber-resilient. However, in the last 3 years I’ve seen a dramatic shift in the risk profile of businesses. Gone are the days when a hard drive crash or a natural disaster was the top threat; today, ransomware and hacking are the most common.
Therefore, the backup practices of yesterday no longer provide the protection needed in 2020.
Let’s look at the top 5 mistakes that organizations make, and the remedies for each, to bring your backup strategy up to the level required today.
Table of contents
Inadequate protection for backups against hacking events
- Mistake 1 – Not having an offline backup
- Mistake 2 – Integrated authentication to cloud backup storage
Inadequate backup coverage
- Mistake 3 – Data stored on desktops or laptops
Inadequate recovery
- Mistake 4 – Your only backup is in the cloud and takes “forever” to download
- Mistake 5 – You don’t do regular test recoveries
Inadequate protection for backups against hacking events
Hacking and ransomware are the latest cancer affecting businesses, government and non-profit organizations worldwide.
There are two ways in which your data could be held to ransom:
- The automated scattergun approach – where ransomware spreads onto your network (without manual hacker intervention) via phishing, malicious downloads, malvertising, worms, etc.
- The post-compromise ransomware attack – where a hacker exploits a vulnerability in your firewall, RDP service, operating system, etc. and penetrates your network. From there, the hacker deletes every backup he or she can find, and installs ransomware.
The problem occurs when your backups are destroyed as part of the attack – leaving you with nothing to recover from.
There are certainly effective mitigations against automated ransomware, such as BackupAssist’s CryptoSafeGuard. However, the manual hack is much harder to mitigate – after all, if a hacker obtains administrator access to your servers, a lot of damage can be done.
When designing a cyber-resilience strategy, we have to assume that in the worst case, a hacker will be able to delete all online backups. This leads me to the first two mistakes that people make.
Mistake 1 – Not having an offline backup
It can be tempting to fully automate your backup system by backing up to a NAS or SAN – whether that NAS/SAN is onsite or offsite. For example, backing up to a NAS located in a different building is going to be effective against accidental user deletions or theft of your server, but if it’s accessible to your backup software, it’s probably accessible to a hacker.
The only 100% guaranteed way to protect against this kind of hacking is to have an offline backup. A hacker cannot destroy a backup if it is offline, sitting on a shelf or in a safe!
Side note: many people confuse offline with offsite – which I discuss in detail in my article, Offline and Offsite backups – the differences and why you need both.
The simplest offline backup is a backup to external hard drives, which can then be disconnected from the computer or network and placed somewhere physically safe. You can also back up to RDX or tape to achieve the same thing. The simple act of a human disconnecting a cable can be the difference between paying a $250,000 ransom and recovering from backup.
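To illustrate, here’s a minimal sketch of such a backup job in Python, assuming Windows Server’s built-in wbadmin tool and a hypothetical two-disk rotation mapped to drive letters E: and F: (adjust all of this to your own setup):

```python
import subprocess
from pathlib import Path

# Hypothetical two-disk rotation: whichever disk is plugged in today
# receives the backup; the other sits offline on a shelf or in a safe.
ROTATION_DRIVES = ["E:", "F:"]

def find_attached_drive():
    """Return the first rotation drive that is currently connected."""
    for drive in ROTATION_DRIVES:
        if Path(drive + "\\").exists():
            return drive
    return None

drive = find_attached_drive()
if drive is None:
    raise SystemExit("No rotation drive attached - backup skipped!")

# wbadmin is Windows Server's built-in backup CLI; -allCritical captures
# every volume needed for a bare-metal recovery.
subprocess.run(
    ["wbadmin", "start", "backup",
     f"-backupTarget:{drive}", "-allCritical", "-quiet"],
    check=True,
)
print(f"Bare-metal backup written to {drive} - now disconnect and rotate it.")
```

Note the deliberate gap in the automation: the script writes the backup, but a human still has to disconnect the drive and put it on the shelf. That manual step is the whole point.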
Remedy:
- Best option: perform a bare-metal backup to hard drives or RDX daily, and disconnect and rotate the drives.
- Okay option: perform a bare-metal backup to hard drives weekly, and disconnect and rotate the drives. Perform daily incremental backups of files and data to a cloud location with good access controls (avoiding Mistake #2).
Plug: you can use BackupAssist Classic to implement a Windows Server backup system as described here.
Mistake 2 – Integrated authentication to cloud backup storage
Can a backup to the cloud be considered a truly “offline” backup? From a cyber-resilience standpoint, the answer is “no”. If a hacker has made it this far onto your network, he or she probably has some serious skills… such as the ability to connect to your cloud backup storage services and delete your backups.
However, one basic mistake can make it even easier for the hacker, rolling out the red carpet and saying “attack me”!
I have it on good authority that one of the big ransomware attacks on a major U.S. city was made possible because all of the city’s backups were stored in one of the major cloud vendors’ storage systems. For convenience, the I.T. administrators had integrated the local Active Directory with the cloud provider’s authentication system to provide seamless sign-on. The unforeseen downside was that when the hacker gained administrator access on the local Active Directory, that automatically granted administrator access to the cloud resources – including the cloud backups! All the backups were deleted before ransomware was installed on the on-premises systems. That mistake cost hundreds of thousands of dollars and forced payment of the ransom.
So does that example mean that all backups to the cloud are ineffective?
This is a gray area. Any online resource is potentially at risk.
In my view, backups done to the cloud still have a valuable role to play in cyber-resilience. Importantly, they should be viewed as a secondary level of protection, not as a replacement for a true offline backup. I’ll explore this in a future article, The role of backups to the cloud in modern cyber-resilience, but for now, let’s focus on this mistake and its remedy.
Remedy:
- Ensure that cloud backups go to cloud storage whose authentication is completely disjoint from your main network – dedicated credentials that a compromised Active Directory cannot unlock (see the sketch below).
- Never rely solely on cloud backups. You always want multiple recovery options. Refer to the remedy for Mistake #1.
- Also remember Mistake #4 – coming up later.
Plug: You can use the BackupAssist Classic Cloud Offsite Add-on to perform cloud backups as described here, to a location that is disjoint from your main network.
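To make “disjoint” concrete, here’s a minimal sketch in Python using boto3 against an S3-compatible bucket. The bucket name and keys are placeholders; the point is that the credentials belong to a dedicated backup-only account with no link to your Active Directory:

```python
import boto3

# Credentials for a dedicated backup-only account, stored outside
# Active Directory. Compromising your domain admin should NOT
# compromise these. (Placeholder values - never hardcode real keys.)
s3 = boto3.client(
    "s3",
    aws_access_key_id="BACKUP-ONLY-ACCESS-KEY",
    aws_secret_access_key="BACKUP-ONLY-SECRET",
)

# Upload today's backup archive. If S3 Object Lock is enabled on the
# bucket, even these credentials cannot delete or overwrite the object
# until the retention period expires.
with open("backup-2020-02-01.zip", "rb") as f:
    s3.put_object(
        Bucket="example-backup-bucket",  # placeholder bucket name
        Key="server01/backup-2020-02-01.zip",
        Body=f,
    )
```

A write-capable but delete-restricted account, combined with object immutability, means a hacker who steals the backup credentials can at worst add junk – not destroy your history.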
Inadequate backup coverage
It’s self-evident that in a destructive cyber attack, anything that’s not backed up should be considered destroyed and irretrievable.
Most I.T. administrators focus on the server infrastructure, and bare-metal backups of those servers generally provide adequate protection for the infrastructure itself, plus the applications and data running on it.
However, in a hybrid-cloud environment, a sometimes-forgotten set of data is the user data stored on desktops and laptops. This leads us to Mistake 3.
Mistake 3 – Data stored on desktops or laptops
When hackers compromise a network, it is a relatively straightforward task to push ransomware out to every desktop and laptop connected to that network. Although best practice for decades has been to store files on a file server (which is backed up), users are human… and humans frequently ignore instructions and save documents to their local computer. So even if your servers can be recovered after a ransomware attack, the data stored on user workstations may be irrecoverable – and force you to pay the ransom.
Remedy:
- Ensure that roaming user profiles are set up in Active Directory, so that all user data is saved back to the server, and that the server is backed up.
- Alternatively, set up OneDrive for Business sync to automatically sync locally stored files to the cloud, and back up the OneDrive for Business accounts from the cloud to an alternate location (a quick way to audit this is sketched below).
Plug: you can use BackupAssist 365 to back up OneDrive for Business, SharePoint documents and mailboxes as described here.
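As a quick sanity check that user files really are landing in OneDrive (and are therefore covered by your cloud backup), you can list a user’s drive via the Microsoft Graph API. A minimal sketch in Python, assuming an Azure AD app registration with application-level Files.Read.All permission – all IDs below are placeholders:

```python
import msal
import requests

# Placeholder app registration details - use your own tenant's values.
app = msal.ConfidentialClientApplication(
    client_id="YOUR-APP-ID",
    authority="https://login.microsoftonline.com/YOUR-TENANT-ID",
    client_credential="YOUR-CLIENT-SECRET",
)
token = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

# List the top-level items in one user's OneDrive for Business.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/users/jane@example.com/drive/root/children",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item["name"], item.get("size", 0), "bytes")
```

If a user’s drive comes back nearly empty, that’s a red flag that their documents are living only on the local disk.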
Inadequate recovery
The final category of mistakes concerns inadequate recovery. By now, if you’ve avoided Mistakes #1 to #3, your backups haven’t been destroyed, and they contain all the data and systems you need to protect.
However, the final part of resilience is the recovery itself. Let’s examine two more common mistakes.
Mistake 4 – Your only backup is in the cloud and takes forever to download
I see this especially in the small business sector, where budgets are tight, and organizations try to get away with the minimum spend. However, others can also make this mistake when trying to fully automate their backup system.
Storing your backups in the cloud can be very convenient, and eliminates the need for human intervention. However, if cloud backups are your only backups, you must ask – how long will it take to download my data back again?
It can be tempting to assume the “best case” scenario – that the download speed is limited only by your bandwidth. If you’re lucky enough to have a gigabit internet connection, you could (theoretically) download 1TB of data in about 2 hours 13 minutes. No problem, right?
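That “best case” arithmetic is worth spelling out, because it frames how much worse the realistic case can be. A quick back-of-envelope calculation in Python:

```python
# Best-case download time: data size divided by raw link speed.
# Real-world times will be worse (see the two factors below).

def best_case_hours(data_tb, link_mbps):
    """Hours to download data_tb terabytes over a link_mbps connection."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_mbps * 1e6)   # bits / (bits per second)
    return seconds / 3600

for link in (1000, 100, 20):  # gigabit, fast office, and modest links
    print(f"{link:>5} Mbps: {best_case_hours(1, link):6.1f} hours for 1 TB")

# Prints roughly: 2.2 hours at 1000 Mbps, 22.2 hours at 100 Mbps,
# and 111.1 hours (over 4.5 days!) at 20 Mbps - before any throttling.
```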
The mistake is assuming that the cloud is infinitely fast. Unfortunately, it is not. The bottleneck is probably not the speed of your internet connection. Instead, it’s probably a combination of two factors:
- Delays and limitations at the cloud service – generally caused by contention for resources between multiple clients. Your cloud service will have thousands of other customers, each wanting a piece of its bandwidth. Microsoft, for instance, monitors the health of its servers and throttles bandwidth if a server becomes overloaded. Someone else’s heavy usage could slow things down for you – which makes it impossible to predict how fast you’ll get your data back!
- The speed and size of the pipe between your cloud service and you – an often-overlooked fact is that many online backup providers choose to use low-cost storage. Unless your contract explicitly states where your data is stored, it might sit in far-away places. The connection between the USA and Eastern Europe, for example, may be so slow that you cannot download all your data within a reasonable time.
Remedy:
- Do not rely on your cloud copy for a speedy bare-metal recovery. It is far better to do a bare-metal recovery from a local backup (created as part of the remedy for Mistake #1), even if that backup is a week old. Then do an incremental recovery from the cloud to bring your data up to date from your latest cloud backup.
- Plan on fully downloading your cloud backup only in extreme circumstances – such as the unfortunate situations in Australia, where large-scale fires destroyed millions of hectares of land and thousands of buildings.
Plug: You can use BackupAssist Classic’s system image and cloud-offsite backup features for fast bare-metal recovery and efficient incremental cloud recovery as described here.
Mistake 5 – You don’t do regular test recoveries
The final mistake I’ll cover is perhaps the most common – not doing regular test recoveries.
Granted, test recoveries take time, and just like practice fire drills, no one wants to do them. But there’s no way to find the unwanted surprises until you run through the procedure. A recovery situation is always stressful – many I.T. administrators will be working through the night, sleep-deprived and under pressure from management – and it’s difficult to think clearly in those conditions.
On top of this, you might be battling unexpected recovery problems like missing RAID drivers, incompatibilities between different types of hardware, faulty Active Directory syncs, and so on.
Remember: the best time to learn how to use a parachute is before you jump out of the plane. It’s best to iron out all the creases and prepare ahead of time.
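A scripted verification step also makes test recoveries repeatable rather than ad hoc. As an illustration only (the manifest format and paths are hypothetical), here’s a Python sketch that compares restored files against checksums recorded at backup time:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large files don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical manifest written at backup time: {relative_path: sha256}
manifest = json.loads(Path("backup-manifest.json").read_text())
restore_root = Path(r"D:\test-restore")  # hypothetical restore target

failures = 0
for rel_path, expected in manifest.items():
    restored = restore_root / rel_path
    if not restored.exists():
        print(f"MISSING: {rel_path}")
        failures += 1
    elif sha256_of(restored) != expected:
        print(f"CORRUPT: {rel_path}")
        failures += 1

print(f"Test recovery check: {len(manifest)} files, {failures} problems.")
```

A clean run proves not just that the restore completed, but that the restored data is byte-for-byte intact.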
Remedy:
- Perform a test recovery at least once every 6 months.
- Follow a well-documented set of procedures, such as those in the BackupAssist Recovery Bible, which contains walk-throughs for 20 of the most common recovery scenarios on Windows platforms.
Plug: The BackupAssist Recovery Bible contains not just the walk-through procedures, but a handy flow chart to follow in a recovery situation.
Conclusion
So there you have it – this is the best advice I can give after talking to countless MSPs, security experts, forensic investigators and data recovery specialists.
If you avoid these 5 mistakes, I’m confident you’ll be cyber-resilient, and able to recover in your time of need.
I wish you godspeed.