IT departments looking to save time and money shouldn't do so at the expense of data protection. A study from the University of Texas found that 43% of companies suffering catastrophic data loss close and never reopen, and 51% close within two years.
Ignoring Hardware Failures
Hardware failure is the leading cause of data loss. While most IT professionals don't disregard failing hardware when backing up company data and systems, many do ignore the fact that certain backup media have high failure rates, such as tape, or a SAN or NAS device used as both the source and the target of a backup. To reduce the risk, move data from primary storage to a separate, secondary storage device. Disk-to-disk backup is the best approach: it is more reliable than tape and still provides a physically separate secondary copy that can survive hardware and system failures.
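The copy-then-verify idea behind disk-to-disk backup can be sketched in a few lines. This is a minimal illustration, not a production tool; the function names and the SHA-256 verification step are assumptions of the sketch, and `secondary_root` stands in for a mount point on a physically separate device.

```python
# Hypothetical sketch: copy a file from primary storage to a separate
# secondary device, then verify the copy with a checksum before trusting it.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_file(source: Path, secondary_root: Path) -> Path:
    """Copy `source` onto the secondary device and verify it byte-for-byte."""
    target = secondary_root / source.name
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, target)
    if sha256_of(source) != sha256_of(target):
        raise IOError(f"verification failed for {target}")
    return target
```

The checksum comparison is what distinguishes a backup you can rely on from a copy you merely hope succeeded.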
Trusting Co-workers to Follow Policies
The reality is that employees aren't always great at following company policies, and even when they are, mistakes still happen. The best defenses against human error are automation and retention. Automation ensures that established policies and procedures execute and are enforced consistently, while retention makes data recoverable whether the loss is noticed right away or weeks later.
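The retention side of this can be sketched as a simple policy over timestamped snapshots. The parameters here (seven daily snapshots, four weekly ones) are illustrative defaults, not figures from the text; the point is that older snapshots survive long enough for a loss noticed weeks later to still be recoverable.

```python
# Hedged sketch of a retention policy: keep the most recent daily
# snapshots plus the newest snapshot from each of the last few weeks.
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, keep_daily=7, keep_weekly=4):
    """Return the subset of snapshot dates the policy retains."""
    ordered = sorted(set(snapshot_dates), reverse=True)
    keep = set(ordered[:keep_daily])       # most recent daily snapshots
    weeks_seen = set()
    for d in ordered:
        week = d.isocalendar()[:2]         # (ISO year, ISO week number)
        if week not in weeks_seen:
            weeks_seen.add(week)
            if len(weeks_seen) <= keep_weekly:
                keep.add(d)                # newest snapshot of each week
    return sorted(keep)
```

An automated job would run this after every backup and prune everything the policy does not return, so enforcement never depends on someone remembering to do it.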
Relying on Traditional Defenses
By now, most companies have at least basic security solutions, such as firewalls and anti-virus software, in place to defend against malware. But cybercriminals are becoming increasingly adept at breaking through traditional cyber defenses. IT professionals should evaluate their infrastructure, identify areas of vulnerability and implement advanced security solutions to close them. These solutions include web monitoring software for safe Internet usage, endpoint protection for bring-your-own-device management and a sandbox to fight targeted attacks. From a backup perspective, the best approach is to operate backup and disaster-recovery solutions on a non-Windows operating system. Windows has long been one of cybercriminals' favorite targets, and running protection software on an operating system that is relentlessly under attack just doesn't make sense.
Playing the Odds
Despite data-loss horror stories, many companies still don't have disaster-recovery plans in place to protect information from natural and man-made disasters. And many of the companies that do have a plan rely on a single, general set of guidelines for every disaster situation. A strong plan focuses on people, infrastructure and processes, and clearly outlines how each is affected in different disaster scenarios.
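One way to keep scenario-specific plans from collapsing back into a single generic document is to structure them per scenario and per focus area. The sketch below is purely illustrative: the scenario names, checklist steps, and helper are assumptions, showing only the people/infrastructure/processes breakdown the text describes.

```python
# Hypothetical per-scenario plan structure: each disaster gets its own
# checklist for people, infrastructure, and processes.
DISASTER_PLANS = {
    "power_outage": {
        "people": ["notify on-call engineer", "activate phone tree"],
        "infrastructure": ["fail over to generator", "verify UPS runtime"],
        "processes": ["pause batch jobs", "log incident start time"],
    },
    "ransomware": {
        "people": ["isolate affected users", "engage incident-response team"],
        "infrastructure": ["disconnect infected hosts", "restore from clean backup"],
        "processes": ["preserve forensic evidence", "notify legal and compliance"],
    },
}

def checklist(scenario: str, area: str) -> list:
    """Look up the checklist for one scenario and one focus area."""
    return DISASTER_PLANS.get(scenario, {}).get(area, [])
```

A lookup that returns an empty list for an unplanned scenario also makes gaps in coverage easy to audit.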
Failing to Test Disaster-Recovery Plans
Failing to test disaster-recovery plans, or testing them infrequently, greatly increases the risk of data loss when disaster strikes. Because IT infrastructure evolves daily, thorough testing should follow a consistent schedule so that it becomes just another standard business practice.