
UGH! Mirror backup deletes backup if source drive failed

Posted: Fri Jun 28, 2013 2:35 pm
by DahlgrenS
Yesterday I started a topic with the same subject in the Troubleshooting forum. Softland has acknowledged there that FBackup was intentionally designed to delete the backup files if the source drive is not accessible when running a Mirror backup. In other words, if the drive containing your source files fails before or during the mirror job, FBackup will delete what is likely your only remaining copy of those files. This destructive behavior is counter-intuitive for a backup product, so I'm posting this message to warn other users. This is a typical use case of FBackup, yet it is very risky. Hopefully Softland will act to make this warning more prominent, and hopefully they will reconsider their decision to delete the backup when the source drive is inaccessible.

I think it would be trivial for Softland to fix this. They could change FBackup's default behavior so it constructs a list of "obsolete" files to be deleted from the backup, and then tests whether the source drive is still accessible before assuming the list is valid.
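The safeguard proposed above could be sketched roughly as follows. This is a minimal Python sketch of the idea, not FBackup's actual code; the function name and the single-check design are my own, and it assumes a plain file-tree mirror:

```python
import os

def remove_obsolete(source_root, backup_root):
    """Delete files from the backup that no longer exist in the source,
    but only after confirming the source drive is still accessible."""
    obsolete = []
    for dirpath, _dirnames, filenames in os.walk(backup_root):
        for name in filenames:
            backup_file = os.path.join(dirpath, name)
            rel = os.path.relpath(backup_file, backup_root)
            if not os.path.exists(os.path.join(source_root, rel)):
                obsolete.append(backup_file)
    # Safety check before acting on the list: if the source root itself
    # has vanished, the "missing" files are almost certainly a drive
    # failure, not deliberate deletions.
    if not os.path.isdir(source_root):
        raise RuntimeError(
            "Source drive inaccessible; refusing to delete backup files")
    for backup_file in obsolete:
        os.remove(backup_file)
```

A stricter version would repeat the accessibility check before each individual deletion, to cover a drive that fails mid-job.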

(They may have another risky behavior to fix too, depending on how FBackup handles changed files that are supposed to overwrite their backup copies. It would be poor design to delete a backup copy at a moment when a failure of the source drive could also destroy the changed source file. When there is enough free space on the backup drive, it would be safer to first copy the source file under a temporary unique filename, and only then delete the old copy and rename the new one. I hope FBackup already works this way; I may run an experiment to check, by testing whether it loses the backup copy if the changed file becomes inaccessible during the job. To maximize the available free space on the backup drive during the job, they could be clever about the order of operations: first back up changed files in order of decreasing size, then back up new files.)
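The copy-then-rename idea could look roughly like this. Again a sketch, not FBackup's implementation; it relies on `os.replace`, which is atomic when the temporary file and the backup copy are on the same volume:

```python
import os
import shutil

def safe_overwrite(source_file, backup_file):
    """Update a backup copy without ever being left with zero good copies:
    write the new data under a temporary name first, then swap it in."""
    tmp = backup_file + ".part"
    shutil.copy2(source_file, tmp)  # if the source dies mid-copy, this fails
    os.replace(tmp, backup_file)    # old copy survives until this succeeds
```

If the copy fails partway, the old backup copy is untouched; only the `.part` file (which a cleanup pass could remove) is lost.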

Softland said FBackup deletes the backup because it assumes the source drive was intentionally removed by the user (implying the backup is obsolete). So they may want to add an option (on the Mirror tab of each job's properties) that makes this dangerous behavior available but not enabled by default. That would allow users who prefer it to enable it knowingly. But I think only a foolish user would enable it; users who want to delete obsolete backups can quickly and easily delete them manually using Windows Explorer. The option should be clearly labeled as dangerous.

I've asked Softland in the Troubleshooting forum to confirm that this destructive behavior depends on whether the "Remove excluded or deleted files from backup" option is enabled. (The option is located in the Mirror tab of each job's properties.) I ran a quick experiment using a USB flash drive as the source drive with that option disabled, and FBackup did not delete the copies when I ran the job again after removing the flash drive. Hopefully they will soon confirm that disabling the "Remove excluded or deleted files..." option will always protect the backup. That would make possible a risk-reducing, if slightly tedious, workaround for the bad design: create two mirror backup jobs that are identical except for two things: one job has the "Remove excluded or deleted files..." option enabled and the other has it disabled, and only the job with it disabled is scheduled to run automatically. Over time, the backup will accumulate obsolete copies of files deleted from the source, plus redundant copies of files moved or renamed. To delete the obsolete and redundant files (when the backup drive is getting low on free space), the unscheduled job can occasionally be run manually. The user must stay alert while running the manual job; if the source drive happens to fail during the job, it is vital to cancel quickly to minimize the number of non-obsolete, non-redundant files permanently destroyed by FBackup.

EDIT: The possible workaround that I described in the paragraph above DOES NOT WORK. I will post a new message in a moment, describing my test of it.

Best wishes.

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Fri Jun 28, 2013 2:54 pm
by Adrian (Softland)
Dear DahlgrenS,

The "Remove excluded or deleted files from backup" option is NOT enabled by default as you said. Please note that we know the importance of such an option, which can cause problems if used without knowledge. The Help file also explains the role of that option.

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Fri Jun 28, 2013 4:18 pm
by DahlgrenS
I didn't say the "Remove excluded or deleted files..." option is enabled by default. I said that if a new option, "Delete backup if source drive is unavailable," were added to FBackup, then that new option should not be enabled by default.

Where in the Help file does it explain the risk I've warned about? Here is what the Help file says: "Use this option to remove from the backup the files that were deleted (or excluded) from the backup sources. When you select this option (unchecked by default), you'll get a warning message that the deleted files will be removed from the backup too - press Yes if you want to continue."

There is a huge difference between "excluded and deleted" files and files that are inaccessible because the source drive is inaccessible. The Help file neglects to mention that inaccessible files will also be deleted from the backup drive. Nowhere, and at no time, is the user warned that inaccessible files will be deleted too.

The typical use case of mirroring is to make an exact copy of the source (which the Help file acknowledges). To make an exact copy, it is essential to enable the removal of deleted files. So, even though the option is not enabled by default it is likely that many users have enabled it without knowing the risk. It is not normal for software to delete a mirror when the source has failed, and it would be easy for FBackup to distinguish between "excluded and deleted" files and inaccessible files, so users are unlikely to be aware that the option places all copies of all their files at risk.

EDIT: Oh, I shouldn't have written above that ALL of the users' files are at risk. Files on the Windows system drive, usually drive C:, might not be at risk because FBackup would not be launched by the scheduler if the system drive fails before launch, and FBackup is likely to stop working if the system drive fails while the backup job is running. If FBackup isn't launched or stops working, it won't be able to do any harm. It's the files on non-system drives, and the mirror copies of those drives, that are at risk.

Regards.

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Fri Jun 28, 2013 11:36 pm
by Green
Softland wrote in the other, original forum, 'No other client complained about that behavior until now.' Well, I am complaining. Softland's design seems grossly defective: the product seems hazardous. Softland is defending its decision to undo the backup precisely when the source has failed or disappeared. Softland should gratefully and immediately implement DahlgrenS's free consulting work, which benefits Softland and all potential users, and should drop the dishonest public-relations effort in favor of useful work on the software.

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Sat Jun 29, 2013 2:58 pm
by DahlgrenS
I tested two more of Softland's claims about the "Remove excluded or deleted files..." option. I agree with them that it's disabled by default when I create a new mirror backup job. However, when I enable it, FBackup does NOT ask me if I am certain (nor display any warning).

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Sun Jun 30, 2013 2:27 pm
by DahlgrenS
As noted above where I edited an earlier post, the workaround I had proposed there has a problem. (The proposed workaround was to create a "twin" mirror job, identical except that it enables "Remove excluded or deleted files from backup," and to run this risky job only occasionally and manually in order to remove accumulated obsolete files, while watching alertly so the job can be cancelled if a drive fails during it.) The problem is that the two jobs would use two separate catalog files, which causes unnecessary overwriting of the backup, and probably other nasty effects too.

A better, more straightforward workaround is to use a single job with the "Remove..." option normally disabled. Obsolete excluded and deleted files will accumulate in the backup over time (so the backup will not be a true mirror). Occasionally, such as when the backup drive gets low on free space, remove the accumulated obsolete files by temporarily enabling the option and running the job while watching alertly. Don't forget to disable the option again afterward.

By the way, I've discovered two more bugs in FBackup:

Bug #2: When the "Remove excluded or deleted files..." option is enabled, FBackup fails to remove excluded or deleted files unless there happens to be at least one new file to add to the backup. This means users can't rely on FBackup to produce a mirror that's an exact copy of the source.

Bug #3: If the user or a virus deletes or modifies files stored in the backup, FBackup will not restore them when mirroring. I assume this is because FBackup relies entirely on its catalog file, comparing the source against the catalog rather than against the attributes of the files actually in the backup to determine which files need to be copied. I discovered this while running a test job after manually deleting the backup: I added a new file to the source, ran the job, and when the job finished the backup was empty except for that one new file. All the other source files needed to be re-copied because they'd been (manually) removed from the backup, but FBackup didn't notice this was needed.

I don't understand why FBackup uses catalog files at all. What does FBackup store in the catalogs that provides a performance advantage? Or to put it another way, why not determine which source files need to be backed up by directly comparing the attributes of the source files with the attributes of the backed-up files, as many other backup and sync products do? FBackup can be finicky about catalogs. For example, I once created a job to mirror from three source drives, replacing three jobs that each mirrored from a single source drive, and FBackup then wasted a whole day needlessly overwriting a perfectly up-to-date set of backups. Another example: if you use Windows to change the drive letter of a source drive, say from X: to Y:, and modify the job to use Y: as the source, FBackup will delete the catalog, create a new one, and waste an enormous amount of time needlessly overwriting a perfectly good backup. Comparing file attributes directly is not a time-consuming operation, so why maintain a catalog?

(I labeled these bugs #2 and #3 because I consider bug #1 to be the destruction of the backup when the source drive has failed, as described in the first post in this forum topic. Even though Softland said this is by design, in my opinion it deserves to be called a major bug.)

I'm going to research other software for mirroring. Perhaps rsync or Microsoft's SyncToy. (The "Echo" mode of SyncToy.)

EDIT: SyncToy "echo" looks pretty good. Microsoft says it is careful not to lose files; by default it moves removed files to the Recycle Bin so they may be recovered. Hopefully it won't remove files due to inaccessibility of the source; I intend to test it when I find time.

EDIT #2: Microsoft SyncToy no longer looks good. It can't copy files that are open in other applications, and it can't copy files with long filenames (or long filepaths).

However, I found software named FreeFileSync, an open source project at SourceForge, which doesn't have these problems and has excellent reviews. I tested FreeFileSync a little today by scheduling a job that mirrors from three volumes on three separate hard drives to three folders on a fourth hard drive. I noticed that it scans the backup drive as well as the source drives to calculate which files need to be copied or deleted, instead of relying on the accuracy of a catalog file. When it deletes, by default it moves the files to the Recycle Bin for safety. When it overwrites, for safety it first copies to a unique filename and only if that is successfully written does it delete and rename. Its feature list mentions detecting moves and renames, but that didn't work as expected: after I moved a file on a source drive, FreeFileSync deleted and re-copied (the same way FBackup does) instead of detecting that it could simply (and quickly) move the copy already in the backup.

Three minor complaints so far: (1) Some options (such as whether to verify backup files after writing them) are global when it would be simple to make them properties of each job. (2) Some global options (including the one I just mentioned) can only be set by manually editing an .xml configuration file. (3) Its user interface is almost entirely GUI- and mouse-dependent; very little can be done using the keyboard, which means it doesn't meet modern accessibility standards. Nevertheless, unless I discover it's unreliable or risky, I plan to use it for my nightly mirroring.

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Mon Jul 01, 2013 7:46 am
by Adrian (Softland)
Hi,

1. We will put your feature request on our Wishlist: an option that avoids removing missing source files from the destination when the source drive is not connected.
2. The backup does not start if there aren't new or modified files/folders to be backed up.
3. It is considered that the files in the destination should not be deleted or altered in any way.

FBackup uses the backup catalog for many reasons. Comparing the source directly with the files in the destination when you have 10,000,000 files and only 50 of them modified each time would take a lot of extra time. Restoring all the files (or a selection) to their original locations would not be possible without the catalog. Without the catalog file, which files should FBackup restore? It would be a simple file-copying application.

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Mon Jul 01, 2013 9:10 am
by DahlgrenS
Thank you for adding this to the Wishlist. I hope the way you implement it will also protect the backup from failures of the source drive that occur after the mirror job has started. In other words, don't simply check for drive availability at the beginning of the job; check again for every file that may be removed.

EDIT: Until your users have that protective "feature," please find a way to warn as many users as possible that FBackup will destroy their backup precisely when they need it.

When you say the backup does not start if there are no new or modified files to back up, you have acknowledged bug #2: FBackup fails to remove excluded or deleted files unless there exists at least one new or modified file. Why not fix this?

Regarding the catalog files... When you say it would take much longer to examine the attributes (timestamps, etc.) of millions of files directly than via a catalog file, I do not believe the extra time would be significant compared to reading and writing the gigabytes of content that need to be backed up. The attributes (filename, timestamp, size, location on drive, etc.) of the files in a folder are stored together, so the drives' built-in read-ahead and caching algorithms, plus Windows' additional disk caching, can read many files' attributes quickly. Windows ChkDsk runs pretty fast on my drives, and it must read similar information about each file. And FBackup must already be reading attributes for all the source files anyway, which isn't taking long. I'd be interested in seeing benchmark results with and without catalogs, to see whether the time savings outweigh the undesirable side effects (hours spent overwriting perfectly good backups).
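The catalog-free comparison described above can be sketched in a few lines. This is my own illustrative Python, not how FBackup or any particular product works; it treats a file as changed when its size or modification time differs (with a small tolerance, since FAT volumes store timestamps coarsely):

```python
import os

def files_needing_copy(source_root, backup_root):
    """Catalog-free change detection: compare the size and modification
    time of each source file against its counterpart in the backup."""
    to_copy = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(backup_root, rel)
            try:
                d = os.stat(dst)
            except FileNotFoundError:
                to_copy.append(rel)  # new file: no backup copy yet
                continue
            s = os.stat(src)
            # 2-second tolerance covers FAT's coarse timestamp resolution.
            if s.st_size != d.st_size or abs(s.st_mtime - d.st_mtime) > 2:
                to_copy.append(rel)
    return to_copy
```

The scan touches only directory metadata, never file contents, which is why the poster expects it to be cheap relative to actually copying the changed data.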

I don't understand what you meant about an advantage of the catalog for restoring. What does its information offer during restoring that can't be calculated on the fly by directly comparing the attributes of the files, the same way Windows does during a copy operation when a file in the destination already has the same filename?

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Mon Jul 01, 2013 11:22 am
by Adrian (Softland)
Hi,

About your bug #2 report: given that question, I am not sure you understand the way FBackup works. It does not fail to remove excluded or deleted files when there are no new or modified files; it simply does not try to remove them as long as there is nothing to be backed up. If you press the Test button for a mirror backup job, you will see that each backup version has some new or modified files/folders backed up. There cannot be a backup version with only files removed. I hope that is clear enough.
As a suggestion: if the removed file is under a source folder, deleting the file changes the modified date of that folder, and that is enough for FBackup to run the backup. Add the source files together with their parent folder to the backup sources to make sure they are removed from the destination even if there is no other new/modified file.

As for the backup catalog question: if there were no backup catalog, and thus no organized record of the files backed up, how would FBackup know which files to restore and to which location?

Re: UGH! Mirror backup deletes backup if source drive failed

Posted: Thu Jul 04, 2013 10:06 pm
by DahlgrenS
Regarding bug #2, this appears to be another case where Softland considers it to be "by design" rather than a bug. But none of the documentation says deleted files will be removed from the backup only when at least one source file has been added or changed, and it's contrary to users' expectation that a mirror backup job will maintain an exact copy of the source, so it's reasonable to call this a bug. I feel pretty sure that Softland could fix this with a little tweaking of their design, if they choose to do so.

Softland's suggestion that the failure to remove deleted files applies only to files in the root, not files in a folder, is a good tip. In my test that revealed FBackup can fail to remove deleted files, those files were indeed in the root directory. I don't have time to test the tip, unfortunately.

Regarding Softland's question of how it could be possible to restore if there were no catalog... The file locations for restoring would be relative to the source path and the backup path contained in the mirror job's properties. (If the job properties have been lost, simply ask the user for the two paths.) Given those two paths, files in the backup that are missing from the source drive can be identified and copied to the source drive. Isn't restoring from a mirror backup equivalent to mirroring in the reverse direction? (Note that when a mirror backup job is run for the first time, no catalog has been constructed yet, but FBackup is able to determine the locations using the paths in the job properties.)
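The reverse-direction restore described above can be sketched as follows. This is a hypothetical illustration of the poster's argument (the function name is mine), not a claim about how any product implements restore; given only the two paths from the job properties, files present in the backup but missing from the source are copied back:

```python
import os
import shutil

def restore_missing(backup_root, source_root):
    """Restore as mirroring in reverse: copy any backup file whose
    counterpart is missing from the source, preserving relative paths."""
    restored = []
    for dirpath, _dirnames, filenames in os.walk(backup_root):
        for name in filenames:
            bak = os.path.join(dirpath, name)
            rel = os.path.relpath(bak, backup_root)
            dst = os.path.join(source_root, rel)
            if not os.path.exists(dst):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(bak, dst)  # copy2 preserves timestamps
                restored.append(rel)
    return restored
```

Nothing here needs a catalog: the relative path of each backup file determines where it belongs under the source root, exactly as during the first run of a mirror job.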

It would help explain the usefulness or necessity of catalogs if Softland would indicate which information (filenames? paths? timestamps? hashes?) is stored in the catalogs and when the info is used.

FBackup stores some .fkc files in the backup folder, and also in AppData\Roaming\Softland. I assume the .fkc files in the backup are copies intended to protect against the possible loss of the primary catalog files. However, there are some .fkc files in my backup folder that do not correspond to any .fkc files in AppData\Roaming\Softland, which suggests FBackup has another bug: it fails to delete obsolete .fkc catalog copies from the backup when it deletes them from AppData\Roaming\Softland.