

You can implement one of the following approaches to backing up your data online:
Relativity databases may experience a very high number of inserts and updates and can become corrupted at any time. An "active database" experiences moderate to heavy use; for this type of database, data loss would be catastrophic to the business.
For your active databases, we recommend using one of the following backup strategies.
Follow these steps to perform nightly full database backups with log backups for point-in-time recovery.
Data file:
Log file: Follow best practices for managing log files as outlined in
Note: A Relativity database log file may occasionally experience a high amount of growth. If the log files fill the log drive, those workspaces become inaccessible. If this happens on the SQL Server that contains the EDDS database, the entire environment becomes inaccessible.
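As a minimal sketch only, the T-SQL below shows what the nightly full backup and recurring log backups for this strategy could look like when scheduled through SQL Server Agent. The database name EDDS1234567, the file paths, and the schedule are placeholders, not values from this guide.

    -- Nightly full backup (placeholder database name and path)
    BACKUP DATABASE [EDDS1234567]
    TO DISK = N'F:\Backups\EDDS1234567_Full.bak'
    WITH CHECKSUM, INIT, STATS = 10;

    -- Recurring transaction log backup for point-in-time recovery,
    -- scheduled throughout the day at the interval the business requires
    BACKUP LOG [EDDS1234567]
    TO DISK = N'F:\Backups\EDDS1234567_Log.trn'
    WITH CHECKSUM, STATS = 10;

The CHECKSUM option causes SQL Server to validate page checksums as it writes the backup, which helps surface corruption early.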
Follow these steps to perform weekly full database backups with nightly differentials and log backups.
Data file:
Log file:
Relativity can write large amounts of data to the log files in a short period. If the business only requires a four-hour increment for point-in-time recovery, but the system could write enough data to the log files in four hours to fill the drives, then you must either run log backups more frequently or increase the size of the drives. Make sure you understand how to restore from log backups; document the procedure and practice it.
The upper bound on log backup frequency is set by the available I/O throughput: the log data generated between backups must be readable using only the spare sequential read capacity that production leaves free, and each log backup must finish before the next one is scheduled.
For example, if during a normal hour of production 100 GB are written to the system and 100 GB are read from disk by production systems, and only an additional 50 GB of sequential read capacity remains, the backup would need to read log data twice as fast as that spare capacity allows. Log backups may not complete before the next round is scheduled, or production performance suffers because you're operating the system at the upper limit of what it can handle.
Note: While the lower limit on log backup frequency is driven by the business's point-in-time recovery requirements, more frequent log backups may be required to prevent the log drive from becoming full.
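A hedged sketch of the weekly full, nightly differential, and recurring log backup pattern follows; the database name, paths, and schedule are placeholders. DBCC SQLPERF(LOGSPACE) is included because it reports how full each transaction log is, which helps you decide whether log backups need to run more often or the log drive needs more space.

    -- Weekly full backup (placeholder database name and path)
    BACKUP DATABASE [EDDS1234567]
    TO DISK = N'F:\Backups\EDDS1234567_Full.bak'
    WITH CHECKSUM, INIT;

    -- Nightly differential backup (captures changes since the last full backup)
    BACKUP DATABASE [EDDS1234567]
    TO DISK = N'F:\Backups\EDDS1234567_Diff.bak'
    WITH DIFFERENTIAL, CHECKSUM, INIT;

    -- Log backups between differentials, run as often as recovery needs and I/O headroom allow
    BACKUP LOG [EDDS1234567]
    TO DISK = N'F:\Backups\EDDS1234567_Log.trn'
    WITH CHECKSUM;

    -- Report the percentage of each transaction log currently in use
    DBCC SQLPERF(LOGSPACE);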
Follow these steps to perform weekly full database backups with nightly differentials and no log backups.
Log file:
In this configuration, set the recovery model of the database to SIMPLE and size the log files appropriately, as outlined in
Note: If you suspect excessive logging at any time, report it to Relativity Support to help determine the root cause.
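As a minimal sketch, assuming a placeholder workspace database named EDDS1234567, switching to the SIMPLE recovery model looks like this:

    -- Switch the database to the SIMPLE recovery model (log backups are no longer possible)
    ALTER DATABASE [EDDS1234567] SET RECOVERY SIMPLE;

    -- Confirm the current recovery model
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'EDDS1234567';

Under the SIMPLE recovery model, point-in-time recovery is not available; you can restore only to the most recent full or differential backup.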
Inactive databases also require attention. You may not routinely back up inactive databases, but a stored backup can become corrupted over time for various reasons, such as head crashes, aging, or wear in mechanical storage devices.
Data can become corrupted just sitting on a disk. This is called silent data corruption. No backup can prevent silent data corruption—you can only mitigate risk.
The consequences of silent data corruption may lie dormant for a long time. Many technologies have been implemented over the years to ensure data integrity during data transfer. Server memory uses Error Correcting Code (ECC), and Cyclic Redundancy Checks (CRCs) protect file transfers to an extent.
It's important to maintain the integrity of data at rest on disk systems that aren't accessed over a long period of time. Without a high degree of protection, data corruption can go unnoticed until it's too late.
For instance, a user attempting to access the database may receive the following error when running certain queries:
Error: 605
Severity Level: 21
Message Text: Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.
Then, while running DBCC to try to repair it, the database administrator receives this error: "System table pre-checks: Object ID 7. Could not read and latch page (1:3523) with latch type SH. Check statement terminated due to unrepairable (sic) error."
After this occurs, the database administrator checks for backups, and hopefully recovery happens quickly and inexpensively.
Sometimes the backup is also corrupt. When this happens, there is often no way to recover the missing data (for example, the database can't be repaired if entire tables have been destroyed).
When DBCC checks are not run regularly against backup files, a corruption such as the previous example may go unnoticed for weeks.
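As an illustrative sketch only (the backup path and database name below are placeholders), a recurring verification pass can combine RESTORE VERIFYONLY against the backup file with DBCC CHECKDB against the database:

    -- Verify that the backup file is readable and that its page checksums are valid
    RESTORE VERIFYONLY
    FROM DISK = N'F:\Backups\EDDS1234567_Full.bak'
    WITH CHECKSUM;

    -- Run full integrity checks against the database itself
    DBCC CHECKDB (N'EDDS1234567') WITH NO_INFOMSGS, ALL_ERRORMSGS;

RESTORE VERIFYONLY confirms the backup is complete and readable without restoring it; DBCC CHECKDB checks the logical and physical integrity of all objects in the database.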
To prevent this data corruption for business-critical databases, the following steps should occur after completing every backup:
Follow these guidelines to prevent data corruption of inactive databases:
Freeware tools, such as ExactFile, include features that make it easy to test hash values by creating a digest of a directory of files. This digest can be tested and rebuilt daily or on a sensible schedule that meets business obligations.