

When a workspace has a large number of users, you must scale the environment to handle the large number of concurrent connections to the database.
While SQL Server is very sensitive to the size of the data tables it hosts, web servers are very sensitive to the number of users. The primary method of scaling Relativity web servers is to add more web servers.
The web server is the first line of defense against system saturation. A surge in review often causes a need for additional web resources before any other server begins to feel pressure. While web servers are usually the first to show signs of stress due to a large number of users, a large user load could also strain SQL.
Relativity enhances scalability through the use of application pools. By creating purpose-driven pools, you can isolate services that become resource intensive and redistribute them to other servers. The three main application pools are listed below, along with the user activities that place load on the server:
You can offload the following functions to separate servers. Ultimately, the best approach is to add more web servers and to implement a proper load balancing strategy. Relativity comes with the built-in Relativity User Load Balancer, but you must configure it. If a particular workspace has a latency issue, set up a separate web server to act as a download handler. You can easily configure this at the workspace level.
When under load, Relativity web servers respond well to scaling either up or out. In other words, either increase the RAM and CPU allocation of the existing web servers, or add more web servers. Load balancing is also a great way to maximize computing resources, but you must take care in the areas described below.
Load balancing applies to the following categories:
You can successfully implement physical devices to load balance web servers. It's important that the physical device doesn't interfere with, rewrite, or otherwise alter the actual web calls made by the Relativity.distributed service.
Physical devices must use IP affinity. Because sessions are pinned, users can't migrate to less loaded servers after logging on to a server. Note that Relativity doesn't natively support physical load balancers. While we can provide advice on Relativity and authentication, you should contact the appliance vendor regarding any problems with a physical device.
Microsoft Network Load Balancing (NLB) works best when you distribute the user base equally across multiple external networks. Due to Relativity’s persistent session requirement, if too many users access Relativity from one network, the load distribution may become lopsided. This means one web server bears too great a load. For this reason, we recommend using Relativity User Load Balancing (RULB) in tandem with NLB.
Relativity User Load Balancing (RULB) is a core Relativity technology that ensures the number of users on any given web server doesn't exceed the number of users on any other web server. Once a user is on a server, they stay there until they log off and re-access the system. When you implement RULB with SSL, RULB requires a secure certificate for each server in the farm.
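Conceptually, this behaves like a sticky, least-loaded assignment policy: each new logon goes to the web server with the fewest active users, and the user remains pinned to that server until logoff. The following Python sketch is purely illustrative of that idea; it is not Relativity code, and the class and method names are hypothetical.

```python
# Conceptual sketch of sticky, least-loaded user balancing (not Relativity code).
class UserLoadBalancer:
    def __init__(self, servers):
        # Track which users are active on each web server.
        self.active_users = {server: set() for server in servers}

    def assign(self, user_id):
        """Return the server for this logon; sticky until the user logs off."""
        for server, users in self.active_users.items():
            if user_id in users:
                return server  # already logged on; keep the user on the same server
        # New logon: pick the server with the fewest active users.
        target = min(self.active_users, key=lambda s: len(self.active_users[s]))
        self.active_users[target].add(user_id)
        return target

    def log_off(self, user_id):
        """Release the user so their next logon can be balanced again."""
        for users in self.active_users.values():
            users.discard(user_id)


balancer = UserLoadBalancer(["web01", "web02", "web03"])
print(balancer.assign("reviewer_a"))  # web01
print(balancer.assign("reviewer_b"))  # web02
print(balancer.assign("reviewer_a"))  # web01 again -- sticky session
```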
The amount of bandwidth consumed depends heavily on both the number of reviewers and the number of documents being reviewed. Understand the following conditions when assessing bandwidth:
Reviews are slower when reviewers examine documents for longer periods of time. A slower review can tolerate a slower connection because the next document downloads in the background while the current one is being reviewed. This is true only if you enable the Native Viewer Cache Ahead setting for each user.
Sometimes, due to bandwidth considerations and the potential lack of high-speed internet for the reviewers, you can deploy a Citrix/Terminal Services (TS) server farm. You can virtualize Citrix and TS servers, but be sure to avoid over-commitment.
Carefully monitor the number of users on a single Citrix server. Too many users on a single machine may quickly drain resources in certain conditions, including the following:
You should scale Citrix outward by deploying more servers. When operating Citrix machines in a virtual environment, adding more cores to a single server is viable only if physical cores are available and commitment thresholds aren't breached. You must also consider the impact of having a single, large, multi-core machine on the same virtual host as many machines with fewer cores.
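As a quick sanity check before adding cores, you can compare the total vCPUs allocated across all guests on a host to the host's physical core count. The figures and the acceptable ratio in the sketch below are assumptions for illustration only; apply the commitment thresholds defined by your virtualization platform and internal policies.

```python
# Illustrative vCPU over-commitment check; the example figures are hypothetical.
def vcpu_commitment_ratio(allocated_vcpus, physical_cores):
    """Ratio of vCPUs allocated across all guests to physical cores on the host."""
    return allocated_vcpus / physical_cores

# A host with 32 physical cores running three Citrix guests plus one large VM.
allocated = 8 + 8 + 8 + 16  # vCPUs across all guests on the host
ratio = vcpu_commitment_ratio(allocated, physical_cores=32)
print(f"Commitment ratio: {ratio:.2f}:1")  # 1.25:1 in this hypothetical layout
```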
Environments with a large number of users will most likely review a large number of documents. This equates to more conversion requests being made to the conversion agents.
A good rule to follow in a highly transactional environment is to have 1 GB of RAM per 1,000 queries per second. Another popular approach is to buy as much RAM as you can afford. In an environment with many users, the number of CPU cores may be the first resource to max out. Adding more cores may be the answer, but more cores generally call for more RAM as well.
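The arithmetic behind the rule of thumb is straightforward, as the sketch below shows. The function name and the example peak query rate are hypothetical; treat the result as a rough starting point, not a substitute for measuring your own workload.

```python
# Rough RAM estimate from the "1 GB per 1,000 queries per second" rule of thumb.
def estimate_sql_ram_gb(peak_queries_per_second):
    """Return an approximate RAM target (GB) for a highly transactional SQL Server."""
    return peak_queries_per_second / 1000.0  # 1 GB per 1,000 QPS

# Example: a SQL Server peaking at 250,000 queries per second.
print(estimate_sql_ram_gb(250_000))  # 250.0 GB by the rule of thumb
```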
The following considerations may pertain to SQL environments that experience heavy traffic:
In workspaces with many users, ensure that users with access to create views, fields, indexes, and other objects in the workspace are aware of best practices for large workspaces. The following are some of these best practice guidelines:
Relativity provides users with the ability to add or delete fields on the document table on the fly. This operation momentarily locks the entire table while the field is added or deleted.
You can also add fields to and remove fields from the full-text index on demand. This can cause increased load on the SQL Server, especially if the fields contain a lot of data or a lot of new records are suddenly added. The following are guidelines for working with custom fields in a large workspace: