Data Analysis
Data Analysis is a combination of machine stages that make predictions, perform calculations, curate machine output, and generate reports.
There are two types of data, and Data Breach Response treats each one differently when finding PI.
- Structured data—data that is organized in a specific and predefined way, typically in a table with columns and rows, where each data point has a specific data type. Data Breach Response identifies table boundaries and analyzes header and column content to predict PI.
- Unstructured data—unlabeled or otherwise unorganized data. Detections for unstructured data are currently text based (email, text documents, etc.), with additional unstructured data sources such as photos and audio files planned for a future state. Data Breach Response uses the context of the document to differentiate between types of PI.
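The two approaches can be sketched in miniature. This is a hypothetical illustration, not the product's actual detectors; the header hints, regex pattern, and context window are all assumptions:

```python
import re

def detect_structured(headers, rows):
    """Predict a PI type for each column from its header text (illustrative)."""
    header_hints = {"ssn": "SSN", "email": "Email", "phone": "Phone"}
    predictions = {}
    for i, header in enumerate(headers):
        for hint, pi_type in header_hints.items():
            if hint in header.lower():
                predictions[i] = pi_type
    return predictions

def detect_unstructured(text):
    """Find SSN-like patterns, using nearby context to reduce false hits."""
    hits = []
    for match in re.finditer(r"\b\d{3}-\d{2}-\d{4}\b", text):
        # Look at the 30 characters before the match for confirming context.
        context = text[max(0, match.start() - 30):match.start()].lower()
        if "ssn" in context or "social security" in context:
            hits.append(("SSN", match.group()))
    return hits
```

The structured path trusts the table's own labels, while the unstructured path has to infer meaning from surrounding text, which is why the two are handled by separate stages.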
Data Analysis stages
Stage | Description | Can documents be reviewed while the stage is running? | When to run the stage
---|---|---|---
Run Blocklisting | The blocklist consists of terms added from the Blocklisting tool that will not be detected as PI. PI detectors ignore new detections that match blocklisted terms, and prior detections matching blocklisted terms are marked as Blocklisted and have their links broken. Manually added detections that match terms in the blocklist are not removed. If Blocklisting is run when no changes have been made to the blocklist, the stage status shows as Skipped in the Progress section. | No | Data Analysis is being run for the first time; new documents are added; changes to detectors have been made; unstructured documents have been reviewed; changes have been made to the blocklist
Run Unstructured Detectors | Identifies PI by running all enabled PI detectors on unlocked unstructured documents. As soon as unstructured detectors have finished processing a document, the document becomes available for review. | Yes | Data Analysis is being run for the first time; new documents are added; changes to detectors have been made; unstructured documents have been reviewed; changes have been made to the blocklist
Run Structured Detectors | Identifies PI by running all enabled PI detectors on unlocked structured documents. In addition, all names and PI from structured documents are automatically linked. As soon as structured detectors have finished processing a document, the document becomes available for review. | Yes | Data Analysis is being run for the first time; new documents are added; structured documents have been reviewed; changes to spreadsheet QC have been made; there have been changes to structured documents and you will be running normalization (this ensures all entity links in structured documents are up to date)
Run Normalization | Standardizes names and PI into consolidated entities and generates an updated entity report. | No | Data Analysis is being run for the first time; new documents are added; deduplication settings are updated; new entities are linked; conflicts have been reviewed
Compile Insights | Calculates and consolidates PI and entity statistics for reporting. | No | Data Analysis is being run for the first time; new documents are added; changes to detectors have been made; changes have been made to the blocklist; structured documents have been reviewed; unstructured documents have been reviewed; changes to spreadsheet QC have been made; conflicts have been reviewed
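The stages above always execute in a fixed order, and any stage you do not select is skipped. A minimal sketch of that execution model (the stage names come from the table; the runner itself is an assumption for illustration):

```python
# Stage order as documented above; everything else is illustrative.
STAGE_ORDER = [
    "Run Blocklisting",
    "Run Unstructured Detectors",
    "Run Structured Detectors",
    "Run Normalization",
    "Compile Insights",
]

def run_data_analysis(selected, execute=lambda stage: "Completed"):
    """Run each selected stage in its fixed order; unselected stages are Skipped."""
    results = {}
    for stage in STAGE_ORDER:
        results[stage] = execute(stage) if stage in selected else "Skipped"
    return results
```

For example, a QC-focused run might select only Blocklisting, the two detector stages, and Compile Insights, leaving Normalization skipped.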
Running Data Analysis
This section provides instructions for running Data Analysis and lists common use cases when running Data Analysis.
To run Data Analysis:
- Select Run Data Analysis in the console.
- Select the processes to run.
- Click Run to start Data Analysis.
Choosing stages to run
On each run, you can configure Data Analysis to run all or only some stages.
Depending on your goal, it may be helpful to select only some stages to run. Common use cases include:
Scenario 1: Running Data Analysis for the first time
- Run Blocklisting
- Run Unstructured Detectors
- Run Structured Detectors
- Run Normalization
- Compile Insights
The initial Entity Centric Report is generated from spreadsheet entities only. Entities from unstructured documents will appear on the Entity Centric Report when they are linked in the document viewer.
Scenario 2: Running Data Analysis during QC review
QC review primarily focuses on refining detectors and potentially blocklisting false hits. At this stage, having an up-to-date Entity Centric Report is not the priority. For this reason, and to reduce runtime, run only the following stages:
- Run Blocklisting
- Run Unstructured Detectors
- Run Structured Detectors
- Compile Insights
Scenario 3: Running Data Analysis during review
Just as in the QC process, detectors may be refined during Review. You can choose to run the same stages as in Scenario 2 if you wish to make detector or blocklist updates during Review.
If you wish to just generate updated versions of the Reviewer Progress and/or Document Report, run the following stage only:
- Compile Insights
Scenario 4: Running Data Analysis during normalization
During the deduplication process, entities may be merged, entities may be unmerged, or Deduplication Settings may be updated. It is not typical for detectors to be updated at this stage.
If changes have been made to structured documents, for example adding or removing PI, since the entity report was last generated, include Run Structured Detectors. This stage is responsible for automatically linking names and PI in structured documents to create entities. Running Structured Detectors ensures those links are up to date for the entity report.
Monitoring Data Analysis status
A run’s progress can be monitored on the Data Analysis page. Data Analysis breaks down each stage into a section that includes dashboard summaries, sub-job details, and counts.
Overall progress
Overall progress can be monitored using the Progress section. Statuses can be:
- Not Started—the stage has not begun.
- Still Running—the stage is in the middle of processing.
- Completed— the stage has finished processing successfully.
- Completed with Failures—the stage has finished processing and some items encountered failures during processing.
- Failed—the stage has finished processing and many items encountered failures during processing, so the stage is considered to have failed.
- Skipped—the stage was not run.
- Interrupted—the stage was stopped in the middle of processing.
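The relationship between item-level failures and the overall stage status can be sketched as follows. Note the 50% cutoff separating Completed with Failures from Failed is an assumption for illustration; the documentation does not state the exact threshold:

```python
def stage_status(total, failed, started=True, finished=True, failure_cutoff=0.5):
    """Derive a stage status from item counts (illustrative sketch).

    failure_cutoff is an assumed threshold; the real product's cutoff
    for marking a stage Failed is not documented here.
    """
    if not started:
        return "Not Started"
    if not finished:
        return "Still Running"
    if failed == 0:
        return "Completed"
    if failed / total >= failure_cutoff:
        return "Failed"
    return "Completed with Failures"
```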
Blocklisting details
Dashboard numbers
- Stage—the current status of the stage.
- Errors—the number of errors that occurred during blocklisting. You can retry these errors; see Canceling and retrying Data Analysis for details.
If Blocklisting is run but there have been no changes to the blocklist, the status section displays a Skipped status.
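The blocklisting rules described earlier (new matching machine detections are ignored, prior matching detections are marked Blocklisted and unlinked, manual detections are untouched) can be sketched like this. The detection data model is an assumption:

```python
def apply_blocklist(new_detections, prior_detections, blocklist):
    """Apply blocklist rules; each detection is an illustrative dict with
    'term', 'manual' (bool), 'links', and 'status' keys."""
    terms = {t.lower() for t in blocklist}
    # New machine detections matching a blocklisted term are dropped entirely.
    kept_new = [d for d in new_detections
                if d["manual"] or d["term"].lower() not in terms]
    # Prior matching detections are marked Blocklisted and have links broken;
    # manually added detections are never removed or altered.
    updated_prior = []
    for d in prior_detections:
        if not d["manual"] and d["term"].lower() in terms:
            d = {**d, "status": "Blocklisted", "links": []}
        updated_prior.append(d)
    return kept_new, updated_prior
```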
PI Detection details
Dashboard numbers
- Ready for Review—the number of documents that have finished processing through the PI Detection stage and can start to be reviewed. View Documents will take you to the Project Dashboard to view these documents. See Data Analysis and document review for more information.
- Structured Documents Completed—the number of structured documents that have finished processing through the PI Detection stage. Structured and unstructured detection run in parallel.
- Unstructured Documents Completed—the number of unstructured documents that have finished processing through the PI Detection stage. Structured and unstructured detection run in parallel.
- Errors—the number of errors that occurred during PI Detection. View Errors will take you to the Project Dashboard to view these errors. You can retry these errors; see Canceling and retrying Data Analysis for details.
When running Data Analysis, you can choose to run only unstructured or only structured detection. If one is not run, its Documents Completed count will remain zero. For example, if only unstructured detection is run, Structured Documents Completed will display zero documents because the structured detectors were not run.
Entity Normalization and Deduplication details
- Address Standardization—standardizes addresses into a single, consistent format.
- Normalizer—consolidates annotation links and records into entities. Merges entities with the same PI.
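The Normalizer's merge behavior (entities sharing the same PI are consolidated) can be sketched as a simple grouping pass. The entity data model here is an assumption for illustration:

```python
def merge_entities(entities):
    """Merge entities that share any PI value (illustrative sketch).

    Each entity is a dict with 'names' and 'pi' lists.
    """
    merged = []
    for entity in entities:
        pi = set(entity["pi"])
        names = set(entity["names"])
        remaining = []
        # Absorb any already-merged entity that shares a PI value.
        for m in merged:
            if pi & m["pi"]:
                pi |= m["pi"]
                names |= m["names"]
            else:
                remaining.append(m)
        remaining.append({"names": names, "pi": pi})
        merged = remaining
    return merged
```

Two records for "Jon Smith" and "Jonathan Smith" that carry the same SSN would collapse into one consolidated entity under this rule.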
Dashboard numbers
- Stage—the substage currently being run
- Completion—percent completion of the stage
- Errors—the number of errors that occurred during Entity Normalization & Deduplication. View Errors will take you to the Project Dashboard to view these errors. You can retry these errors; see Canceling and retrying Data Analysis for details.
Compile Insights details
- Document Report Generation—creates the Document Report by aggregating PI and entity information on a document level.
- Document Indexing—indexes the database for PI and entity search.
- Table Header Analysis—identifies the review status, the number of instances of a header, and the PI assignment of that header for reporting purposes.
- Precision and Recall—calculates precision and recall. Precision is used to evaluate how accurate a detector’s PI predictions are. Recall is used to evaluate how well a detector is retrieving PI.
- Training—PI detector models are retrained based on user additions, edits, and deletions of PI on unstructured documents.
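The precision and recall calculation follows the standard definitions: precision = TP / (TP + FP) and recall = TP / (TP + FN). A minimal sketch, assuming reviewer-confirmed PI serves as ground truth (the exact evaluation inputs are not documented here):

```python
def precision_recall(predicted, confirmed):
    """Compute precision and recall for one detector (illustrative).

    predicted: set of PI spans the detector found
    confirmed: set of PI spans confirmed as true PI during review
    """
    true_positives = len(predicted & confirmed)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(confirmed) if confirmed else 0.0
    return precision, recall
```

High precision with low recall means the detector's hits are trustworthy but it misses PI; the reverse means it casts a wide net with many false hits.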
Dashboard numbers
- Stage—the substage currently being run
- Completion—percent completion of the stage
- Errors—the number of errors that occurred during Compile Insights. View Errors will take you to the Project Dashboard to view these errors. You can retry these errors; see Canceling and retrying Data Analysis for details.
Data Analysis and document review
While Data Analysis is running, reviewers cannot add, edit, or delete entities or PI on documents. However, to reduce time to review, the Unstructured and Structured PI Detection stages follow a document streaming approach: as individual documents finish the PI Detection stage, they become available for review. Blocklisting, Entity Normalization & Deduplication, and Compile Insights do not follow this approach; all documents must finish processing before they become available for review. Take this into consideration when selecting which stages to run.
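The streaming idea is that availability is per-document rather than per-stage. A minimal generator sketch (the detect callback and statuses are illustrative, not the product's API):

```python
def stream_detection(documents, detect=lambda doc: doc):
    """Yield (document, status) as each document finishes PI Detection.

    Earlier documents become reviewable before later ones are processed.
    """
    for doc in documents:
        detect(doc)                      # run PI detectors on this document
        yield doc, "Ready for review"    # reviewable immediately
```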
All documents are assigned a Data Analysis Status to indicate their availability for review:
- Data Analysis run required—the initial status after ingestion. Indicates Data Analysis has not yet been run on the document.
- Not ready for review—the document is currently being processed through Blocklisting, PI Detection, or Compile Insights. The status changes to Not ready for review when Data Analysis is run, and reviewers will not be able to edit the document.
- Running normalizer—the document is currently being processed through Entity Normalization & Deduplication. The status changes to Running normalizer when Entity Normalization & Deduplication is running, and reviewers will not be able to edit the document.
- Ready for review—the document has finished processing through PI Detection and/or the Data Analysis run is complete. Reviewers are able to edit the document.
You can view a document’s Data Analysis Status on the Project Dashboard Document List and the field can be searched on using PI and Entity Search.
Canceling and retrying Data Analysis
You can stop Data Analysis at any time while it is in progress. To stop it, select the Cancel Run button in the Project Actions console.
If a stage fails or Data Analysis is manually stopped, it can be restarted using the Retry button in the Progress card. Data Analysis will restart from the failed or interrupted stage when retrying.
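The retry semantics described above (resume at the failed or interrupted stage, keep completed results) can be sketched like this. The statuses and execute callback are illustrative:

```python
def retry_run(stage_results, execute=lambda stage: "Completed"):
    """Re-run stages starting from the first one that did not complete.

    stage_results is an ordered dict of stage name -> status from the
    prior run; completed stages before the failure keep their results.
    """
    resume_from = {"Failed", "Interrupted", "Not Started", "Still Running"}
    results = dict(stage_results)
    resuming = False
    for stage, status in stage_results.items():
        if status in resume_from:
            resuming = True
        if resuming:
            results[stage] = execute(stage)
    return results
```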
To start a new run, select Run Data Analysis in the Project Actions console.
Document errors
The Errors field will indicate the number of documents that have errored or encountered an issue during that stage. Select View Errors or the View Errored Documents button in the console to view these documents and their specific errors on the Project Dashboard. For more information on how to address these, see Document Flags.
Data Analysis history
Like other complex features in Relativity, Data Analysis provides a View Run History modal for auditing past runs.
The following information is available in View Run History:
Run details
- Status—the status of the Data Analysis run
- Duration—the run time
- Start Time—the date and time the run was started
- End Time—the date and time the run ended
Stage history
- Stage—the name of the stage
- Status—the status of the stage
- Start Time—the date and time the stage was started
- End Time—the date and time the stage ended
- Duration—the run time of the stage