Last date modified: 2025-Nov-24

Best Practices

Refer to the tips and recommendations below to effectively use aiR Assist.

Prompting aiR Assist

Following these best practices will help ensure accurate, efficient, and relevant responses from aiR Assist.

  1. Ask clear and focused questions
    For optimal results, keep questions concise and specific. Focus each query on a single topic or piece of information so it can be mapped to relevant documents. Long, compound, or highly complex questions take longer to process, may not map cleanly to documents, and can reduce the clarity of the generated response.
  2. Leverage keywords and synonyms
    Retrieval engines respond best to specific terms. Include likely variations of a keyword (such as bribe, gift, or incentive) and entity synonyms (such as bt or bt.us for Big Thorium). Retrieval benefits from alternative phrasing.
  3. Stay within the saved search context
    aiR Assist generates responses based on the documents included in the public saved searches used to build its indexes. Each saved search defines the specific dataset aiR Assist can draw from within the workspace. Questions should therefore relate to the content of those saved searches rather than general or external topics.
  4. Expect some variation in repeated queries
    Submitting the same or similar questions multiple times may produce slightly different answers, as aiR Assist regenerates responses dynamically. However, the core content and conclusions should remain consistent.
  5. Review citations and supporting references
    Each aiR Assist response includes citations and supporting document references. Review these sources to verify accuracy and context, especially when using the results for analysis, reporting, or decision-making.
  6. Maintain high-quality indexed data
    Response quality depends on the content indexed. Ensure that the dataset includes clean, text-extractable documents and that saved searches accurately capture relevant materials. Avoid including duplicate or irrelevant documents within indexes.
  7. Avoid overly broad or “find everything” queries
    aiR Assist is optimized to find and synthesize the most relevant information, not to return exhaustive lists of all matching documents. For comprehensive discovery, you can use standard search tools in combination with aiR Assist or, depending on the use case, consider using our other aiR Suite products.
  8. Avoid using aiR Assist for calculations
    Don't rely on aiR Assist to add up invoices, total damages, or perform complex calculations. The relevant figures may be scattered across multiple files, and LLMs are not reliably accurate at math.
  9. Break down complex reasoning or multi-step queries
    Avoid questions that require multiple steps or reasoning "leaps" (such as "show me emails from the director who signed the compliance policy"). Break these into smaller, sequential questions, for example by first asking who signed the compliance policy and then asking about that person's emails, so aiR Assist retrieves the most relevant information and clearly understands your objective.
  10. Don't rely on metadata for filtering or context
    Currently, aiR Assist works on extracted text, not metadata. If a question requires metadata filtering (such as only emails between specific dates), results may be incomplete.
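The keyword-and-synonym guidance in tip 2 above can be sketched in code. This is a hypothetical, standalone illustration — the SYNONYMS map and expand_query function are not part of aiR Assist or any Relativity API — showing how alternative phrasings widen what a retrieval engine can match:

```python
# Hypothetical sketch of tip 2: supplying keyword variations and entity
# synonyms gives a retrieval engine more terms to match on. The names
# below are illustrative only, not part of aiR Assist.

SYNONYMS = {
    "bribe": ["bribe", "gift", "incentive", "kickback"],
    "big thorium": ["Big Thorium", "bt", "bt.us"],
}

def expand_query(query: str) -> list[str]:
    """Return the query plus variants with known synonyms substituted."""
    variants = [query]
    lowered = query.lower()
    for term, alternatives in SYNONYMS.items():
        if term in lowered:
            for alt in alternatives:
                variant = lowered.replace(term, alt)
                if variant not in variants:
                    variants.append(variant)
    return variants

print(expand_query("Did Big Thorium offer a bribe?"))
```

In practice you do not need to write code: simply phrasing your question with several likely keywords and entity aliases achieves the same effect.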

Here are examples of some supported and unsupported questions:

Supported question examples:
  • What evidence is there of [issue]?
  • Why did [event] happen?
  • Did [actor] discuss [topic]?
  • What topics did [actor] discuss with [actor]?
  • Who is [actor]?
  • What is [actor]’s role in the company?
  • Explain the key dates in the RFP process.
  • What is [document name] about?
  • What are the discrepancies between [doc 1] and [doc 2]?

Unsupported question examples:
  • Find me documents similar to this one.
  • Find me all examples of [issue].
  • Find me all conversations that took place between [actor] and [actor].
  • Find me emails between March 1-March 31, 2012.
  • Find me emails sent after-hours.
  • Write me a prompt for aiR for Review related to this matter.