Chat is still the main way to interact with Sofie. Use chat for one-off work and exploration. Use Orchestrations when the process itself should be reused.
Choose the right Sofie surface
Start by deciding whether you need an Orchestration at all.

| Need | Use |
|---|---|
| One question or one analysis pass | Chat |
| A complex one-time task that should be planned first | Plan Mode |
| A reusable instruction you paste into chat | Saved prompt |
| A document deliverable | CoDraft |
| A spreadsheet, tracker, or analysis table | CoSheet |
| A shared source set | Workspace |
| A repeatable workflow with inputs, steps, tools, tests, and review points | Orchestration |
Build an Orchestration when:

- The same workflow repeats for many projects, batches, documents, studies, products, or investigations.
- Users can provide a consistent set of inputs.
- The output can be reviewed against expectations.
Define the workflow before you build
Before creating agents or tasks, write the workflow in plain language: the inputs, the steps, the tools, the tests, and the review points.

Understand the main settings
The Orchestration editor includes settings that change how a run behaves. Set these before you tune individual agents and tasks.

| Setting | What it controls | Use it intelligently |
|---|---|---|
| Process Type | Whether tasks run as a Sequential process or a Hierarchical process. | Use Sequential when task order is fixed. Use Hierarchical when a manager-style planning step should coordinate agents and tasks. |
| Enable Planning | Whether a hierarchical Orchestration creates a plan before work is delegated. | Turn it on for complex work with branching, source gathering, or multiple review points. It is not available in sequential mode. |
| Short-Term Memory (Within Run) | Whether tasks can save and retrieve useful information from other tasks in the same run. | Use it when later tasks need findings from earlier tasks but should not receive a giant copied transcript. |
| Long-Term Memory (Cross-Run Learning) | Whether agents can save general lessons from prior runs and use them in future runs. | Use it for stable workflow lessons, recurring reviewer preferences, and repeated process patterns. Do not use it as the source of record for project facts. |
| Max Requests Per Minute | The run’s request pacing. | Lower it when runs hit service limits or use many source/tool steps. Avoid increasing it unless your environment supports the load. |
| Require Citations | Whether responses should include citations. | Use it for source-backed research, evidence review, and quality workflows. Still review the cited sources yourself. |
| Allow Parallel Task Execution | Whether independent tasks may run at the same time. | Use it for independent source reviews or analyses. Turn it off when every task depends on the exact output of the previous task. |
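The parallel-execution setting above can be illustrated with a plain dependency graph: tasks with no unmet dependencies are the ones that could safely run at the same time, while a strict chain forces sequential order. This is a minimal Python sketch using the standard library; the task names are invented for illustration and are not a real Orchestration.

```python
from graphlib import TopologicalSorter

# Hypothetical task dependency map: each task lists the tasks whose
# output it needs. These names are examples, not Sofie's task model.
dependencies = {
    "collect_evidence": set(),
    "review_sources": set(),
    "analyze_metrics": {"collect_evidence"},
    "draft_report": {"review_sources", "analyze_metrics"},
}

sorter = TopologicalSorter(dependencies)
sorter.prepare()

# Tasks returned together by get_ready() have no unmet dependencies,
# so a parallel run could execute each batch concurrently.
batches = []
while sorter.is_active():
    ready = sorted(sorter.get_ready())
    batches.append(ready)
    for task in ready:
        sorter.done(task)

print(batches)
```

If every task depended on the previous one, each batch would contain a single task, which is exactly the case where turning parallel execution off loses nothing.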
Design around inputs
Inputs are the contract between the person running the Orchestration and the workflow. Good inputs are specific:

| Weak input | Better input |
|---|---|
| Files | Deviation evidence files |
| Info | Deviation description |
| Data | Effectiveness metrics CoSheet |
| Document | CAPA plan CoDraft |
| Workspace | Investigation Workspace |
| Date | Observation window end date |
In task instructions, refer to inputs by name, such as {Investigation Workspace} or {CAPA plan CoDraft}. This keeps the workflow reusable because the instruction points to the run’s selected input instead of today’s example file.
Use dropdown inputs when you want users to choose from a controlled list, such as Review focus, Product type, Investigation phase, or Output destination.
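The placeholder idea can be sketched in a few lines of plain Python: the instruction text names the input, and each run substitutes its own selected values. The instruction wording, input names, and values below are invented for illustration; this is not Sofie's actual substitution mechanism.

```python
# A reusable instruction that names inputs instead of hard-coding files.
instruction = (
    "Search {Investigation Workspace} for deviation evidence and "
    "summarize gaps against {CAPA plan CoDraft}."
)

# Hypothetical inputs selected for one particular run.
run_inputs = {
    "Investigation Workspace": "Line 3 Filling Investigation",
    "CAPA plan CoDraft": "CAPA-2024-017 Plan",
}

# Substitute each named input; the instruction itself never changes
# between runs, only the selected inputs do.
resolved = instruction
for name, value in run_inputs.items():
    resolved = resolved.replace("{" + name + "}", value)

print(resolved)
```

The next run supplies a different Workspace and CoDraft, and the same instruction resolves to that project instead.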
Set source rules
Most weak Orchestration runs come from vague source handling. Tell the workflow what sources matter and what to do when they conflict. Include source rules such as:

- Use the selected Workspace as the project source set.
- Use Workspace search for project files before using older chat assumptions.
- Treat attached files as the current run’s evidence.
- Use CoDraft templates for structure, not as factual evidence unless the user says so.
- Use CoMeeting transcripts as discussion context, not final source decisions.
- If sources conflict, list the conflict instead of resolving it silently.
- If a required source is missing, pause and ask for it.
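The "list the conflict instead of resolving it silently" rule can be sketched as a small comparison over sources: collect each fact per source and report the facts where sources disagree, rather than picking a winner. The source names and facts below are invented examples, not a real evidence model.

```python
# Hypothetical facts extracted from three sources in one run.
sources = {
    "batch_record": {"fill_volume_ml": 500},
    "deviation_report": {"fill_volume_ml": 480},
    "capa_plan": {"observation_window_days": 90},
}

# Group every reported value by fact, keeping track of which source
# said what.
by_fact = {}
for name, facts in sources.items():
    for fact, value in facts.items():
        by_fact.setdefault(fact, {})[name] = value

# Keep only the facts where the sources disagree; these get surfaced
# to the reviewer instead of being resolved silently.
disagreements = {
    fact: seen for fact, seen in by_fact.items()
    if len(set(seen.values())) > 1
}
print(disagreements)
```

Here the two fill-volume values surface as a conflict for a person to resolve, while the uncontested observation window passes through quietly.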
Use agents only when roles are different
Agents are useful when the workflow needs different responsibilities. They are not useful when they only split one task into more names. Good agent roles:

- Evidence reviewer.
- Data analyst.
- Source gap reviewer.
- Report drafter.
- Quality reviewer.
- Researcher.
Weak agent roles:

- Expert 1, Expert 2, Expert 3.
- Reviewer and Checker with the same instructions.
- A general Quality expert expected to do every step.
Use agent memory carefully
Agent memory is useful, but it can also make a workflow harder to reason about if you use it for the wrong information. Use Short-Term Memory (Within Run) when tasks in the same run need to pass forward useful discoveries:

- The evidence reviewer finds a missing batch record page.
- The data analyst identifies outlier lots that the drafter must mention.
- The researcher finds a source conflict that the reviewer must resolve.
Use Long-Term Memory (Cross-Run Learning) when the same lesson recurs across runs:

- A QA reviewer repeatedly asks for facts, assumptions, gaps, and SME questions to be separated.
- A report drafter learns the preferred structure for a recurring CoDraft template.
- A data analyst learns that a specific metric should be explained with the same caveat.
Make each task reviewable
A task should produce something a person can inspect before trusting the next step. A strong task is:

- Specific.
- Source-aware.
- Limited to one kind of work.
- Easy to compare against the expected output.
- Clear about what Sofie should not decide.
Choose output modes deliberately
Use the output mode that matches how the result will be reviewed or reused.

| Output mode | Use it when | Example |
|---|---|---|
| Text Output | A person will read a narrative, list, or table in the run result. | Investigation summary, source gaps, review notes. |
| Structured Output | Later tasks or tests need named fields and repeatable sections. | Findings array, metric assessment fields, risk table rows. |
| Fill Template | The workflow should populate a CoDraft template. | Validation protocol section, CAPA report, URS draft. |
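The difference between Text Output and Structured Output is easiest to see with a concrete shape: named fields give later tasks and tests something to check directly, instead of parsing a narrative. The field names and values below are a hypothetical sketch, not Sofie's actual structured-output schema.

```python
from dataclasses import dataclass, field

# A hypothetical Structured Output shape for a findings task.
@dataclass
class FindingsOutput:
    findings: list = field(default_factory=list)
    source_gaps: list = field(default_factory=list)
    risk_level: str = "unclassified"

result = FindingsOutput(
    findings=[{"id": "F-1", "statement": "Batch record page 12 missing"}],
    source_gaps=["No calibration log for scale SC-04"],
    risk_level="medium",
)

# Because the fields are named, a downstream task or test can validate
# them directly rather than searching free text.
assert result.risk_level in {"low", "medium", "high", "unclassified"}
print(len(result.findings), result.risk_level)
```

The same content as Text Output would be a paragraph a person reads; as Structured Output it becomes fields a test can assert on.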
Add tools with intent
Tools let an Orchestration do work beyond plain text generation. Add tools because a task needs a capability, not because the workflow might use it someday. Use tools when the workflow needs to:

- Search selected Workspace content.
- Read or create CoDraft content.
- Analyze CoSheet data.
- Use CoMeeting context.
- Request human input.
- Create or update an artifact.
- Use connected app context when your organization enables it.
Put human review before risk
Use Require Human Review when the workflow should pause before continuing. Add review before:

- Drawing a root cause conclusion.
- Recommending CAPA effectiveness.
- Interpreting conflicting evidence.
- Creating a final CoDraft.
- Saving output to a shared Workspace.
- Sending or changing content in a connected app.
- Continuing after missing required sources.
Build small, then expand
The first version should be the smallest useful Orchestration.

Create the core path
Build the required inputs, one or two agents, and the smallest task sequence that creates a useful output.
Run it with realistic inputs
Use a real Workspace, representative files, and a source set similar to what users will provide later.
Inspect the first failure
Look for vague inputs, missing source rules, unsupported conclusions, weak output shape, or late human review.
Fix one layer at a time
Improve inputs first, then task instructions, then tools, then output shape, then tests.
Test like a user will run it
Use tests to check whether the workflow still behaves the way you expect. In the editor, use Create Test to define test inputs and expected task outputs. Use Run Test or Run All Tests after changes. If your workspace shows Drift Detected, review the drift before running the test again. Test cases should cover:

- A normal source set.
- Missing optional inputs.
- Missing required evidence.
- Conflicting sources.
- A larger Workspace with irrelevant files.
- A CoSheet with unexpected blank values.
- A template output with required placeholders.
- A run that should pause for human review.
Match each expected output to a validation strategy:

| Validation strategy | Use it for |
|---|---|
| Exact | Stable labels, fixed statuses, and values that should not vary. |
| Semantic | Wording that may vary while meaning should stay the same. |
| Contains | Required phrases, source names, warnings, or section headings. |
| Regex | IDs, dates, codes, or formats that follow a pattern. |
| Length | Outputs that should stay within a reviewable size. |
| AI Judge | Qualitative checks such as accuracy, completeness, tone, missing support, or custom criteria. |
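Four of the strategies in the table above are mechanical checks, and writing them out as plain functions shows what each one tolerates. This is an illustrative Python sketch, not Sofie's actual test engine; the sample output string is invented.

```python
import re

# Minimal sketches of four validation strategies from the table.
def exact(output: str, expected: str) -> bool:
    # Stable labels and fixed statuses: any variation fails.
    return output == expected

def contains(output: str, required: list) -> bool:
    # Required phrases, source names, or headings must all appear.
    return all(phrase in output for phrase in required)

def regex(output: str, pattern: str) -> bool:
    # IDs, dates, and codes that follow a pattern.
    return re.search(pattern, output) is not None

def length(output: str, max_chars: int) -> bool:
    # Keep outputs within a reviewable size.
    return len(output) <= max_chars

# A hypothetical task output to validate.
output = "CAPA-2024-017 closed on 2024-06-30. No open source gaps."
checks = [
    exact("closed", "closed"),                          # fixed status label
    contains(output, ["CAPA-2024-017", "source gaps"]), # required phrases
    regex(output, r"\b\d{4}-\d{2}-\d{2}\b"),            # date pattern
    length(output, 200),                                # reviewable size
]
print(checks)
```

Semantic and AI Judge checks have no equivalent here: they evaluate meaning and quality, which is exactly why they belong to the run environment rather than a string comparison.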
Publish only after users can run it
Keep an Orchestration private or shared with a small group while it is changing. Publish to the organization when:

- The name and description tell users when to run it.
- Required inputs are clear.
- Optional inputs are helpful but not confusing.
- Source priority is explicit.
- Each task has a reviewable output.
- Human review happens before key decisions or shared outputs.
- Tests cover realistic inputs.
- A person other than the builder can run it without hidden context.
Share editing carefully
Use sharing to bring in people who can improve or review the workflow. Suggested roles:

| Collaborator | Useful contribution |
|---|---|
| Process owner | Confirms when the workflow should run and what outputs matter. |
| SME | Checks source interpretation and missing evidence handling. |
| QA reviewer | Reviews assumptions, review points, and output structure. |
| Data owner | Checks CoSheet inputs and metric interpretation. |
| Template owner | Confirms CoDraft template fields and placeholder meaning. |
Run Orchestrations from chat intelligently
When you run an Orchestration from chat, Sofie starts by asking for required information if it is missing. Before you send the run request, confirm that:

- The Orchestration name is the one you intend to use.
- The Workspace and artifacts are the correct project.
- Attached files are current.
- Dates and options are unambiguous.
- Sofie should pause before any conclusion or artifact creation that needs review.
Maintain Orchestrations over time
An Orchestration can become stale when source patterns, templates, team expectations, or review needs change. Review important Orchestrations when:

- A template changes.
- The expected output changes.
- A new source type becomes common.
- Users report confusing inputs.
- Tests start failing.
- Drift Detected appears.
- A workflow produces unsupported or hard-to-review output.
- A published workflow should be limited again.
Common problems and fixes
| Problem | Likely cause | Fix |
|---|---|---|
| Users provide the wrong files | Input labels are too vague | Rename inputs and add specific descriptions. |
| Output mixes facts and assumptions | Task instructions do not separate them | Require separate sections for facts, assumptions, gaps, and questions. |
| Sofie reaches a conclusion too early | Review point is too late or missing | Add Require Human Review before conclusion tasks. |
| Run searches unrelated material | Source rules are too broad | Limit the Workspace, file set, or search scope. |
| Output is hard to test | Output shape is loose | Use tables or Structured Output fields. |
| Run created or modified the wrong artifact | Output destination or artifact instruction is too loose | Tighten the task instruction, require a destination input, or add human review before artifact changes. |
| Workflow is too slow or noisy | Too many agents or tools | Remove overlapping agents and unused tools. |
| Published workflow confuses users | It assumes builder knowledge | Add a clearer description, required inputs, and example run guidance. |
| Tests fail after edits | Workflow changed but test stayed old | Update the test or use drift repair after confirming the new behavior. |
Useful Orchestration prompts
Use these in chat before or after editing an Orchestration.

Design before building
Review an existing Orchestration
Create a test plan
Improve a failed run
Prepare for publishing
Life sciences patterns
Deviation investigation
Use the Orchestration to separate evidence collection from interpretation. Recommended structure:

- Required inputs: deviation description, investigation Workspace, evidence files, batch or equipment references.
- Early task: confirmed facts and timeline.
- Middle task: source gaps and SME questions.
- Review point: before root cause analysis.
- Output: CoDraft investigation section with placeholders for unresolved evidence.
CAPA effectiveness check
Use the Orchestration to make evidence and criteria visible before drafting. Recommended structure:

- Required inputs: CAPA plan, effectiveness criteria, observation window, metrics CoSheet, evidence files.
- Early task: criteria extraction.
- Middle task: metric review and evidence table.
- Review point: before conclusion.
- Output: CoDraft section plus evidence table.
Validation protocol generation
Use the Orchestration to move from source comparison to controlled drafting. Recommended structure:

- Required inputs: URS, risk assessment, equipment or process description, protocol template, Workspace.
- Early task: source map and gaps.
- Middle task: protocol outline and test section plan.
- Review point: before acceptance criteria or final section generation.
- Output: filled CoDraft template with placeholders for SME confirmation.
Batch record review
Use the Orchestration to keep exceptions, source references, and follow-up questions traceable. Recommended structure:

- Required inputs: batch record file, product, batch number, review focus, Workspace.
- Early task: section inventory and missing pages.
- Middle task: exception table.
- Review point: before disposition language.
- Output: CoSheet tracker or CoDraft review summary.