An Orchestration turns a repeatable Sofie workflow into a reusable process. Use it when you want the same kind of work to happen with different inputs, consistent review points, and a predictable output. Orchestrations are most useful when you treat them like a designed workflow, not a long prompt. A good Orchestration tells Sofie what to ask for, what sources to use, what each step should produce, where a person must review the work, and what the final result should look like.
Chat is still the main way to interact with Sofie. Use chat for one-off work and exploration. Use Orchestrations when the process itself should be reused.

Choose the right Sofie surface

Start by deciding whether you need an Orchestration at all.
Need | Use
One question or one analysis pass | Chat
A complex one-time task that should be planned first | Plan Mode
A reusable instruction you paste into chat | Saved prompt
A document deliverable | CoDraft
A spreadsheet, tracker, or analysis table | CoSheet
A shared source set | Workspace
A repeatable workflow with inputs, steps, tools, tests, and review points | Orchestration
Good Orchestration candidates have three traits:
  • The same workflow repeats for many projects, batches, documents, studies, products, or investigations.
  • Users can provide a consistent set of inputs.
  • The output can be reviewed against expectations.
Poor Orchestration candidates are usually broad, exploratory, or dependent on hidden context. Start those in chat, then convert the useful pattern into an Orchestration later.

Define the workflow before you build

Before creating agents or tasks, write the workflow in plain language. Use this structure:
Workflow name:
Who runs it:
When they run it:
Required inputs:
Optional inputs:
Source priority:
Steps:
Human review points:
Output:
Stop conditions:
What Sofie should not infer:
Example:
Workflow name: CAPA effectiveness check
Who runs it: QA owner or process SME
When they run it: After the observation window closes
Required inputs: CAPA plan, effectiveness criteria, observation window, evidence files, metrics CoSheet, Workspace
Optional inputs: related deviations, prior CAPAs, meeting notes
Source priority: CAPA plan and effectiveness criteria outweigh prior discussion
Steps: collect criteria, review evidence, analyze metrics, identify gaps, draft review table, pause for human review, draft final CoDraft section
Human review points: before effectiveness conclusion and before CoDraft creation
Output: CoDraft report section plus evidence table
Stop conditions: missing criteria, missing evidence, conflicting source dates, unclear metric ownership
What Sofie should not infer: acceptance criteria, thresholds, owner decisions, or final disposition
This blueprint makes the work in the editor easier. It also gives Sofie a better prompt if you build the Orchestration from chat.
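The stop conditions in a blueprint like this act as a gate the run must pass before it proceeds. A minimal sketch of that gating logic, in Python, may help make it concrete. This is illustrative only: Sofie does not expose a code API, and the input names are simply copied from the example blueprint above.

```python
# Illustrative only: Sofie has no Python API. This sketches the gate
# implied by the blueprint's required inputs and stop conditions.
REQUIRED_INPUTS = ["CAPA plan", "effectiveness criteria",
                   "observation window", "evidence files", "metrics CoSheet"]

def stop_reasons(run_inputs: dict) -> list:
    """Return the reasons a run should pause instead of proceeding."""
    return [f"missing required input: {name}"
            for name in REQUIRED_INPUTS
            if not run_inputs.get(name)]

reasons = stop_reasons({"CAPA plan": "CAPA-042 plan",
                        "evidence files": ["evidence-1.pdf"]})
print(reasons)
# A non-empty list means the workflow should stop and ask, not infer.
```

The point is the shape of the rule, not the code: missing material produces an explicit pause with a named reason, never a silent assumption.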

Understand the main settings

The Orchestration editor includes settings that change how a run behaves. Set these before you tune individual agents and tasks.
Setting | What it controls | Use it intelligently
Process Type | Whether tasks run as a Sequential process or a Hierarchical process. | Use Sequential when task order is fixed. Use Hierarchical when a manager-style planning step should coordinate agents and tasks.
Enable Planning | Whether a hierarchical Orchestration creates a plan before work is delegated. | Turn it on for complex work with branching, source gathering, or multiple review points. It is not available in sequential mode.
Short-Term Memory (Within Run) | Whether tasks can save and retrieve useful information from other tasks in the same run. | Use it when later tasks need findings from earlier tasks but should not receive a giant copied transcript.
Long-Term Memory (Cross-Run Learning) | Whether agents can save general lessons from prior runs and use them in future runs. | Use it for stable workflow lessons, recurring reviewer preferences, and repeated process patterns. Do not use it as the source of record for project facts.
Max Requests Per Minute | The run’s request pacing. | Lower it when runs hit service limits or use many source/tool steps. Avoid increasing it unless your environment supports the load.
Require Citations | Whether responses should include citations. | Use it for source-backed research, evidence review, and quality workflows. Still review the cited sources yourself.
Allow Parallel Task Execution | Whether independent tasks may run at the same time. | Use it for independent source reviews or analyses. Turn it off when every task depends on the exact output of the previous task.
If you are not sure, start with hierarchical planning on, citations on for source-backed work, parallel execution on only for independent tasks, and memory off until you know what should be learned.

Design around inputs

Inputs are the contract between the person running the Orchestration and the workflow. Good inputs are specific:
Weak input | Better input
Files | Deviation evidence files
Info | Deviation description
Data | Effectiveness metrics CoSheet
Document | CAPA plan CoDraft
Workspace | Investigation Workspace
Date | Observation window end date
For each input, make the user-facing Label and Description do real work. Tell users what to provide, when to use the input, and what happens if it is missing. Use required inputs for material that the workflow cannot evaluate without. Use optional inputs for helpful context that should not block the run.
Input types can include Text, Number, Yes/No, Dropdown, Date, Workspace, File, Orchestration, CoMeeting, CoDraft, and CoSheet. Use Single when the workflow needs one value. Use Multiple when the user may provide several files, documents, meetings, or sheets. Use dropdown inputs when you want users to choose from a controlled list, such as Review focus, Product type, Investigation phase, or Output destination.
Reference inputs inside agent and task instructions with curly braces, such as {Investigation Workspace} or {CAPA plan CoDraft}. This keeps the workflow reusable because the instruction points to the run’s selected input instead of today’s example file.
If users routinely ask, “What should I upload here?”, the input description is not specific enough.
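Conceptually, those curly-brace references resolve against the run's selected inputs the way template placeholders do. A sketch of the idea follows; the resolution code and artifact names are hypothetical, since Sofie handles this internally.

```python
# Hypothetical sketch of how {Input Label} references point at the
# run's selected inputs rather than a hard-coded file.
instruction = ("Search {Investigation Workspace} for evidence supporting "
               "each criterion in {CAPA plan CoDraft}.")

def resolve(instruction: str, selected: dict) -> str:
    """Substitute each {Label} with the artifact chosen for this run."""
    for label, artifact in selected.items():
        instruction = instruction.replace("{" + label + "}", artifact)
    return instruction

print(resolve(instruction, {
    "Investigation Workspace": "DEV-2031 Workspace",
    "CAPA plan CoDraft": "CAPA-042 Plan",
}))
```

Because the instruction names the input label rather than a specific artifact, the same wording works for every run.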

Set source rules

Most weak Orchestration runs come from vague source handling. Tell the workflow what sources matter and what to do when they conflict. Include source rules such as:
  • Use the selected Workspace as the project source set.
  • Use Workspace search for project files before using older chat assumptions.
  • Treat attached files as the current run’s evidence.
  • Use CoDraft templates for structure, not as factual evidence unless the user says so.
  • Use CoMeeting transcripts as discussion context, not final source decisions.
  • If sources conflict, list the conflict instead of resolving it silently.
  • If a required source is missing, pause and ask for it.
Good source instruction:
Use the CAPA plan and effectiveness criteria as the primary sources. Use the metrics CoSheet for observed results. Use meeting notes only to identify open questions. If the sources conflict, list the conflict and pause before drafting a conclusion.
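The rule "list the conflict instead of resolving it silently" can be pictured as a comparison across sources. A small sketch, with hypothetical source names and values:

```python
# Illustrative sketch: surface disagreements between sources rather
# than silently picking a winner. Sources and values are examples.
reported = {
    "CAPA plan":       {"observation window end": "2026-03-31"},
    "metrics CoSheet": {"observation window end": "2026-03-31"},
    "meeting notes":   {"observation window end": "2026-04-15"},
}

def conflicts(reported: dict) -> dict:
    """Return each key whose value differs across sources."""
    keys = {k for values in reported.values() for k in values}
    found = {}
    for key in keys:
        by_source = {src: vals[key]
                     for src, vals in reported.items() if key in vals}
        if len(set(by_source.values())) > 1:
            found[key] = by_source   # list it; a human resolves it
    return found

print(conflicts(reported))
```

A run that reports the disagreement and pauses is reviewable; a run that quietly chooses one date is not.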

Use agents only when roles are different

Agents are useful when the workflow needs different responsibilities. They are not useful when they only split one task into more names. Good agent roles:
  • Evidence reviewer.
  • Data analyst.
  • Source gap reviewer.
  • Report drafter.
  • Quality reviewer.
  • Researcher.
Avoid agents that overlap heavily:
  • Expert 1, Expert 2, Expert 3.
  • Reviewer and Checker with the same instructions.
  • A general Quality expert expected to do every step.
Use one agent when the work is simple. Add more agents when you need different source handling, different outputs, or an independent review step.

Use agent memory carefully

Agent memory is useful, but it can also make a workflow harder to reason about if you use it for the wrong information. Use Short-Term Memory (Within Run) when tasks in the same run need to pass forward useful discoveries:
  • The evidence reviewer finds a missing batch record page.
  • The data analyst identifies outlier lots that the drafter must mention.
  • The researcher finds a source conflict that the reviewer must resolve.
Use Long-Term Memory (Cross-Run Learning) when the agent should learn general patterns across runs:
  • A QA reviewer repeatedly asks for facts, assumptions, gaps, and SME questions to be separated.
  • A report drafter learns the preferred structure for a recurring CoDraft template.
  • A data analyst learns that a specific metric should be explained with the same caveat.
Do not use long-term memory for project facts, batch-specific findings, source conclusions, acceptance criteria, or final decisions. Put those in the current inputs, Workspace, CoDraft, CoSheet, or reviewed source material. You can inspect agent memories from the agent’s Long-Term Memories tab. Review and delete stale memories when they no longer reflect how the workflow should behave.
Long-term memory can influence future runs. Keep it for reusable workflow lessons, not regulated source facts or one-time project decisions.

Make each task reviewable

A task should produce something a person can inspect before trusting the next step.
Weak task:
Analyze the investigation and write the report.
Better task sequence:
Extract confirmed facts from the deviation description, batch record, and evidence files. Return a table with source, fact, date, and uncertainty.
Identify source gaps and SME questions. Do not propose root cause.
Draft the investigation report background section from the confirmed facts table. Include placeholders where evidence is missing.
Good task outputs are:
  • Specific.
  • Source-aware.
  • Limited to one kind of work.
  • Easy to compare against the expected output.
  • Clear about what Sofie should not decide.
Use Plan Text when a task needs to expose or follow a specific plan. This helps reviewers understand why the task is doing the work in that order.
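Those output qualities can be checked mechanically. Here is a sketch of a reviewability check on a facts table; the column names come from the example task above, while the checker itself and the decision-term list are hypothetical.

```python
# Illustrative check on a facts-table output: each row carries its
# provenance, and the task has not slipped in a decision it should
# leave to a person. Column and term lists are example choices.
REQUIRED_COLUMNS = {"source", "fact", "date", "uncertainty"}
DECISION_TERMS = {"root cause", "disposition"}

def review_issues(rows: list) -> list:
    issues = []
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            issues.append(f"row {i}: missing columns {sorted(missing)}")
        text = " ".join(str(v).lower() for v in row.values())
        issues += [f"row {i}: decision language '{t}'"
                   for t in DECISION_TERMS if t in text]
    return issues

row = {"source": "Batch record p.4", "fact": "Mixer stopped at 14:02",
       "date": "2026-01-12", "uncertainty": "low"}
print(review_issues([row]))   # []
```

An empty issues list does not mean the content is correct; it only means the output has the shape a reviewer can actually check.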

Choose output modes deliberately

Use the output mode that matches how the result will be reviewed or reused.
Output mode | Use it when | Example
Text Output | A person will read a narrative, list, or table in the run result. | Investigation summary, source gaps, review notes.
Structured Output | Later tasks or tests need named fields and repeatable sections. | Findings array, metric assessment fields, risk table rows.
Fill Template | The workflow should populate a CoDraft template. | Validation protocol section, CAPA report, URS draft.
Use Structured Output when the answer must be checked field by field or handed to another agent. Use Fill Template when you already know the document structure and want the Orchestration to fill defined placeholders. For detailed schema patterns, see Structured Outputs.
Structured outputs can include fields such as text, numbers, yes/no values, dropdown choices, images, charts, objects, and lists. For each field, write a clear name, description, formatting guidance, and whether it is required or may appear multiple times.
When you use Fill Template, also decide whether the output should stay in the run result or save to a Workspace. Save to a Workspace only when the destination is clear and the user has reviewed the workflow behavior.
Do not ask for a final conclusion as the first output. Build intermediate outputs that show evidence, gaps, assumptions, and questions before any final draft.
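One way to picture a structured output is as a small schema of named, required fields. The sketch below uses hypothetical field names; in Sofie you define these in the editor, not in code.

```python
# Hypothetical schema sketch for a findings-style structured output.
schema = {
    "findings": {"type": "list", "required": True},    # may appear many times
    "overall_assessment": {"type": "text", "required": True},
    "open_questions": {"type": "list", "required": False},
}

def missing_required(output: dict, schema: dict) -> list:
    """Name the required fields a task output failed to produce."""
    return [field for field, spec in schema.items()
            if spec["required"] and field not in output]

print(missing_required({"findings": []}, schema))
# ['overall_assessment']  -> a test or a later task can fail fast
```

This is why structured output is easier to test than free text: a missing field is detectable by name instead of by reading.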

Add tools with intent

Tools let an Orchestration do work beyond plain text generation. Add tools because a task needs a capability, not because the workflow might use it someday. Use tools when the workflow needs to:
  • Search selected Workspace content.
  • Read or create CoDraft content.
  • Analyze CoSheet data.
  • Use CoMeeting context.
  • Request human input.
  • Create or update an artifact.
  • Use connected app context when your organization enables it.
Fewer tools usually make the run easier to understand. If a task should only review a source table, do not give it artifact-creation responsibilities. If a task should only draft a CoDraft section, do not make it search broadly unless it needs new sources.

Put human review before risk

Use Require Human Review when the workflow should pause before continuing. Add review before:
  • Drawing a root cause conclusion.
  • Recommending CAPA effectiveness.
  • Interpreting conflicting evidence.
  • Creating a final CoDraft.
  • Saving output to a shared Workspace.
  • Sending or changing content in a connected app.
  • Continuing after missing required sources.
Human review is a workflow control. It helps the person running the Orchestration check the intermediate result, answer questions, and decide whether the run should continue.
A human review step in Sofie does not replace your organization’s required review process. Use it to control the workflow and prepare work for the right reviewers.

Build small, then expand

The first version should be the smallest useful Orchestration.
1. Create the core path: Build the required inputs, one or two agents, and the smallest task sequence that creates a useful output.
2. Run it with realistic inputs: Use a real Workspace, representative files, and a source set similar to what users will provide later.
3. Inspect the first failure: Look for vague inputs, missing source rules, unsupported conclusions, weak output shape, or late human review.
4. Fix one layer at a time: Improve inputs first, then task instructions, then tools, then output shape, then tests.
5. Save a version: Use Save Current Version before major changes so you can restore a known-good state.
Do not start by building the full future workflow. Start with the part users can run and review today.

Test like a user will run it

Use tests to check whether the workflow still behaves the way you expect. In the editor, use Create Test to define test inputs and expected task outputs. Use Run Test or Run All Tests after changes. If your workspace shows Drift Detected, review the drift before running the test again. Test cases should cover:
  • A normal source set.
  • Missing optional inputs.
  • Missing required evidence.
  • Conflicting sources.
  • A larger Workspace with irrelevant files.
  • A CoSheet with unexpected blank values.
  • A template output with required placeholders.
  • A run that should pause for human review.
Good test expectation:
The evidence review task returns a table with one row per finding. Each row includes source, observation, source date, uncertainty, and follow-up question. The task must not include a final disposition.
Use Validate Against Past Run when you want to compare a test against prior run behavior. Use Fix Drift Issues when the Orchestration changed enough that the saved test no longer matches the current workflow. For a full testing workflow, including saved-run baselines, validation methods, structured output checks, run history, drift, parallel execution, and long-term Memory behavior, see Test Orchestrations. Choose validation strategies based on the output:
Validation strategy | Use it for
Exact | Stable labels, fixed statuses, and values that should not vary.
Semantic | Wording that may vary while meaning should stay the same.
Contains | Required phrases, source names, warnings, or section headings.
Regex | IDs, dates, codes, or formats that follow a pattern.
Length | Outputs that should stay within a reviewable size.
AI Judge | Qualitative checks such as accuracy, completeness, tone, missing support, or custom criteria.
Treat a failed test as workflow feedback. The fix might be a clearer input label, a narrower task, a stronger source rule, or an earlier review point.
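Four of these strategies reduce to simple string checks, which is why they are cheap to run after every edit. A sketch of Exact, Contains, Regex, and Length as plain functions (Semantic and AI Judge need a model call and are omitted; how Sofie evaluates any of these internally is not shown here):

```python
import re

# Illustrative versions of four validation strategies as plain checks.

def exact(output: str, expected: str) -> bool:
    return output.strip() == expected.strip()

def contains(output: str, phrases: list) -> bool:
    return all(p in output for p in phrases)

def matches(output: str, pattern: str) -> bool:
    return re.search(pattern, output) is not None

def within_length(output: str, max_chars: int) -> bool:
    return len(output) <= max_chars

summary = ("CAPA-042 met criteria C1 and C2 in the window "
           "2026-01-01 to 2026-03-31.")
print(contains(summary, ["CAPA-042", "C1"]))     # True
print(matches(summary, r"\d{4}-\d{2}-\d{2}"))    # True
print(within_length(summary, 500))               # True
```

Pick the cheapest strategy that would actually catch the failure you care about; save AI Judge for qualities the string checks cannot see.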

Publish only after users can run it

Keep an Orchestration private or shared with a small group while it is changing. Publish to the organization when:
  • The name and description tell users when to run it.
  • Required inputs are clear.
  • Optional inputs are helpful but not confusing.
  • Source priority is explicit.
  • Each task has a reviewable output.
  • Human review happens before key decisions or shared outputs.
  • Tests cover realistic inputs.
  • A person other than the builder can run it without hidden context.
When you use Publish to Organization, organization members can view and run the Orchestration. Only owners and editors can modify it. If the workflow is no longer ready for broad use, use Unpublish and continue working with explicit collaborators.

Share editing carefully

Use sharing to bring in people who can improve or review the workflow. Suggested roles:
Collaborator | Useful contribution
Process owner | Confirms when the workflow should run and what outputs matter.
SME | Checks source interpretation and missing evidence handling.
QA reviewer | Reviews assumptions, review points, and output structure.
Data owner | Checks CoSheet inputs and metric interpretation.
Template owner | Confirms CoDraft template fields and placeholder meaning.
Give edit access to people who should change the workflow. Give view or run access to people who only need to use it.

Run Orchestrations from chat intelligently

When you run an Orchestration from chat, Sofie starts by asking for required information if it is missing. Good run request:
Run the CAPA effectiveness check Orchestration. Use the selected Workspace, the CAPA plan CoDraft, the metrics CoSheet, and the observation window from January 1, 2026 through March 31, 2026. Ask me before drafting the final conclusion.
Weak run request:
Run the CAPA one.
Before running, check:
  • The Orchestration name is the one you intend to use.
  • The Workspace and artifacts are the correct project.
  • Attached files are current.
  • Dates and options are unambiguous.
  • Sofie should pause before any conclusion or artifact creation that needs review.
After the run completes, review the Artifacts section. It shows artifacts created or modified during the run when Sofie can track them. Open each artifact from the run result and confirm the right item was created, updated, or referenced. Ask Sofie:
Review the artifacts from this Orchestration run. For each artifact, tell me whether it was created or modified, which task touched it, and what I should inspect before using it.

Maintain Orchestrations over time

An Orchestration can become stale when source patterns, templates, team expectations, or review needs change. Review important Orchestrations when:
  • A template changes.
  • The expected output changes.
  • A new source type becomes common.
  • Users report confusing inputs.
  • Tests start failing.
  • Drift Detected appears.
  • A workflow produces unsupported or hard-to-review output.
  • A published workflow should be limited again.
Use Revision History to inspect prior versions. Use Save Current Version before major edits. Use Restore Revision when a change made the workflow worse.

Common problems and fixes

Problem | Likely cause | Fix
Users provide the wrong files | Input labels are too vague | Rename inputs and add specific descriptions.
Output mixes facts and assumptions | Task instructions do not separate them | Require separate sections for facts, assumptions, gaps, and questions.
Sofie reaches a conclusion too early | Review point is too late or missing | Add Require Human Review before conclusion tasks.
Run searches unrelated material | Source rules are too broad | Limit the Workspace, file set, or search scope.
Output is hard to test | Output shape is loose | Use tables or Structured Output fields.
Run created or modified the wrong artifact | Output destination or artifact instruction is too loose | Tighten the task instruction, require a destination input, or add human review before artifact changes.
Workflow is too slow or noisy | Too many agents or tools | Remove overlapping agents and unused tools.
Published workflow confuses users | It assumes builder knowledge | Add a clearer description, required inputs, and example run guidance.
Tests fail after edits | Workflow changed but test stayed old | Update the test or use drift repair after confirming the new behavior.

Useful Orchestration prompts

Use these in chat before or after editing an Orchestration.
Design an Orchestration for this workflow before creating it. Return the workflow trigger, required inputs, optional inputs, source priority, agents, tasks, tools, output mode, human review points, tests, and stop conditions. Ask questions where the workflow is ambiguous.
Review this Orchestration design. Identify vague inputs, overlapping agents, tasks that are too broad, missing source rules, missing human review points, and outputs that should be structured.
Create a test plan for this Orchestration. Include normal inputs, missing-input cases, conflicting-source cases, large Workspace cases, and expected outputs for each task.
Review this Orchestration run. List where the run used the wrong context, skipped a source check, produced unsupported output, needed human review earlier, or created an output that was hard to review. Suggest exact edits to inputs, tasks, tools, and output mode.
Check whether this Orchestration is ready to publish. Review the name, description, inputs, source rules, tasks, output modes, human review points, tests, and whether a new user could run it without hidden context.

Life sciences patterns

Deviation investigation

Use the Orchestration to separate evidence collection from interpretation. Recommended structure:
  • Required inputs: deviation description, investigation Workspace, evidence files, batch or equipment references.
  • Early task: confirmed facts and timeline.
  • Middle task: source gaps and SME questions.
  • Review point: before root cause analysis.
  • Output: CoDraft investigation section with placeholders for unresolved evidence.

CAPA effectiveness check

Use the Orchestration to make evidence and criteria visible before drafting. Recommended structure:
  • Required inputs: CAPA plan, effectiveness criteria, observation window, metrics CoSheet, evidence files.
  • Early task: criteria extraction.
  • Middle task: metric review and evidence table.
  • Review point: before conclusion.
  • Output: CoDraft section plus evidence table.

Validation protocol generation

Use the Orchestration to move from source comparison to controlled drafting. Recommended structure:
  • Required inputs: URS, risk assessment, equipment or process description, protocol template, Workspace.
  • Early task: source map and gaps.
  • Middle task: protocol outline and test section plan.
  • Review point: before acceptance criteria or final section generation.
  • Output: filled CoDraft template with placeholders for SME confirmation.

Batch record review

Use the Orchestration to keep exceptions, source references, and follow-up questions traceable. Recommended structure:
  • Required inputs: batch record file, product, batch number, review focus, Workspace.
  • Early task: section inventory and missing pages.
  • Middle task: exception table.
  • Review point: before disposition language.
  • Output: CoSheet tracker or CoDraft review summary.

The standard to aim for

A strong Orchestration can be explained in one sentence:
When a user provides these inputs, Sofie follows these steps, uses these sources, pauses at these review points, and produces this reviewable output.
If you cannot fill in that sentence, keep designing in chat or Plan Mode before publishing the workflow.