Use the Orchestration editor when you need detailed control over a reusable workflow. You can start from scratch, use the Library, upload an Orchestration file when available, or ask Sofie to draft the design from chat. For design judgment before editing, see Use Orchestrations intelligently.

Create or open an Orchestration

1. Open Orchestrate: Click Orchestrate in the sidebar.
2. Choose a starting point: Click Create, start from the Library, or open an existing Orchestration.
3. Name the workflow: Use a name that describes the repeatable job, such as Deviation investigation report draft.
4. Add a description: Describe who should run it, what inputs it expects, and what it creates.
5. Keep it in draft: Build and test as a draft until the workflow behaves consistently with realistic inputs.

Configure settings

Use the Settings tab to control how the Orchestration runs.
  • Name: A short workflow name users can recognize in chat and Orchestrate.
  • Description: Who should run it, when to run it, required sources, and expected output.
  • Process Type: Sequential for a fixed task order. Hierarchical when planning and delegation should guide the run.
  • Max Requests Per Minute: The request pace for the run. Lower it when the workflow uses many source or tool steps.
  • Enable Planning: Available in hierarchical mode. Use it when Sofie should plan before delegating tasks.
  • Short-Term Memory (Within Run): Let tasks save and retrieve information from other tasks in the same run.
  • Long-Term Memory (Cross-Run Learning): Let agents save general learnings across runs.
  • Require Citations: Ask tasks to cite sources where citations are available.
  • Allow Parallel Task Execution: Let independent tasks run at the same time. Tasks with dependencies still run in order.
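The dependency rule behind parallel execution can be sketched with Python's standard library: independent tasks become ready together, while dependent tasks wait. The task names and dependency map below are hypothetical, purely to illustrate the scheduling behavior; Sofie's actual engine is not exposed here.

```python
from graphlib import TopologicalSorter

# Hypothetical task dependency map: task -> tasks it depends on.
deps = {
    "collect_evidence": set(),
    "summarize_deviation": set(),
    "review_evidence": {"collect_evidence"},
    "draft_report": {"summarize_deviation", "review_evidence"},
}

ts = TopologicalSorter(deps)
ts.prepare()

batches = []
while ts.is_active():
    ready = list(ts.get_ready())  # independent tasks: eligible to run in parallel
    batches.append(sorted(ready))
    ts.done(*ready)

# The two tasks with no dependencies form the first parallel batch;
# draft_report runs last because it waits on both of its prerequisites.
print(batches)
```

With parallel execution disabled, the same dependency graph simply runs one task at a time in a valid order.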
For strategy guidance on these settings, see Use Orchestrations intelligently. For testing guidance on parallel execution, long-term memory, validation methods, and run history, see Test Orchestrations.

Define inputs

Inputs are the information users provide when they run the Orchestration. Input fields can describe artifacts, files, Workspace context, text values, dates, options, numbers, yes/no values, JSON, or tables. Depending on the workflow, you can allow a single value or multiple values. For each input, write:
  • Key: A short identifier used by the Orchestration.
  • Label: The user-facing name.
  • Description: What the user should provide and why.
  • Type: The kind of input, such as Text, Number, Yes/No, Dropdown, Date, Workspace, File, Orchestration, CoMeeting, CoDraft, or CoSheet.
  • Multiplicity: Use Single for one value or Multiple for several files, artifacts, or source items.
  • Required Mode: Use Required when the workflow cannot run without it. Use Optional when the input improves the result but should not block the run.
  • Default Value: A fallback for optional inputs.
  • Dropdown options: The allowed values for Dropdown inputs.
Good input labels:
  • Investigation Workspace
  • Deviation description
  • Evidence files
  • CAPA plan
  • Effectiveness criteria
  • Observation window
  • SME interview CoMeeting
  • Metrics CoSheet
Define inputs before writing agents and tasks. You can insert inputs into task instructions so the workflow references the user’s exact sources and choices.
Reference inputs in agent goals, backstories, task descriptions, plan text, and expected outputs with curly braces, such as {Investigation Workspace} or {Metrics CoSheet}.
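The curly-brace references above behave like placeholders that resolve to the user's values at run time. The substitution mechanics below are an assumption for illustration only, and the input labels and values are hypothetical; Sofie's actual resolution engine may differ.

```python
import re

# Hypothetical user-provided input values, keyed by input label.
inputs = {
    "Investigation Workspace": "QA Deviations 2024",
    "Metrics CoSheet": "Batch 1042 metrics",
}

task_description = (
    "Review the evidence in {Investigation Workspace} and cross-check "
    "figures against {Metrics CoSheet}."
)

def resolve(text, values):
    # Replace each {Label} placeholder with the matching user value.
    return re.sub(r"\{([^}]+)\}", lambda m: values[m.group(1)], text)

print(resolve(task_description, inputs))
```

Because the placeholder names must match the input labels exactly, defining inputs before writing tasks keeps the references from silently going unresolved.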

Add agents

Agents are roles inside the Orchestration. Each agent should have a focused job. Agent configuration can include:
  • Role: The persona or responsibility, such as investigator, reviewer, drafter, or data analyst.
  • Goal: The outcome the agent is trying to produce.
  • Backstory: Context that helps the agent interpret the work.
  • Model: The AI model used by that agent when your organization offers choices.
  • Max Steps: The amount of iterative work the agent can perform before stopping.
  • Long-Term Memories: Agent-specific memories learned across runs when long-term memory is enabled.
Keep agents specific. A Deviation evidence reviewer is easier to evaluate than a general Quality expert. Open an agent’s Long-Term Memories tab to inspect learned memories for that agent. Use long-term memory for general workflow lessons, not current-run source facts or final decisions.

Add tasks

Tasks are the ordered work an agent performs. For each task, define:
  • Name
  • Description
  • Output Mode
  • Expected Output Description for text output
  • Plan Text when the task should explain or follow a plan
  • Require Human Review when a person must confirm the result before the workflow continues
Write task descriptions as instructions, not goals. Weak task:
Analyze the files.
Better task:
Review the evidence files for the deviation. Extract observed event details, immediate actions, affected batch or equipment identifiers, missing evidence, and direct source references. Do not conclude root cause.

Choose an output mode

Tasks can produce different kinds of output.
  • Text Output: The task should return prose, a list, or a table in the run result.
  • Structured Output: Downstream tasks need named fields, typed values, or repeatable sections.
  • Fill Template: The result should populate a CoDraft template or defined deliverable structure.
For Structured Output, click Define Output and define fields with clear names, types, descriptions, options, and formatting guidance. For deeper schema design guidance, see Structured Outputs. Structured output fields can include:
  • Field name.
  • Type.
  • Description.
  • Dropdown options when the type is Dropdown.
  • Image prompt template when the type is Image.
  • Chart type when the type is chart-oriented.
  • Formatting guidance.
  • Required setting.
  • Allow multiple setting.
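Expressed as plain data, a structured output definition might capture the field properties listed above. The field names, values, and serialization below are hypothetical illustrations, not Sofie's actual format.

```python
# Hypothetical structured-output field definitions for a deviation workflow.
fields = [
    {
        "name": "root_cause_candidates",
        "type": "Text",
        "description": "Possible root causes, each with supporting evidence.",
        "formatting": "One candidate per item; cite the source document.",
        "required": True,
        "allow_multiple": True,
    },
    {
        "name": "severity",
        "type": "Dropdown",
        "description": "Assessed severity of the deviation.",
        "options": ["Minor", "Major", "Critical"],
        "required": True,
        "allow_multiple": False,
    },
]

# Downstream tasks can then rely on these named, typed values.
for f in fields:
    print(f["name"], f["type"], "required" if f["required"] else "optional")
```

The value of this structure is that a downstream task can reference `severity` or `root_cause_candidates` by name instead of parsing free text.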
For Fill Template, choose the template and output location. Use Run output only (no workspace) while testing. Use Save to workspace only when the destination is clear.
Do not make a task produce a final conclusion unless the input sources support it and the workflow includes a human review point where needed.

Add tools

Tools give an Orchestration capabilities beyond text generation. Available tools depend on your organization and the workflow surface. Use tools for tasks such as:
  • Searching or reading provided context.
  • Creating or updating artifacts.
  • Requesting human input.
  • Working with Workspace content.
  • Producing structured files or documents.
Add only the tools the workflow needs. Too many tool choices can make runs harder to review.

Add human review

Turn on Require Human Review for tasks that need confirmation before the workflow continues. Use human review before:
  • Drawing a root cause conclusion.
  • Recommending CAPA effectiveness.
  • Filling a final report section.
  • Saving output to a shared Workspace.
  • Using ambiguous source material.
  • Continuing after missing or conflicting evidence.
Human review inside Sofie controls the Orchestration flow. It does not replace your organization’s review or approval process outside Sofie.

Use versions

Use version controls when changing a workflow that other people run. Common version actions include:
  • Versions to inspect prior saved versions.
  • Save Current Version before a major edit.
  • Restore Revision when a change should be undone.
  • Promote Draft to Live when the draft is ready for others to run.
  • Discard Draft when you do not want to keep draft changes.
Save a version before changing required inputs, output modes, or review points.

Run a test

Run tests with realistic inputs before publishing.
1. Open the run panel: Start a run from the Orchestration editor or the Orchestrate page.
2. Provide required inputs: Select the Workspace, files, CoDraft, CoMeeting, CoSheet, text, dates, or options the workflow asks for.
3. Watch task progress: Review each task as it runs. Stop the run if the workflow starts using the wrong context.
4. Complete human review steps: Answer questions or confirm task results when the workflow pauses for review.
5. Inspect the output: Check source references, assumptions, missing evidence, formatting, and any artifacts shown in the run result.
6. Refine the draft: Edit inputs, agents, tasks, tools, or output mode based on the test run.
Use this test prompt after a run:
Review this Orchestration run. List unclear inputs, weak task instructions, missing review points, unsupported conclusions, and output fields that should be more structured.

Inspect run artifacts

After a completed run, check the Artifacts section before you decide whether the workflow is ready. The section shows artifacts the run created, modified, opened, reviewed, imported, or exported. It can appear in chat results, the run panel, and run history when artifacts are available. Use it to confirm:
  • The run created the expected artifact type.
  • The title and destination are correct.
  • Existing artifacts were modified only when intended.
  • The task that touched the artifact was the right task.
  • The artifact content matches the run output.
  • Human review happened before important edits or saved outputs.
Good review prompt:
Review the artifacts from this run. Tell me which artifacts were created or modified, whether each one matches the expected output, and what I should inspect before publishing this Orchestration.
If the wrong artifact was created or changed, update the Orchestration before publishing. Usually that means tightening input labels, task instructions, output mode, destination rules, or human review steps. When creating tests, match the validation method to the expected output:
  • Exact: Fixed values that should match exactly.
  • Semantic: Meaning that can be correct with different wording.
  • Contains: Required terms, source names, warnings, or headings.
  • Regex: IDs, dates, batch numbers, or other patterns.
  • Length: Guardrails on output size.
  • AI Judge: Qualitative checks such as accuracy, completeness, tone, missing support, or custom criteria.
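The rule-based validation methods can be sketched in a few lines of Python; Semantic and AI Judge checks need a model, so only the deterministic ones are shown. The sample run output, IDs, and document names are hypothetical.

```python
import re

# Minimal sketches of the deterministic validation methods.
def exact(output, expected):
    return output == expected

def contains(output, required_terms):
    return all(term in output for term in required_terms)

def regex(output, pattern):
    return re.search(pattern, output) is not None

def length(output, min_chars, max_chars):
    return min_chars <= len(output) <= max_chars

# Hypothetical run output to validate.
run_output = "Deviation DEV-2024-0113 affected Batch B-88231. See SOP-QA-012."

print(contains(run_output, ["Batch", "SOP-QA-012"]))  # required terms present
print(regex(run_output, r"DEV-\d{4}-\d{4}"))          # deviation ID pattern
print(length(run_output, 20, 500))                    # size guardrail
```

Regex is the natural fit for the batch numbers and deviation IDs mentioned above, since the identifier format stays fixed even when the surrounding prose varies between runs.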
Use Run All Tests after major edits. Use Validate Against Past Run when you want to compare expected behavior with a prior run. If Drift Detected appears, review the drift before running the test again. For detailed instructions on manual tests, saving successful runs as tests, structured validation, AI Judge criteria, drift repair, run history, and deterministic test design, see Test Orchestrations.

Publish and share

Publish when:
  • The required inputs are clear.
  • The workflow handles missing information.
  • Review points occur before conclusions or saves.
  • The output is specific enough to review.
  • A realistic test run produced useful results.
  • The description tells users when to run it.
Share or publish only with the people who should run or edit the Orchestration. Keep drafts private or limited while the workflow is still changing.

Build with Sofie chat

If you prefer to describe the workflow in plain language first, use Build an Orchestration with Sofie chat. Chat is best for brainstorming the design. The editor is best for precise input, task, output, version, and publishing control.