Use Structured Outputs in Orchestrations to create repeatable fields, cleaner task handoffs, and more reliable testing.
Structured Outputs let a task return named fields instead of only a block of prose. Use them when the next task, a reviewer, a test, a CoDraft, or a CoSheet needs the output to stay consistent across runs.

In an Orchestration, a text output might say the right thing in a different order every time. A Structured Output can return fields such as confirmedFacts, sourceGaps, riskLevel, recommendedAction, and reviewQuestions. That makes the result easier to pass to another agent, easier to validate, and easier to turn into an artifact.
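As a sketch of the idea, a structured result arrives as named fields rather than prose. The field names below follow the hypothetical example above; the actual schema is whatever you define on the task:

```python
# Hypothetical structured output from an evidence-review task.
# Field names (confirmedFacts, riskLevel, ...) mirror the example above;
# the values are illustrative, not a platform-defined format.
result = {
    "confirmedFacts": [
        {"source": "Batch Record B-1042", "observation": "Mixing time was 38 min", "uncertainty": "none"},
    ],
    "sourceGaps": ["Calibration log for Tank 3 not provided"],
    "riskLevel": "medium",
    "recommendedAction": "Request the missing calibration log before drafting",
    "reviewQuestions": ["Was the mixing-time excursion documented as a deviation?"],
}

# A downstream task can read fields directly instead of parsing prose.
assert result["riskLevel"] in {"low", "medium", "high"}
```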
Structured Outputs do not remove the need for review. They make Sofie’s work easier to inspect because the important parts are separated into fields instead of hidden in prose.
Use Structured Output when a task output needs a stable shape. Good candidates:
Evidence tables.
Source gap lists.
Batch record exceptions.
CAPA action assessments.
Quality risk rows.
Validation protocol sections.
URS requirements.
CoSheet-ready data.
CoDraft template inputs.
Intermediate findings that another task must use.
Use Text Output when the task only needs a readable narrative, such as a short explanation, summary, or reviewer note.

Use Fill Template when the task should populate a CoDraft template or another defined deliverable structure.
Orchestrations often depend on one task’s output becoming another task’s input. Prose is flexible, but it is hard for a downstream task to use precisely. Structured Outputs improve handoff because each field has a clear purpose:
| Handoff need | Structured Output pattern |
| --- | --- |
| A reviewer task needs confirmed facts | A confirmedFacts list with source, observation, and uncertainty fields. |
| A drafter task needs source gaps | A sourceGaps list with missing item, impact, and owner question fields. |
| A data task needs metric results | A metricResults list with metric name, value, unit, trend, and interpretation fields. |
| A final task needs decision boundaries | A stopConditions list and a readyForConclusion true/false field. |
| A CoDraft task needs sections | Fields that map to document sections or template placeholders. |
This is especially important when Allow Parallel Task Execution is enabled. Independent tasks can run at the same time, but the synthesis task should receive consistent outputs from each upstream task.
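A minimal sketch of why a stable shape matters at the synthesis step. The task names, field names, and merge logic here are assumptions for illustration, not platform behavior:

```python
# Hypothetical outputs from two upstream tasks that ran in parallel.
# Because both use the same schema, the synthesis task can merge them
# field by field without parsing prose.
evidence_task = {"confirmedFacts": [{"source": "SOP-12", "observation": "Limit is 35 min"}],
                 "sourceGaps": []}
data_task = {"confirmedFacts": [{"source": "LIMS export", "observation": "Mean was 33 min"}],
             "sourceGaps": ["Missing Q3 trend data"]}

def synthesize(*task_outputs):
    """Merge upstream outputs field by field."""
    facts, gaps = [], []
    for out in task_outputs:
        facts.extend(out["confirmedFacts"])
        gaps.extend(out["sourceGaps"])
    # Open gaps block the conclusion.
    return {"confirmedFacts": facts, "sourceGaps": gaps, "readyForConclusion": not gaps}

merged = synthesize(evidence_task, data_task)
```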
If a later task needs to quote, sort, filter, count, test, or reuse a previous task’s output, use Structured Output.
| Field type | Use it for | Example fields |
| --- | --- | --- |
| Text | Narrative values, source notes, facts, questions, rationales, or section drafts. | rootCauseEvidence, summary, openQuestion |
| Number | Counts, measured values, scores, percentages, limits, or numeric results. | defectCount, failureRate, sampleSize |
| True/False | Binary answers or gates. | sourceConflictFound, readyForConclusion |
| Dropdown | Controlled categories where the value should come from a defined list. | riskLevel: low, medium, high |
| Group | A nested object with multiple related fields. | ownerReview with reviewer, question, and due date fields |
| Image | An image result or generated visual reference. | processDiagram |
| Chart | Chart output from structured data. | trendChart |
Use Allow Multiple when the field should return a list, such as multiple findings, sources, questions, metrics, or actions.

Use Required when the field must be present for the output to be useful. Leave it optional only when the task can legitimately have no value for that field.
Field descriptions guide Sofie and help reviewers understand the output.

Weak description:
Findings.
Better description:
Each confirmed finding from the selected sources. Include only facts directly supported by the provided evidence. Do not include assumptions or conclusions.
Good field descriptions explain:
What belongs in the field.
What does not belong in the field.
Which sources matter.
How uncertainty should be represented.
Whether the field is for review, drafting, testing, or handoff.
Use Group when one item has multiple subfields. Example sourceGap group:
| Nested field | Type | Description |
| --- | --- | --- |
| missingItem | Text | The missing source, page, record, data field, or decision. |
| whyItMatters | Text | Why the missing item matters for the workflow. |
| ownerQuestion | Text | The question to ask the responsible SME or reviewer. |
| blocksConclusion | True/False | Whether the gap should stop a conclusion or final draft. |
Then set Allow Multiple on the sourceGaps field so Sofie can return a list of source gaps. This pattern is better than one long sourceGaps paragraph because tests can validate each item and downstream tasks can use the list directly.
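The nested fields above can be checked item by item. A minimal sketch of such a check in Python; the validation helper is an assumption for illustration, not a built-in of the platform:

```python
# Nested fields from the sourceGap group above.
REQUIRED_KEYS = {"missingItem", "whyItMatters", "ownerQuestion", "blocksConclusion"}

def validate_source_gaps(source_gaps):
    """Check each sourceGap item against the nested schema above."""
    for i, gap in enumerate(source_gaps):
        missing = REQUIRED_KEYS - gap.keys()
        if missing:
            raise ValueError(f"sourceGaps[{i}] missing fields: {sorted(missing)}")
        if not isinstance(gap["blocksConclusion"], bool):
            raise ValueError(f"sourceGaps[{i}].blocksConclusion must be true/false")
    return True

gaps = [{
    "missingItem": "Tank 3 calibration log",
    "whyItMatters": "Needed to confirm the temperature readings used as evidence",
    "ownerQuestion": "Can QA provide the 2024 calibration record for Tank 3?",
    "blocksConclusion": True,
}]
```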
Use Dropdown when Sofie should choose from known values instead of inventing labels. Good dropdowns:
| Field | Options |
| --- | --- |
| riskLevel | low, medium, high |
| findingStatus | confirmed, potential, unsupported, needs review |
| sourceConfidence | high, medium, low |
| reviewOutcome | continue, pause, ask for input |
| artifactDestination | run output only, CoDraft, CoSheet |
Dropdowns are useful for testing because Exact Match can validate the selected option. Avoid dropdowns when the values are not stable. If reviewers often need a new category, use Text plus a clear description instead.
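An Exact Match check against a controlled list can be sketched in a few lines; the helper name is hypothetical:

```python
# Controlled options from the riskLevel dropdown above.
RISK_LEVELS = {"low", "medium", "high"}

def check_risk_level(output):
    """Exact Match style check: the value must be one of the defined options."""
    return output.get("riskLevel") in RISK_LEVELS

assert check_risk_level({"riskLevel": "medium"})
assert not check_risk_level({"riskLevel": "moderate"})  # an invented label fails
```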
Set Allow Multiple when the task should return more than one item. Use lists for:
Findings.
Exceptions.
Requirements.
Risks.
CAPA actions.
Metrics.
Sources.
Review questions.
Draft sections.
For a list of simple values, such as source names, use a Text field with Allow Multiple. For a list of complex values, use a Group field with Allow Multiple and define nested fields. An example is a batchRecordExceptions list.
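As a hedged sketch, a batchRecordExceptions Group field with Allow Multiple might carry nested fields like these. All field names and values here are hypothetical illustrations, not a prescribed schema:

```python
# Hypothetical batchRecordExceptions list: a Group field with Allow Multiple,
# so each item carries the same nested fields.
batch_record_exceptions = [
    {
        "batchId": "B-2024-118",                              # Text
        "step": "Granulation",                                # Text
        "exception": "Mixing time exceeded the 35 min limit by 3 min",  # Text
        "severity": "medium",                                 # Dropdown: low, medium, high
        "requiresDeviation": True,                            # True/False gate
    },
]
```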
Structured Outputs are one of the best ways to make Orchestration tests useful. Text tests often fail because phrasing changes. Structured tests can check fields:
A required field exists.
A dropdown value matches a controlled option.
A number is within a range.
A list has at least one item.
A list item contains a required source.
A field does not contain unsupported conclusion language.
An array matches by identity fields instead of row order.
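The checks above can be expressed as plain assertions against the returned fields. A sketch assuming the hypothetical schema used earlier in this article:

```python
def check_output(output):
    """Field-by-field checks mirroring the list above (hypothetical schema)."""
    # A required field exists.
    assert "riskLevel" in output
    # A dropdown value matches a controlled option.
    assert output["riskLevel"] in {"low", "medium", "high"}
    # A number is within a range.
    assert 0 <= output["failureRate"] <= 1
    # A list has at least one item.
    assert len(output["confirmedFacts"]) >= 1
    # A list item contains a required source.
    assert any(f["source"] == "SOP-12" for f in output["confirmedFacts"])
    # A field does not contain unsupported conclusion language.
    assert "root cause is confirmed" not in output["summary"].lower()

sample_output = {
    "riskLevel": "low",
    "failureRate": 0.02,
    "confirmedFacts": [{"source": "SOP-12", "observation": "Limit is 35 min"}],
    "summary": "Two observations confirmed; one gap remains open.",
}
check_output(sample_output)  # passes silently
```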
When a Structured Output contains a list of objects, tests can match items in different ways.
| Match strategy | Use it when |
| --- | --- |
| Exact order | The list order is part of the requirement. |
| Identity fields | Items have stable fields such as source, section, finding, metric, or requirement ID. |
| Semantic similarity | Items may be worded differently but should refer to the same concept. |
Use identity fields for most life sciences lists. A finding list should not fail only because Sofie returned source gaps in a different order. Good identity fields:
source
section
finding
requirementId
metricName
testName
Avoid identity fields that are vague or likely to change, such as long narrative descriptions.
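Identity-field matching amounts to comparing lists keyed on stable fields rather than on position. A minimal sketch; the helper is an assumption for illustration, not a platform API:

```python
def match_by_identity(expected, actual, keys):
    """Compare two lists of objects by identity fields, ignoring order."""
    def identity(item):
        return tuple(item[k] for k in keys)
    return {identity(i) for i in expected} == {identity(i) for i in actual}

expected = [{"source": "SOP-12", "finding": "Mixing time excursion"},
            {"source": "LIMS", "finding": "Trend within limits"}]
actual = list(reversed(expected))  # same items, different order

# Passes even though the order differs.
assert match_by_identity(expected, actual, keys=("source", "finding"))
```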
The CoDraft task fills sections and leaves placeholders.
Write downstream task instructions that name the upstream fields:
Use the confirmedFacts and sourceGaps from the evidence review task. Do not use assumptions as facts. If blockedConclusions contains any items, pause before drafting conclusion language.
This is more reliable than saying “use the previous task’s analysis.”
A Structured Output can still produce a readable result. The key is to keep critical data in fields and use narrative for explanation. Good pattern:
Fields contain facts, lists, values, statuses, and questions.
Narrative explains the result for a reviewer.
Tests validate fields first.
AI Judge or semantic checks validate the narrative only when needed.
Avoid putting the only important conclusion in narrative text if a later task must use it. Put the conclusion state in a field such as readyForConclusion, conclusionStatus, or blockedConclusions.
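Putting the conclusion state in a field makes the gate mechanically checkable. A tiny sketch with the hypothetical field names above:

```python
def may_draft_conclusion(output):
    """Gate on fields, not on narrative wording (hypothetical field names)."""
    return output.get("readyForConclusion") is True and not output.get("blockedConclusions")

assert may_draft_conclusion({"readyForConclusion": True, "blockedConclusions": []})
assert not may_draft_conclusion({"readyForConclusion": True,
                                 "blockedConclusions": ["missing SME sign-off"]})
```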
Structured Outputs are useful before creating or filling a CoDraft. Use them to prepare:
Section titles.
Draft section content.
Placeholder values.
Source notes.
Reviewer questions.
Tables that should appear in a document.
Statements that need SME confirmation.
Example:
| Field | Type | Purpose |
| --- | --- | --- |
| backgroundSection | Text | Draft text for the background section. |
| evidenceTable | Group, Allow Multiple | Rows for a document table. |
| placeholders | Group, Allow Multiple | Missing values that should remain visible in the CoDraft. |
| reviewQuestions | Text, Allow Multiple | Questions to resolve before export. |
Use Fill Template when the task should write directly into a template. Use Structured Output first when another task should review, transform, or test the values before the document is created.
When a task searches Workspace content, Structured Output helps keep source-backed findings separate from assumptions. Useful fields:
sourceUsed
sourceReference
quotedOrSummarizedFact
confidence
conflictingSource
needsReview
missingEvidence
In the task description, tell Sofie how to handle missing or conflicting Workspace information:
Return each confirmed fact with its sourceReference. If a required source is missing, add an item to sourceGaps. If two sources conflict, add an item to sourceConflicts and do not resolve it silently.
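The instruction above can be mirrored in how a downstream check buckets results. A sketch with hypothetical field names; conflicts are reported, never silently resolved:

```python
def bucket_findings(findings, required_sources):
    """Separate confirmed facts from source gaps and conflicts (hypothetical fields)."""
    confirmed = [f for f in findings if f.get("sourceReference")]
    seen = {f["sourceReference"] for f in confirmed}
    # A required source that was never cited becomes a sourceGaps item.
    source_gaps = [{"missingItem": s} for s in required_sources if s not in seen]
    # Two sources stating different facts on the same topic become a conflict item.
    by_topic, conflicts = {}, []
    for f in confirmed:
        prior = by_topic.setdefault(f["topic"], f)
        if prior is not f and prior["quotedOrSummarizedFact"] != f["quotedOrSummarizedFact"]:
            conflicts.append({"topic": f["topic"],
                              "sources": [prior["sourceReference"], f["sourceReference"]]})
    return {"confirmedFacts": confirmed, "sourceGaps": source_gaps, "sourceConflicts": conflicts}

result = bucket_findings(
    [{"sourceReference": "SOP-12", "topic": "mixing time", "quotedOrSummarizedFact": "38 min"},
     {"sourceReference": "Batch record", "topic": "mixing time", "quotedOrSummarizedFact": "35 min"}],
    required_sources=["SOP-12", "Calibration log"],
)
```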
Design a schema
Design a Structured Output schema for this Orchestration task. Include field names, field types, descriptions, required settings, which fields allow multiple values, and which fields downstream tasks should use.
Improve handoff
Review this Orchestration. Identify places where one task hands prose to another task. Recommend Structured Outputs that would make the handoff clearer and easier to test.
Make output testable
Turn this task's expected output into a Structured Output schema that can be tested field by field. Include suggested validation rules for each field and list matching keys for any repeated objects.
Review a schema
Review this Structured Output schema. Find vague field names, missing descriptions, fields that should be required, lists that need identity fields for testing, and places where the schema asks Sofie to infer unsupported conclusions.