Structured Outputs let a task return named fields instead of only a block of prose. Use them when the next task, a reviewer, a test, a CoDraft, or a CoSheet needs the output to stay consistent across runs. In an Orchestration, a text output might say the right thing in a different order every time. A Structured Output can return fields such as confirmedFacts, sourceGaps, riskLevel, recommendedAction, and reviewQuestions. That makes the result easier to pass to another agent, easier to validate, and easier to turn into an artifact.
Structured Outputs do not remove the need for review. They make Sofie’s work easier to inspect because the important parts are separated into fields instead of hidden in prose.

When to use Structured Outputs

Use Structured Output when a task output needs a stable shape. Good candidates:
  • Evidence tables.
  • Source gap lists.
  • Batch record exceptions.
  • CAPA action assessments.
  • Quality risk rows.
  • Validation protocol sections.
  • URS requirements.
  • CoSheet-ready data.
  • CoDraft template inputs.
  • Intermediate findings that another task must use.
Use Text Output when the task only needs a readable narrative, such as a short explanation, summary, or reviewer note. Use Fill Template when the task should populate a CoDraft template or another defined deliverable structure.

Why Structured Outputs help handoff

Orchestrations often depend on one task’s output becoming another task’s input. Prose is flexible, but it is hard for a downstream task to use precisely. Structured Outputs improve handoff because each field has a clear purpose:
| Handoff need | Structured Output pattern |
| --- | --- |
| A reviewer task needs confirmed facts | A confirmedFacts list with source, observation, and uncertainty fields. |
| A drafter task needs source gaps | A sourceGaps list with missing item, impact, and owner question fields. |
| A data task needs metric results | A metricResults list with metric name, value, unit, trend, and interpretation fields. |
| A final task needs decision boundaries | A stopConditions list and a readyForConclusion true/false field. |
| A CoDraft task needs sections | Fields that map to document sections or template placeholders. |
This is especially important when Allow Parallel Task Execution is enabled. Independent tasks can run at the same time, but the synthesis task should receive consistent outputs from each upstream task.
If a later task needs to quote, sort, filter, count, test, or reuse a previous task’s output, use Structured Output.
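As an illustration of the handoff advantage, here is a hypothetical structured output consumed by plain code. The field names follow the patterns above; the values and the gating logic are invented for illustration and are not Sofie's actual behavior:

```python
# Hypothetical structured output from an evidence review task.
structured = {
    "confirmedFacts": [
        {"source": "Batch Record 1042",
         "observation": "Mixing time was 42 min",
         "uncertainty": "none"},
    ],
    "sourceGaps": [],
    "riskLevel": "low",
    "readyForConclusion": True,
}

# A downstream task can quote, count, or gate on named fields directly,
# with no prose parsing.
if structured["readyForConclusion"] and not structured["sourceGaps"]:
    facts = [f["observation"] for f in structured["confirmedFacts"]]
else:
    facts = []
```

The same gate written against a prose output would depend on phrasing that can change from run to run.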

Create a Structured Output

1. Open the task. In the Orchestration editor, select the task that should return structured data.
2. Choose Structured Output. In Output Mode, select Structured Output.
3. Define the output. Click Define Output. Sofie opens Output Definition.
4. Describe the output. Add an Overall Description that explains what this task should return and how downstream tasks will use it.
5. Add fields. Click Add Field for each named piece of data the task should return.
6. Configure each field. Set Field Name, Type, Description, Required, and Allow Multiple.
7. Save. Click Save. The task now shows the number of fields in its output definition.
After you define the schema, you can use Edit Output to change it or Remove to switch back to text output.

Define field names well

Field names are the contract between tasks. Make them short, stable, and specific. Good field names:
  • confirmedFacts
  • sourceGaps
  • metricResults
  • riskLevel
  • reviewQuestions
  • readyForConclusion
  • draftSection
Weak field names:
  • stuff
  • analysis
  • data
  • notes
  • output
  • thing1
Use simple field names without spaces. Prefer camelCase so the name is readable and reusable.

Choose field types

The output definition supports these field types.
| Type | Use it for | Example |
| --- | --- | --- |
| Text | Narrative values, source notes, facts, questions, rationales, or section drafts. | rootCauseEvidence, summary, openQuestion |
| Number | Counts, measured values, scores, percentages, limits, or numeric results. | defectCount, failureRate, sampleSize |
| True/False | Binary answers or gates. | sourceConflictFound, readyForConclusion |
| Dropdown | Controlled categories where the value should come from a defined list. | riskLevel: low, medium, high |
| Group | A nested object with multiple related fields. | ownerReview with reviewer, question, and due date fields |
| Image | An image result or generated visual reference. | processDiagram |
| Chart | Chart output from structured data. | trendChart |
Use Allow Multiple when the field should return a list, such as multiple findings, sources, questions, metrics, or actions. Use Required when the field must be present for the output to be useful. Leave it optional only when the task can legitimately have no value for that field.
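The settings above can be sketched as a small data structure. This is not Sofie's internal representation, only an illustration of what each field definition carries:

```python
from dataclasses import dataclass, field

@dataclass
class FieldDef:
    """Illustrative stand-in for one field in an output definition."""
    name: str
    type: str            # "Text", "Number", "True/False", "Dropdown", "Group", "Image", "Chart"
    description: str
    required: bool = True
    allow_multiple: bool = False
    options: list = field(default_factory=list)  # Dropdown only

# A required single-value dropdown.
risk_level = FieldDef(
    name="riskLevel",
    type="Dropdown",
    description="Overall risk category for this finding.",
    options=["low", "medium", "high"],
)

# An optional repeated text field.
review_questions = FieldDef(
    name="reviewQuestions",
    type="Text",
    description="Questions the SME must answer before drafting.",
    required=False,
    allow_multiple=True,
)
```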

Write useful field descriptions

Field descriptions guide Sofie and help reviewers understand the output. Weak description:
Findings.
Better description:
Each confirmed finding from the selected sources. Include only facts directly supported by the provided evidence. Do not include assumptions or conclusions.
Good field descriptions explain:
  • What belongs in the field.
  • What does not belong in the field.
  • Which sources matter.
  • How uncertainty should be represented.
  • Whether the field is for review, drafting, testing, or handoff.

Use Groups for nested information

Use Group when one item has multiple subfields. Example sourceGap group:
| Nested field | Type | Description |
| --- | --- | --- |
| missingItem | Text | The missing source, page, record, data field, or decision. |
| whyItMatters | Text | Why the missing item matters for the workflow. |
| ownerQuestion | Text | The question to ask the responsible SME or reviewer. |
| blocksConclusion | True/False | Whether the gap should stop a conclusion or final draft. |
Then set Allow Multiple on the sourceGaps field so Sofie can return a list of source gaps. This pattern is better than one long sourceGaps paragraph because tests can validate each item and downstream tasks can use the list directly.
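A minimal sketch of why the group-plus-list pattern is testable. The data is invented; the check mirrors the nested fields above:

```python
# Invented sourceGaps list following the nested fields above.
source_gaps = [
    {"missingItem": "Operator training record",
     "whyItMatters": "Qualification cannot be confirmed without it.",
     "ownerQuestion": "Can QA locate the training record?",
     "blocksConclusion": True},
    {"missingItem": "Calibration certificate for the batch scale",
     "whyItMatters": "Weight readings may be unverified.",
     "ownerQuestion": "Is the certificate filed elsewhere?",
     "blocksConclusion": False},
]

# Each item can be validated on its own, which one long paragraph cannot.
required_keys = {"missingItem", "whyItMatters", "ownerQuestion", "blocksConclusion"}
for gap in source_gaps:
    assert required_keys <= gap.keys(), f"incomplete gap: {gap}"

blocking = [g for g in source_gaps if g["blocksConclusion"]]
```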

Use Dropdown for controlled categories

Use Dropdown when Sofie should choose from known values instead of inventing labels. Good dropdowns:
| Field | Options |
| --- | --- |
| riskLevel | low, medium, high |
| findingStatus | confirmed, potential, unsupported, needs review |
| sourceConfidence | high, medium, low |
| reviewOutcome | continue, pause, ask for input |
| artifactDestination | run output only, CoDraft, CoSheet |
Dropdowns are useful for testing because Exact Match can validate the selected option. Avoid dropdowns when the values are not stable. If reviewers often need a new category, use Text plus a clear description instead.
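An Exact Match check against a controlled list is trivial to express. This sketch uses the riskLevel options above; it illustrates the idea rather than any specific test runner:

```python
# Controlled options from the riskLevel dropdown above.
RISK_LEVELS = {"low", "medium", "high"}

def risk_level_is_valid(value):
    # Exact Match style check: the value must be one of the controlled options.
    return value in RISK_LEVELS
```

A value Sofie invents, such as "moderate", fails immediately instead of slipping through as plausible prose.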

Use lists for repeated items

Set Allow Multiple when the task should return more than one item. Use lists for:
  • Findings.
  • Exceptions.
  • Requirements.
  • Risks.
  • CAPA actions.
  • Metrics.
  • Sources.
  • Review questions.
  • Draft sections.
For a list of simple values, such as source names, use a Text field with Allow Multiple. For a list of complex values, use a Group field with Allow Multiple and define nested fields. Example batchRecordExceptions list:
| Nested field | Type |
| --- | --- |
| section | Text |
| observation | Text |
| possibleImpact | Text |
| sourceReference | Text |
| reviewQuestion | Text |
| dispositionAllowed | True/False |

Design for testing

Structured Outputs are one of the best ways to make Orchestration tests useful. Text tests often fail because phrasing changes. Structured tests can check fields:
  • A required field exists.
  • A dropdown value matches a controlled option.
  • A number is within a range.
  • A list has at least one item.
  • A list item contains a required source.
  • A field does not contain unsupported conclusion language.
  • An array matches by identity fields instead of row order.
For detailed testing workflow, see Test Orchestrations.
Build the Structured Output before you create the test. The schema gives the test editor fields to validate instead of one large expected answer.
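The checks listed above reduce to plain field-level assertions. The field names and the range here are illustrative, not a fixed test vocabulary:

```python
def check_output(output):
    """Return a list of problems; an empty list means the output passed."""
    problems = []

    # A required field exists and a dropdown value matches a controlled option.
    if output.get("riskLevel") not in {"low", "medium", "high"}:
        problems.append("riskLevel missing or not a controlled option")

    # A number is within a range.
    defect_count = output.get("defectCount")
    if not isinstance(defect_count, (int, float)) or defect_count < 0:
        problems.append("defectCount missing or negative")

    # A list has at least one item.
    if not output.get("confirmedFacts"):
        problems.append("confirmedFacts is empty")

    return problems
```

Because each check names one field, a failure points at the exact part of the output that broke, instead of a mismatch against one large expected answer.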

Match list items during tests

When a Structured Output contains a list of objects, tests can match items in different ways.
| Match strategy | Use it when |
| --- | --- |
| Exact order | The list order is part of the requirement. |
| Identity fields | Items have stable fields such as source, section, finding, metric, or requirement ID. |
| Semantic similarity | Items may be worded differently but should refer to the same concept. |
Use identity fields for most life sciences lists. A finding list should not fail only because Sofie returned source gaps in a different order. Good identity fields:
  • source
  • section
  • finding
  • requirementId
  • metricName
  • testName
Avoid identity fields that are vague or likely to change, such as long narrative descriptions.
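Identity-field matching can be sketched in a few lines. The identity fields chosen here are illustrative; pick stable ones for each list:

```python
def unmatched_items(expected, actual, identity=("source", "section")):
    """Return items in expected that have no identity match in actual,
    ignoring order. Values outside the identity fields are not compared."""
    def key(item):
        return tuple(item.get(k) for k in identity)

    actual_keys = {key(a) for a in actual}
    return [e for e in expected if key(e) not in actual_keys]
```

A reordered list produces no unmatched items, so a test built this way does not fail just because Sofie returned the same findings in a different order.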

Structure handoffs between agents

When one task hands work to another, define the upstream output around what the downstream task needs. Example handoff:
| Task | Structured Output | Downstream use |
| --- | --- | --- |
| Evidence reviewer | confirmedFacts, sourceGaps, sourceConflicts | The analysis task uses only confirmed facts and gaps. |
| Data analyst | metricResults, outliers, dataLimitations | The reviewer checks whether data supports the conclusion. |
| Reviewer | reviewedFindings, blockedConclusions, reviewQuestions | The drafter writes only from reviewed findings. |
| Drafter | draftSections, placeholders, sourceNotes | The CoDraft task fills sections and leaves placeholders. |
Write downstream task instructions that name the upstream fields:
Use the confirmedFacts and sourceGaps from the evidence review task. Do not use assumptions as facts. If blockedConclusions contains any items, pause before drafting conclusion language.
This is more reliable than saying “use the previous task’s analysis.”

Keep narrative and structure separate

A Structured Output can still produce a readable result. The key is to keep critical data in fields and use narrative for explanation. Good pattern:
  • Fields contain facts, lists, values, statuses, and questions.
  • Narrative explains the result for a reviewer.
  • Tests validate fields first.
  • AI Judge or semantic checks validate the narrative only when needed.
Avoid putting the only important conclusion in narrative text if a later task must use it. Put the conclusion state in a field such as readyForConclusion, conclusionStatus, or blockedConclusions.

Use Structured Outputs with CoDraft

Structured Outputs are useful before creating or filling a CoDraft. Use them to prepare:
  • Section titles.
  • Draft section content.
  • Placeholder values.
  • Source notes.
  • Reviewer questions.
  • Tables that should appear in a document.
  • Statements that need SME confirmation.
Example:
| Field | Type | Purpose |
| --- | --- | --- |
| backgroundSection | Text | Draft text for the background section. |
| evidenceTable | Group, Allow Multiple | Rows for a document table. |
| placeholders | Group, Allow Multiple | Missing values that should remain visible in the CoDraft. |
| reviewQuestions | Text, Allow Multiple | Questions to resolve before export. |
Use Fill Template when the task should write directly into a template. Use Structured Output first when another task should review, transform, or test the values before the document is created.

Use Structured Outputs with CoSheet

Use Structured Outputs when the result should become a table, tracker, or analysis input. Good CoSheet-oriented fields:
  • rows
  • columns
  • metricName
  • value
  • unit
  • calculationNote
  • trend
  • outlierFlag
  • reviewQuestion
Example metricResults group:
| Nested field | Type |
| --- | --- |
| metricName | Text |
| value | Number |
| unit | Text |
| trend | Dropdown |
| outlierFlag | True/False |
| interpretation | Text |
| sourceReference | Text |
This gives Sofie a stable table-like result to reuse in a CoSheet, a CoDraft, or a later review task.
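Because the schema is table-shaped, converting it to rows is mechanical. A sketch with invented values, using Python's standard csv module:

```python
import csv
import io

# Invented metricResults list following the nested fields above.
metric_results = [
    {"metricName": "defectRate", "value": 1.8, "unit": "%", "trend": "down",
     "outlierFlag": False, "interpretation": "Within limit",
     "sourceReference": "QC log 2024-03"},
]

# A stable schema converts directly to table rows: field names become
# the header, each list item becomes one row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(metric_results[0]))
writer.writeheader()
writer.writerows(metric_results)
table_text = buf.getvalue()
```

A prose summary of the same metrics would need parsing before it could become a table at all.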

Use Structured Outputs with Workspace context

When a task searches Workspace content, Structured Output helps keep source-backed findings separate from assumptions. Useful fields:
  • sourceUsed
  • sourceReference
  • quotedOrSummarizedFact
  • confidence
  • conflictingSource
  • needsReview
  • missingEvidence
In the task description, tell Sofie how to handle missing or conflicting Workspace information:
Return each confirmed fact with its sourceReference. If a required source is missing, add an item to sourceGaps. If two sources conflict, add an item to sourceConflicts and do not resolve it silently.
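The routing rule in that instruction can be pictured as a small sketch. The function and its behavior are invented to illustrate the separation, using field names from the list above:

```python
def record_fact(output, fact, source_reference):
    """Illustrative routing sketch: a fact without a source becomes a
    sourceGaps item instead of a confirmedFacts item."""
    if source_reference is None:
        output["sourceGaps"].append({"missingItem": fact})
    else:
        output["confirmedFacts"].append(
            {"quotedOrSummarizedFact": fact,
             "sourceReference": source_reference})
```

The point is that unsupported statements land in a field reviewers will see, rather than being silently promoted to facts.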

Avoid over-structuring

Structured Outputs are useful when the fields will be reused, reviewed, or tested. They are not necessary for every task. Avoid over-structuring when:
  • The task only needs a short answer.
  • The output will not be used by another task.
  • You are still exploring the workflow in chat.
  • The schema would force Sofie to guess values that may not exist.
  • Reviewers need a flexible narrative more than fields.
If a task has more than about 12 top-level fields, check whether it should be split into smaller tasks or use nested groups.

Life sciences examples

Deviation evidence review

Use this schema when the task should collect evidence before analysis.
| Field | Type | Multiple | Purpose |
| --- | --- | --- | --- |
| confirmedFacts | Group | Yes | Facts directly supported by sources. |
| sourceGaps | Group | Yes | Missing or unclear evidence. |
| sourceConflicts | Group | Yes | Conflicting source statements. |
| readyForRootCauseReview | True/False | No | Whether the workflow can move to root cause review. |
| reviewQuestions | Text | Yes | Questions for the SME or investigator. |
Nested confirmedFacts fields:
  • fact
  • sourceReference
  • date
  • confidence
  • uncertainty

CAPA effectiveness review

Use this schema when the task should compare criteria with observed evidence.
| Field | Type | Multiple | Purpose |
| --- | --- | --- | --- |
| effectivenessCriteria | Group | Yes | Criteria from the CAPA plan. |
| metricResults | Group | Yes | Observed metric results. |
| evidenceGaps | Group | Yes | Missing data or unclear evidence. |
| effectivenessConclusionStatus | Dropdown | No | ready, not ready, or needs review. |
| draftConclusionGuardrails | Text | Yes | Language limits for the drafter. |
Use a human review step before final conclusion language.

Validation protocol drafting

Use this schema when the task should prepare sections before filling a CoDraft.
| Field | Type | Multiple | Purpose |
| --- | --- | --- | --- |
| protocolSections | Group | Yes | Draftable protocol sections. |
| acceptanceCriteriaPlaceholders | Group | Yes | Criteria that need SME confirmation. |
| sourceMap | Group | Yes | Source documents mapped to protocol sections. |
| unresolvedQuestions | Text | Yes | Questions that block finalization. |
This keeps protocol drafting from inventing missing acceptance criteria.

Batch record exception tracker

Use this schema when the task should create a reviewable exception list.
| Field | Type | Multiple | Purpose |
| --- | --- | --- | --- |
| exceptions | Group | Yes | One item per exception. |
| missingPages | Text | Yes | Pages or sections not found. |
| reviewNeeded | True/False | No | Whether human review is required before disposition language. |
| summaryForReviewer | Text | No | Short review summary. |
Nested exceptions fields:
  • section
  • observation
  • possibleImpact
  • sourceReference
  • ownerQuestion
  • dispositionAllowed

Common mistakes

| Mistake | Why it hurts | Better pattern |
| --- | --- | --- |
| One field named analysis | Downstream tasks still have to parse prose. | Use separate fields for facts, gaps, conflicts, and questions. |
| Every field is optional | Missing outputs are hard to catch. | Mark fields required when the workflow depends on them. |
| Dropdown options are vague | Tests cannot tell what the category means. | Use clear, controlled options such as confirmed, unsupported, and needs review. |
| Long lists use exact order in tests | Tests fail when order changes. | Match by identity fields. |
| Source and conclusion live in the same field | Reviewers cannot trace reasoning. | Separate source facts from interpretation. |
| A final task gets loose prose from upstream tasks | Handoff becomes unreliable. | Pass named fields into the final task. |
| The schema encodes decisions Sofie should not make | The task may fill a value too confidently. | Add review fields and stop conditions. |

Build checklist

Before you rely on a Structured Output:
  • The Overall Description explains the output’s purpose.
  • Each Field Name is stable and specific.
  • Each field has a useful Description.
  • Required fields are marked Required.
  • Repeatable values use Allow Multiple.
  • Related fields are grouped under Group.
  • Dropdown options are controlled and meaningful.
  • Downstream tasks refer to field names directly.
  • Tests validate important fields and list behavior.
  • Human review happens before final conclusions or shared artifacts when needed.

Useful prompts

Design a Structured Output schema for this Orchestration task. Include field names, field types, descriptions, required settings, which fields allow multiple values, and which fields downstream tasks should use.
Review this Orchestration. Identify places where one task hands prose to another task. Recommend Structured Outputs that would make the handoff clearer and easier to test.
Turn this task's expected output into a Structured Output schema that can be tested field by field. Include suggested validation rules for each field and list matching keys for any repeated objects.
Review this Structured Output schema. Find vague field names, missing descriptions, fields that should be required, lists that need identity fields for testing, and places where the schema asks Sofie to infer unsupported conclusions.