
The Edit Agent
A small agent that lets users edit release metadata by describing what they want changed, no forms required.
ONCE Engineering
Editing distributed release metadata is tedious. You open a modal, scan a grid of form fields, find the one you need, type the new value, repeat. For a simple change like "make all tracks explicit" you're clicking through every track row one by one.
We built a small agent that sits inside the Edit & Redistribute modal and lets you skip the form entirely. Type what you want changed in plain language, and it maps your request to structured field edits.
How It Works
The Edit Agent lives in a bar at the top of the Edit & Redistribute modal. It has an input field, a small animated face that tracks your cursor and reacts to state changes, and a status strip that shows what it's doing.
You type something like:
"Change the genre to R&B and make all tracks explicit"
The agent receives your request along with the current release metadata and track list as context. It calls the OpenAI Responses API with a tightly constrained system prompt that maps natural language to structured edits. The response comes back as strict JSON:
{
  "assistant_message": "Updated genre to R&B and marked all tracks explicit.",
  "needs_clarification": false,
  "confidence": "high",
  "edits": [
    {
      "target": "release",
      "field": "genre",
      "value": "R&B"
    },
    {
      "target": "all_tracks",
      "field": "explicit_flag",
      "value": true
    }
  ]
}
The modal applies each edit to the form state. The user sees their fields update instantly and can review before submitting.
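As a sketch of how the modal might apply those edits to form state, using hypothetical TypeScript types that mirror the JSON above (the type and function names are ours, not the production code):

```typescript
// Hypothetical types mirroring the agent's JSON response shape.
type EditTarget = "release" | "track" | "all_tracks";

interface AgentEdit {
  target: EditTarget;
  field: string;
  value: string | boolean;
  track_number?: number; // set when target is "track"
}

interface FormState {
  release: Record<string, string | boolean>;
  tracks: Record<string, string | boolean>[];
}

// Apply each edit to a copy of the form state. Nothing is persisted;
// the user reviews the updated fields before submitting.
// (Sketch handles track_number only; track_title lookup is omitted.)
function applyEdits(form: FormState, edits: AgentEdit[]): FormState {
  const next: FormState = {
    release: { ...form.release },
    tracks: form.tracks.map((t) => ({ ...t })),
  };
  for (const edit of edits) {
    if (edit.target === "release") {
      next.release[edit.field] = edit.value;
    } else if (edit.target === "all_tracks") {
      next.tracks.forEach((t) => (t[edit.field] = edit.value));
    } else if (edit.target === "track" && edit.track_number) {
      const track = next.tracks[edit.track_number - 1];
      if (track) track[edit.field] = edit.value;
    }
  }
  return next;
}
```

Copying the state before applying edits is what makes review-then-undo cheap: the original form values are untouched until the user accepts.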
Three Edit Targets
Every edit the agent produces maps to one of three targets:
- release: Release-level fields like title, artist, genre, date, label, parental advisory, UPC
- track: A single track, identified by track_number or track_title
- all_tracks: Every track at once (for bulk changes like language or explicit flag)
The agent figures out which target you mean from context. "Change the artist on track 2" becomes a track edit with track_number: 2. "Make everything explicit" becomes an all_tracks edit. "Update the release date" becomes a release edit.
When the agent can't resolve a track reference, it asks for clarification instead of guessing.
The Clarification Loop
Not every request is unambiguous. If you say "change the title" but the release has 4 tracks, the agent doesn't know if you mean the release title or a specific track.
Instead of guessing, the agent sets needs_clarification: true and returns a question:
{
  "needs_clarification": true,
  "clarification_question": "Do you want to change the release title, or a specific track title?",
  "edits": []
}
The UI shows the question in the status bar and waits for a follow-up reply. The follow-up gets sent with full clarification context (the original request, the question asked, any prior replies) so the agent can resolve the ambiguity in one round without losing track of the original intent.
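A minimal sketch of how that follow-up context might be assembled before the next call (the ClarificationContext shape and function are our illustration, not the shipped code):

```typescript
// Hypothetical context carried across the clarification round.
interface ClarificationContext {
  originalRequest: string;
  questionAsked: string;
  priorReplies: string[];
}

// Fold the whole exchange into the follow-up input so the model can
// resolve the ambiguity without losing the original intent.
function buildFollowUpInput(ctx: ClarificationContext, reply: string): string {
  return [
    `Original request: ${ctx.originalRequest}`,
    `You asked: ${ctx.questionAsked}`,
    ...ctx.priorReplies.map((r, i) => `Earlier reply ${i + 1}: ${r}`),
    `User reply: ${reply}`,
  ].join("\n");
}
```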
Allowed Fields and Sanitization
The agent can only write to a fixed set of fields. Release fields: title, primary_artist_name, label, genre, release_date, parental_advisory, upc. Track fields: title, primary_artist_name, language, genre, explicit_flag, isrc.
Every value goes through field-specific sanitization before reaching the form:
- release_date: Parsed and normalized to YYYY-MM-DD
- parental_advisory: Mapped to "Explicit", "NotExplicit", or ""
- explicit_flag: Coerced to boolean from various inputs ("yes", "true", "explicit" → true)
- String fields: Trimmed to 220 characters max
If the model returns an unknown field, it gets silently dropped. If a value fails sanitization, the edit is skipped and a warning is added to the response. The agent can never write to a field that doesn't exist in the schema.
Slotting Small Agents into Processes
The Edit Agent is a good example of how we slot small, focused agents into existing UI flows. The Edit & Redistribute modal already worked as a form. Users could manually edit every field. The agent doesn't replace the form — it sits alongside it as a faster input method.
This pattern recurs across the ONCE pipeline:
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Edit Agent    │     │   Validation    │     │  Distribution   │
│  (mini agent    │────▶│     Agent       │────▶│    Backend      │
│   in modal)     │     │  (at submit)    │     │    (syncs)      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                        ┌─────────────────┐
                        │   Genre Agent   │
                        │  (maps genres)  │
                        └─────────────────┘
Each agent is small:
- The Edit Agent has a ~60-line system prompt, uses gpt-5-mini with low reasoning effort, and caps output at 1,500 tokens. It runs in under a second.
- The Validation Agent has a much larger prompt with Apple Music style guide excerpts and full distribution requirements. It runs at submission time, not during editing.
- The Genre Agent resolves freeform genres to the distributor's taxonomy. It only fires when no exact match exists.
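Assuming the Edit Agent calls the OpenAI Responses API through the openai Node SDK (an assumption on our part; the function name is ours), its request might be parameterized like this, with the model name, effort level, and token cap taken from the numbers above:

```typescript
// Build the request parameters for the Edit Agent's Responses API call.
// Field names follow the openai Node SDK's responses.create shape.
function buildEditAgentRequest(
  systemPrompt: string,   // the ~60-line constrained prompt
  userRequest: string,    // what the user typed in the bar
  releaseContext: string  // current release metadata and track list
) {
  return {
    model: "gpt-5-mini",
    reasoning: { effort: "low" as const }, // keeps latency under a second
    max_output_tokens: 1500,
    instructions: systemPrompt,
    input: `${releaseContext}\n\nUser request: ${userRequest}`,
  };
}
```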
None of these agents know about each other. They don't share state or coordinate. They each handle one step in the pipeline and produce clean, structured output for the next step.
Why Mini Agents Work
The Edit Agent is intentionally limited. It doesn't validate against Apple Music's style guide. It doesn't check contributor credits. It doesn't enforce songwriter full-name requirements. Those are the Validation Agent's job.
What it does do:
- Runs fast (low reasoning effort, small prompt, gpt-5-mini)
- Maps natural language to exact field writes with no ambiguity
- Asks for clarification instead of guessing
- Rejects invalid fields and values at the sanitization layer
- Applies changes to local form state, not directly to the database
That last point is important. The Edit Agent changes what you see in the form. Nothing is persisted until you click "Save & Redistribute." This makes the agent safe to experiment with: you can ask it to change things, review the result, undo by reverting the form, and try again.
The Face
The agent bar includes a small animated face built with SVG and Framer Motion. The eyes track your mouse cursor when idle and follow your text caret when you're typing. It blinks naturally with occasional double-blinks for personality. During processing, the face dissolves into an animated gradient cloud. On success, it smiles. On error, it frowns.
This is a small detail, but it makes the agent feel responsive and alive. You're typing a request to something that's watching and reacting, not submitting a form into a void. We're always experimenting with details like this to see how our users respond to the agent's personality and behavior. If you hate it, let us know!
Distribution Awareness
The Edit Agent carries a small set of distribution rules in its system prompt. It knows that primary_artist_name should be a single name, that dates need YYYY-MM-DD format, and that genre values should match the distributor's catalog when available. These are lightweight guardrails that keep edits compliant without adding latency or complexity.
The full distribution spec lives in the Validation Agent, which runs at submission time with comprehensive enforcement. The Edit Agent just needs to avoid producing edits that are obviously wrong.
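To illustrate how lightweight these guardrails are, here's a hypothetical check in the spirit of those rules (the real rules live in the system prompt, not application code; the function name and heuristics are ours):

```typescript
// Sketch of the kinds of checks the prompt rules imply: ISO dates,
// a single primary artist name. Everything heavier waits for the
// Validation Agent at submission time.
function looksCompliant(field: string, value: string): boolean {
  if (field === "release_date") {
    return /^\d{4}-\d{2}-\d{2}$/.test(value); // YYYY-MM-DD only
  }
  if (field === "primary_artist_name") {
    // Crude single-name heuristic: reject obvious multi-artist strings.
    return !/,|&| and /i.test(value);
  }
  return true; // other fields pass through to sanitization
}
```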
What's Next
We're exploring expanding the Edit Agent's field coverage to include contributor credits and songwriter data. These are more complex than simple string fields (they involve arrays of objects with roles and share percentages), so the sanitization and targeting logic needs to get smarter. We're also looking at surfacing recent edits as quick-action chips so users can apply common changes with a single tap.
The Edit Agent lives inside every Edit & Redistribute modal. Say what you want changed and it handles the rest.