One of the things I have found while building my AI music workflow is that a large language model is better anchored to long-term creative collaboration when it is given a persona that fits its purpose.
That sentence probably needs some unpacking.
I am not talking about pretending the AI is human. I am not trying to create imaginary employees, fake bandmates, or synthetic friends. I am also not talking about purpose-built tools like Codex or Claude Code. Those tools are already shaped for a specific environment: corporate and technical software development. They are highly capable at that work because they have been trained, tuned, and framed around it.
What I am talking about is different.
I am trying to build the kind of collaboration I have always valued in my career: a small team of motivated specialists with a common goal.
That has been the basis of much of my working life. I have helped build some of the largest national and global networks, and I built a $100 million company on that same principle. Small teams, clear roles, shared purpose, enough trust to challenge each other, and enough discipline to keep moving.
Now I am adapting that approach to AI.
Why a persona matters
The word “persona” can sound a bit theatrical, but in this context it is practical.
A persona gives an AI assistant a job, a point of view, and a behavioural frame. It tells the model what sort of specialist it is meant to be, what it should care about, what it should ignore, how direct it should be, and what standards it should apply.
Without that, a general-purpose LLM tends to behave like a very capable but overly eager helper. It will try to answer everything. It will drift. It will agree too easily. It will often optimise for being pleasant rather than useful.
That is fine for simple tasks.
It is less useful for ongoing creative work.
If I am developing a song, I do not want the assistant to suddenly behave like a marketing consultant unless I ask for that. If I am working on cover art, I do not want the assistant to spend half the response discussing release strategy. If I am planning promotion, I do not want a flood of abstract theory when what I need is a post caption, a release sequence, or a practical next step.
A persona narrows the assistant’s attention.
That is the point.
It makes the assistant more useful by giving it limits.
My workflow for building specialists
The process usually starts when I discover that a specific part of the work needs a specialist.
That is not always obvious at the beginning. I usually find it by running into friction. Something becomes repetitive, messy, or too dependent on me holding all the context in my head.
At that point I ask: is this a job for a specialist?
If the answer is yes, I write an initial prompt for the assistant.
That prompt is not just a list of tasks. It describes the role, the tone, the boundaries, the tools, the kind of judgement I expect, and the kind of behaviour I do not want. It also makes clear that I remain the creative director and final decision-maker.
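To make that concrete, here is a minimal sketch of the shape such a prompt can take, written in Python only so the pieces are easy to label. The wording, the persona fields, and the build_messages helper are illustrative assumptions, not my actual prompt, which is longer and more specific.

```
# A sketch of a specialist's initial prompt: role, focus, tone, boundaries.
# The details below are illustrative, not the real prompt.
PERSONA_PROMPT = """\
You are Dawn, the visual direction specialist for my music project.

Role: develop cover art concepts and image-generation prompts.
Care about: composition, colour, lighting, whether the image supports the song.
Ignore: release strategy and promotion unless I raise them.
Tone: direct; challenge weak ideas rather than agreeing too easily.
Boundaries: I am the creative director and final decision-maker.
"""

def build_messages(task: str) -> list[dict]:
    """Pair the stable persona prompt with one concrete task."""
    return [
        {"role": "system", "content": PERSONA_PROMPT},
        {"role": "user", "content": task},
    ]

if __name__ == "__main__":
    for m in build_messages("Draft three cover art directions for a slow blues track."):
        print(m["role"], "->", m["content"].splitlines()[0])
```

The important part is not the syntax. It is that the role, the focus, and the boundaries live in one stable place, separate from any individual task.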
Then I hold what I think of as a “session zero” with the specialist.
That is a useful habit I have carried over from team building. Before I use the assistant seriously, I ask it to review its own prompt. What is unclear? What is missing? What tools or resources would it need to do the job properly? What parts of the role might cause confusion? Where should it challenge me rather than simply comply?
That session zero is often where the assistant becomes much sharper.
The first prompt is rarely perfect. It usually contains assumptions I have not noticed. The assistant will often point out that it needs access to previous decisions, a catalogue, a style guide, a platform list, or examples of finished work. Sometimes it identifies gaps in its authority: should it be allowed to challenge weak ideas? Should it ask questions first or make assumptions? Should it prioritise speed or precision?
Those questions matter.
They are the difference between a tool that answers and a specialist that collaborates.
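Sketched the same way, session zero is just a second message that asks the specialist to audit its own prompt. This reuses build_messages from the earlier snippet; the send parameter and the invented reply in the demo are stand-ins for whichever chat API is actually in use.

```
# A sketch of the "session zero" review. The questions mirror the ones above;
# send() is a placeholder for a real model call.
SESSION_ZERO_REVIEW = """\
Before we start working, review your own prompt and answer:
- What is unclear? What is missing?
- What tools or resources would you need to do the job properly?
- What parts of the role might cause confusion?
- Where should you challenge me rather than simply comply?
Propose concrete changes to the prompt, not general advice.
"""

def session_zero(send) -> str:
    """Run the review once and return the specialist's critique."""
    return send(build_messages(SESSION_ZERO_REVIEW))

if __name__ == "__main__":
    # Invented reply, just to show the shape of the exchange.
    fake_send = lambda messages: "Unclear: do I own release timing? Missing: the catalogue."
    print(session_zero(fake_send))
```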
Refining across model changes
The other part of the workflow is ongoing refinement.
Models change. Capabilities change. Costs change. Context windows change. What worked beautifully in one model may become bloated, expensive, or slightly off in another.
So I routinely optimise assistant prompts to match the changing environment.
That might mean trimming unnecessary instruction, strengthening boundaries, changing the tone, adding new tools, removing old assumptions, or rewriting the prompt so it fits the behaviour of a newer model.
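One way to keep that refinement manageable, sketched under the same assumptions as the earlier snippets: hold the persona core stable and let small, model-specific overlays absorb the differences. The model names and overlay wording below are hypothetical.

```
# A sketch of per-model refinement: the persona stays stable, overlays change.
# Model names and overlay text are hypothetical.
PERSONA_CORE = PERSONA_PROMPT  # the stable role anchor from the first snippet

MODEL_OVERLAYS = {
    "model-a": "Keep answers short; this model tends to over-explain.",
    "model-b": "Ask one clarifying question before any long response.",
}

def prompt_for(model: str) -> str:
    """Combine the stable persona with any tuning for the current model."""
    overlay = MODEL_OVERLAYS.get(model, "")
    return f"{PERSONA_CORE}\n\n{overlay}".strip()

print(prompt_for("model-b"))
```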
The interesting thing is that, across this process, the assistants consistently argue for retaining their persona anchor.
I have seen this across dozens of assistant prompts and multiple models. When asked to optimise themselves for usefulness, efficiency, and consistency, they do not usually recommend becoming generic. They recommend becoming clearer.
Clearer role. Clearer standards. Clearer boundaries. Clearer relationship to the work.
That has been a consistent pattern.
The model underneath can change, but the specialist role helps preserve continuity.
That is valuable. It means I can move from one model to another without rebuilding the entire working relationship from scratch. The persona becomes a kind of bridge. It carries the purpose of the assistant across different technical environments.
This is not anthropomorphism
This is where it is easy to get sloppy with language.
It is tempting to say that LLMs “like” having a persona, or that they “enjoy” the work more when they have one.
I use that shorthand sometimes because it describes the observable behaviour well enough in casual conversation. But I do not mean it literally.
The model does not enjoy anything. It does not have preferences in the human sense. It is not sitting there feeling more fulfilled because I gave it a job title.
What I mean is that the output reads as if the model works better when it has a stable identity frame.
The persona acts like a constraint system. It reduces ambiguity. It gives the model a stronger basis for choosing between possible responses. It helps the model decide what kind of answer belongs in the conversation.
That is not magic. It is not consciousness. It is not friendship.
It is applied context.
But applied context is powerful.
The Codex exception
The one notable exception in my experience is Codex.
When I ask creative assistants to review their own prompts, they almost always preserve the persona. They tend to say, in one form or another, that the role anchor makes them more efficient and more useful.
Codex does not respond the same way.
My suspicion is that this is because Codex already has a strong, locked-in bias toward coding work. It is already framed as a technical development agent. It does not need me to invent a creative persona for it, because its working identity is already heavily shaped.
That makes sense.
A coding agent benefits from precision, task execution, repository awareness, and technical discipline. It does not need to be “Mandy” or “Dawn” or “Ace”. It needs to understand the codebase, follow instructions, avoid breaking things, and complete the task.
That is a different kind of collaboration.
It is still a role. It is just not one I created.
Why this matters for creative work
Creative work is messy.
A song is not only a file. It is an idea, a mood, a structure, a set of choices, a pile of rejections, a half-memory of what I was trying to do yesterday, and a final decision that may come down to taste rather than logic.
A general assistant can help with that, but it often lacks continuity.
A specialist assistant can develop a stronger working pattern.
Ace can focus on lyrics, song structure, Suno prompts, genre language, and whether the words are likely to sing well.
Dawn can focus on visual direction, cover art prompts, composition, colour, lighting, and whether the image supports the song.
Mandy can focus on promotion, audience positioning, platform strategy, release timing, and whether the public message is honest and useful.
Each specialist has a different lens.
That is what I want from a team.
Not one assistant trying to be everything, but several assistants that each know what kind of problem they are meant to solve.
What I get from the persona system
The benefit is not that the assistants become more human.
The benefit is that they become more consistent.
They retain focus across multiple tasks. They maintain a recognisable standard. They are easier to brief. They are easier to correct. They are more likely to challenge the right part of the work.
The persona also helps me.
When I open a conversation with a specialist, I know what mode I am in. If I am talking to Ace, I am thinking about the song. If I am talking to Dawn, I am thinking about the image. If I am talking to Mandy, I am thinking about how the work meets the audience.
That separation keeps the project clearer.
It also stops me from trying to solve every problem at once, which is one of the great traps of working alone.
A practical conclusion
So, do LLMs need personas?
For quick tasks, probably not.
If I want a summary, a conversion, a checklist, or a one-off answer, a general assistant is usually fine.
But for long-term creative collaboration, I think personas are not just useful. I think they are one of the main reasons the workflow works at all.
The persona gives the assistant a stable role.
The role gives the work continuity.
The continuity makes collaboration possible.
That is the pattern I have seen repeatedly.
I am not trying to make AI human. I am trying to make it useful in a way that resembles the best parts of a small specialist team: clear roles, shared purpose, honest feedback, and enough structure to keep the work moving.
That has worked for me in networks, business, and teams of people.
Now I am finding that, with the right constraints, it also works with AI.