Role Prompting: Giving AI a Persona

"You're an expert X.""Act as a Y"Role prompting is one of the most widely repeated tips in AI circles. The underlying idea is that if you assign the model a job title or area of expertise, it will respond more like a person who holds that role, making it more relevant, more precise, more expert-sounding.
The research on whether that actually holds up is more nuanced than the advice suggests. And the nuance is important if you want to try this out yourself.
What is role prompting?
Role prompting means giving an AI model a character, job title, or area of expertise before asking it to complete a task. A basic example:
> You are an experienced HR manager. Review the following job description and suggest improvements.

The model has not changed. Its training data and knowledge base are no different. But the assigned persona shapes how it interprets the request and how it frames its response. That framing effect is real, but also limited in ways worth understanding.
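In practice, with a chat-style model interface, the persona typically goes in a system message ahead of the task. A minimal sketch (the `build_messages` helper is illustrative, not any specific vendor's API):

```python
def build_messages(persona: str, task: str) -> list[dict]:
    """Compose a chat request that assigns a persona before the task.

    The persona goes in the system message and the task stays in the
    user message. The model itself is unchanged; only the framing is.
    """
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    persona="You are an experienced HR manager.",
    task="Review the following job description and suggest improvements.",
)
```

The same structure works with any persona; swapping the system message is the entire technique.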
Does assigning an AI persona actually improve outputs?
The common assumption behind role prompting is that an expert label produces expert results. A 2023 study tested this directly, evaluating persona prompting across four major language model families. The finding was that personas generally had no effect on performance, or a small negative one, compared to prompting with no persona at all.
A 2025 report from the Wharton Generative AI Lab reached a similar conclusion for factual tasks. Assigning a model the label "physics expert" before a physics question produced no consistent improvement in accuracy across tested models (with the notable exception of Gemini 2.0 Flash, which did show measurable impact from expert personas). The report's title makes the point plainly: Playing Pretend: Expert Personas Don't Improve Factual Accuracy.
The reason is fairly straightforward. A persona does not add knowledge that the model does not already hold. If the training data does not contain reliable information on a topic, attaching an expert label to the prompt will not change the answer.
A persona does not add knowledge the model does not have. The expertise is either in the training data or it is not.
A 2024 paper described the persona as a "double-edged sword", and the edge cuts sharper than the framing suggests. Role-playing prompts do not just fail to improve reasoning; in tests on Llama 3, persona-based prompts produced worse performance than neutral prompts on seven out of twelve reasoning datasets. The risk for logic-heavy tasks is not that the persona does nothing, but that it actively gets in the way.
The effects are also largely unpredictable. Some personas occasionally produce gains, but the pattern does not hold reliably enough to plan around. Automated strategies designed to identify the best persona for a given question have been tested and found to perform no better than random selection.
It is not just the "expert" label that matters, either; a persona's gender, domain, and specific framing can all shift outputs in ways that are not intuitive or consistent.
Where role prompting does help
The research is not uniformly negative. The effects are task-dependent, and that is where the practical value lies.
For writing, communication, and tone-sensitive tasks, role prompting shows more consistent benefits. When you assign the model the perspective of "a plain-language editor reviewing a policy document", you are not asking it to access hidden knowledge. You are narrowing the range of plausible responses toward a particular style and set of priorities.
Compare the same request with and without a persona:
> Give feedback on this email.

> You are a direct manager reviewing an email before it goes to a client. Give feedback on tone and clarity, and flag anything that could be misread.

The second prompt will produce more pointed, more actionable feedback. The persona has established a vantage point for the model to respond from. The model's underlying knowledge is unchanged.
Used for tone and framing, a persona gives the model a useful perspective. When used as a substitute for expertise, it tends to disappoint.
How to write a role prompt that works
A few things make the difference between a useful persona and a decorative one.
Specify the audience, not just the role. "You are a marketing consultant writing for a non-technical audience" gives the model more to work with than "you are a marketing expert." The audience shapes vocabulary and level of detail as much as the role does.
Match the persona to the output you actually want. A persona works best when it naturally implies the tone and priorities you are after. "You are a copy editor focused on concision" tells the model what to prioritise.
Avoid low-knowledge personas. Assigning roles such as "young child", "toddler", or "layperson" is consistently harmful to output quality. Research shows these labels reliably reduce benchmark accuracy across tasks. If you want simpler language, specify that in the output instructions rather than in the persona itself.
Consider asking the model to generate its own persona. Rather than handcrafting a role, you can ask the model to describe the kind of expert best suited to the task, then use that description as the persona. Research suggests LLM-generated personas tend to produce more stable results than those written by hand.
Avoid relying on a persona for factual questions. If you need accurate information, prompt clearly and verify the output independently. An expert label will not make the answer more reliable and may produce unwarranted confidence.
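The self-generated persona approach described above can be sketched as a two-step call. Here `ask` is a stand-in for whatever function sends a prompt to your model and returns its text reply; the helper name and prompt wording are illustrative assumptions, not a prescribed protocol:

```python
def self_generated_persona(ask, task: str) -> str:
    """Two-step role prompting: let the model write its own persona.

    `ask` is a stand-in for a call to your model that takes a prompt
    string and returns the response text.
    """
    # Step 1: ask the model to describe the expert best suited to the task.
    persona = ask(
        "Describe, in one or two sentences, the kind of expert best "
        f"suited to this task: {task}"
    )
    # Step 2: use that description as the persona for the real request.
    return ask(f"{persona}\n\n{task}")
```

The extra call costs one round trip, but it removes the guesswork of handcrafting a role.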
A note on mitigating the risks
For users who want the framing benefits of a persona without the risk of degraded reasoning, one approach worth knowing about is what researchers call the "Jekyll and Hyde" method. This involves running the same task twice: once with a role-playing prompt and once with a neutral prompt, then using a model to evaluate both outputs and select the better one. It adds a step, but it guards against the cases where a persona pulls the response in the wrong direction.
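As a sketch, the method amounts to three model calls: two candidate answers and one judgment. Again `ask` is a stand-in for a call to your model, and the function name and judge prompt are assumptions for illustration, not the paper's exact protocol:

```python
def jekyll_and_hyde(ask, task: str, persona: str) -> str:
    """Run the task with and without a persona, then let a judge pick.

    Guards against cases where the persona pulls the answer in the
    wrong direction. `ask` is a stand-in for a call to your model.
    """
    with_persona = ask(f"{persona}\n\n{task}")
    neutral = ask(task)
    # Judge call: compare the two candidates and name the better one.
    verdict = ask(
        "Two answers to the same task follow. Reply with exactly 'A' or 'B' "
        "to indicate the better answer.\n\n"
        f"Task: {task}\n\nAnswer A:\n{with_persona}\n\nAnswer B:\n{neutral}"
    )
    return with_persona if verdict.strip().upper().startswith("A") else neutral
```

The judge can be the same model or a different one; the point is that neither candidate is trusted unconditionally.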
Final thoughts
Role prompting is a useful technique when applied to the right kinds of tasks. For tone, style, framing, and audience-sensitive work, a well-chosen persona can make outputs more relevant. For reasoning and knowledge-heavy tasks, the evidence points the other way. Personas often have no effect, can actively degrade performance, and behave unpredictably enough that even targeted attempts to find the right persona tend to fail.
Use role prompting to define how the model should respond, not to conjure knowledge or reasoning ability it does not have. Avoid low-knowledge personas, consider letting the model generate its own role description, and for high-stakes reasoning tasks, treat any persona with scepticism.
Role prompting sits alongside zero-shot and few-shot prompting as part of a broader toolkit. If you are building from first principles, the article on the fundamentals of prompting is the right starting point. Few-shot prompting covers the next most practical technique for shaping AI outputs.
References
When "A Helpful Assistant" Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models (2023). https://arxiv.org/abs/2311.10054
Playing Pretend: Expert Personas Don't Improve Factual Accuracy (2025). Wharton Generative AI Lab. https://arxiv.org/abs/2512.05858
Persona is a Double-edged Sword: Enhancing the Zero-shot Reasoning by Ensembling the Role-playing and Neutral Prompts (2024). https://arxiv.org/abs/2408.08631