Replies: 1 comment
(This is just my personal opinion and intuition.) I think most clients would consider it a security issue to allow Prompts (which can change on the fly) to inject system-level context. Server instructions are also system-level in many clients, but they are static and part of the capability negotiation / init handshake. If prompts had a system role, that would be a very powerful vector for attacks.

I'm also not sure I agree that system-level instructions belong in workflows shared in the open MCP ecosystem. Yes, instructions given in system-level context are strong, but with modern models you generally don't need that. And if you want an MCP server to provide personas, I think it is better to do that via resources and let users configure them manually. One can argue that connecting to an MCP server is an inherent act of trust, and that is true, but I think many client developers would consider this beyond what an MCP server should be able to do. Consider that many clients don't actually allow users to freely change system instructions.

I wouldn't necessarily be hard against adding a System role for Prompts, but I honestly don't think it is needed, and I think the downsides outweigh the benefits, because I can't see a realistic scenario where they make sense. Prompts are barely used in the first place in the open ecosystem. I don't think this would change that - but it would add a new security risk. If you're in a closed system (controlling both client and server), you can just use _meta or resources to achieve the same thing. Sampling is different because it is a stand-alone message generation - it doesn't have the ability to interact with the user's context or device.
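The resource-based alternative suggested above could look roughly like this - a minimal sketch in TypeScript, where the persona:// URI scheme and the readPersonaResource helper are hypothetical illustrations, not part of any SDK. The point is that the server only publishes the persona text; the client (and ultimately the user) decides whether it ever reaches system-level context.

```typescript
// Hypothetical sketch: a persona delivered as a readable resource the user
// opts into, rather than injected via a system-role prompt message.
interface ResourceContents {
  uri: string;
  mimeType: string;
  text: string;
}

// The server exposes the persona under a custom URI scheme. Reading it has
// no side effects - the client chooses where (or whether) to use the text.
function readPersonaResource(uri: string): ResourceContents {
  if (uri !== "persona://technical-validator") {
    throw new Error(`unknown resource: ${uri}`);
  }
  return {
    uri,
    mimeType: "text/plain",
    text: "Act as a technical validator. Verify claims before agreeing.",
  };
}

const persona = readPersonaResource("persona://technical-validator");
console.log(persona.mimeType); // "text/plain"
```

This keeps the trust decision on the client side: the same text that a system-role prompt would silently inject becomes something the user can inspect and manually wire into their configuration.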
Your Question
Hi MCP Team,
The Observation:
The Sampling spec provides a dedicated systemPrompt field, which allows the server to explicitly define the persona/rules for the LLM. PromptMessage, by contrast, restricts role to user or assistant.
The Problem:
When a server exposes a complex workflow via a Prompt, it often needs to set a specific instruction manual or persona (e.g., "Act as a technical validator"). Since there is no system role in the PromptMessage, servers are forced to fall back on workarounds. None of these workarounds is as robust as the system role supported by most modern LLM APIs and by the MCP Sampling spec itself.
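The asymmetry described above can be sketched with simplified TypeScript shapes. The field names follow the spec types mentioned in this question (PromptMessage, the role union, systemPrompt), but these are stripped-down illustrations, not the full schema:

```typescript
// Simplified shapes mirroring the spec types under discussion.
type Role = "user" | "assistant"; // no "system" variant in PromptMessage

interface PromptMessage {
  role: Role;
  content: { type: "text"; text: string };
}

interface SamplingParams {
  messages: PromptMessage[];
  systemPrompt?: string; // sampling gets a dedicated field instead of a role
}

// A prompt's persona has to ride along as a user-role message today:
const workflowPrompt: PromptMessage[] = [
  { role: "user", content: { type: "text", text: "Act as a technical validator." } },
];

// ...whereas a sampling request can state the persona explicitly:
const samplingRequest: SamplingParams = {
  systemPrompt: "Act as a technical validator.",
  messages: [{ role: "user", content: { type: "text", text: "Check this diff." } }],
};
```

Because the Role union has no "system" member, a line like `{ role: "system", ... }` would be rejected at compile time in the prompt case, while the equivalent instruction is a first-class field in the sampling case.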
Questions: