Examples
These examples show real interactions with the neuroflash MCP server. The LLM automatically selects the best interaction mode based on the server's instructions — you just ask your question naturally.
Chat with a Digital Twin
Traditional Mode fits here because the task is a clear, linear sequence: find the group, pick a twin, send a message. No branching logic or data transformation required.
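The linear sequence can be sketched as three straight tool calls. This is a minimal illustration, not the server's actual API: `call_tool` stands in for an MCP tool invocation, and every tool name and response field here is an invented assumption.

```python
def call_tool(name, args):
    """Stand-in for an MCP tool invocation; returns canned demo data."""
    fake = {
        "list_twin_groups": [{"id": "g1", "name": "Support"}],
        "list_twins": [{"id": "t7", "name": "Ava"}],
        "send_message": {"reply": "Hello! How can I help?"},
    }
    return fake[name]

# 1. Find the digital twin group.
group = call_tool("list_twin_groups", {})[0]

# 2. Pick a twin from that group.
twin = call_tool("list_twins", {"group_id": group["id"]})[0]

# 3. Send the user's message to the selected twin.
reply = call_tool("send_message", {"twin_id": twin["id"],
                                   "message": "Hi there!"})
```

Each step feeds a single ID into the next call; there is nothing for generated code to branch on, which is why Traditional Mode suffices.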
Import a Brand Voice from a URL
Traditional Mode is the right choice — this is a single import operation. The only setup step is looking up the workspace ID, then one tool call does the work.
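A minimal sketch of that shape: one lookup to resolve the workspace, then one import call. As above, `call_tool` and all tool names, parameters, and response fields are hypothetical stand-ins, not the server's real interface.

```python
def call_tool(name, args):
    """Stand-in for an MCP tool invocation; returns canned demo data."""
    fake = {
        "list_workspaces": [{"id": "ws_1", "name": "Marketing"}],
        "import_brand_voice": {"brand_voice_id": "bv_9", "status": "imported"},
    }
    return fake[name]

# Setup step: look up the workspace ID.
workspace = call_tool("list_workspaces", {})[0]

# One tool call does the actual work.
result = call_tool("import_brand_voice", {
    "workspace_id": workspace["id"],
    "url": "https://example.com/about",
})
```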
Generate a LinkedIn Post
Three sequential lookups with no conditional logic — Traditional Mode handles this cleanly without the overhead of code execution.
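The three-lookup shape might look like the sketch below. Which three resources are actually looked up is an assumption here (workspace, brand voice, text type), as are all tool names and fields; `call_tool` is a stub for the MCP invocation.

```python
def call_tool(name, args):
    """Stand-in for an MCP tool invocation; returns canned demo data."""
    fake = {
        "list_workspaces": [{"id": "ws_1"}],
        "list_brand_voices": [{"id": "bv_2"}],
        "list_text_types": [{"id": "linkedin_post"}],
        "generate_text": {"text": "Excited to share some news..."},
    }
    return fake[name]

# Three sequential lookups, no conditionals between them.
workspace = call_tool("list_workspaces", {})[0]
voice = call_tool("list_brand_voices", {"workspace_id": workspace["id"]})[0]
text_type = call_tool("list_text_types", {})[0]

# Final generation call using the three resolved IDs.
post = call_tool("generate_text", {
    "workspace_id": workspace["id"],
    "brand_voice_id": voice["id"],
    "text_type": text_type["id"],
    "topic": "Announcing our new product",
})
```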
Compare Output Across Three Brand Voices
Code Mode is the right call here — this task requires fetching multiple brand voices, running a generation call for each, collecting results, and then reasoning across all of them. That's exactly the kind of multi-step loop with data transformation that Code Mode is built for.
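The kind of script Code Mode might generate for this task is sketched below: fetch the voices once, loop a generation call per voice, and collect the drafts into one structure for side-by-side comparison. Tool names, voice names, and response shapes are all illustrative assumptions.

```python
def call_tool(name, args):
    """Stand-in for an MCP tool invocation; returns canned demo data."""
    if name == "list_brand_voices":
        return [{"id": f"bv_{i}", "name": n}
                for i, n in enumerate(["Formal", "Casual", "Playful"])]
    if name == "generate_text":
        return {"text": f"Draft written as {args['brand_voice_id']}"}
    raise KeyError(name)

# Fetch the brand voices, then run one generation per voice.
voices = call_tool("list_brand_voices", {"workspace_id": "ws_1"})[:3]
results = {}
for voice in voices:
    out = call_tool("generate_text", {
        "brand_voice_id": voice["id"],
        "topic": "Product launch announcement",
    })
    results[voice["name"]] = out["text"]

# `results` now holds one draft per voice, ready for comparison.
```

Collecting intermediate outputs into `results` before reasoning over them is the multi-step loop with data transformation that a plain sequence of tool calls handles poorly.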
Compare Usage Between Workspaces
Exploratory Mode suits open-ended comparison questions where the LLM needs to discover what data is available before fetching it. The discover → compare flow is ideal for pulling parallel data from multiple workspaces in one step.
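The discover → compare flow can be sketched as a discovery call followed by parallel per-workspace fetches. Everything here, the `discover` and `get_usage` tools, the workspace IDs, and the usage numbers, is an invented assumption used only to show the shape of the flow.

```python
def call_tool(name, args):
    """Stand-in for an MCP tool invocation; returns canned demo data."""
    if name == "discover":
        # Discovery step: learn what usage data exists and where.
        return {"workspaces": ["ws_a", "ws_b"]}
    if name == "get_usage":
        return {"ws_a": 1200, "ws_b": 800}[args["workspace_id"]]
    raise KeyError(name)

# Discover first, then pull the same metric from every workspace in one pass.
info = call_tool("discover", {})
usage = {ws: call_tool("get_usage", {"workspace_id": ws})
         for ws in info["workspaces"]}

# Compare across the collected results.
top = max(usage, key=usage.get)
```

Because the workspace list comes out of the discovery step rather than being known up front, the fetch loop cannot be written as a fixed sequence of calls, which is what makes the exploratory flow a better fit here.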