Chat Completions
Base URL: https://app.neuroflash.com/api/ds-prototypes
Generate text completions with AI models.
Chat Completions
POST
/chat/completions
Process chat completions with word analysis and provider routing.
Request body

| Field | Type | Required | Description |
|---|---|---|---|
| messages | array<object> | Yes | List of messages with "role" and "content" |
| model | string | Yes | Name of the model |
| temperature | object | No | Model temperature |
| max_tokens | object | No | Limit the number of tokens to generate |
| reasoning | object | No | Parameters for reasoning mode |
| reasoning_effort | object | No | Reasoning-mode effort level |
| tools | object | No | For tools and function calling |
| tool_choice | object | No | Influence tool selection |
| seed | object | No | Random seed |
| response_format | object | No | For structured outputs, to define a JSON schema |
| web_search_options | object | No | Additional options for web search |
| structured_outputs | object | No | |
| frequency_penalty | object | No | Penalty option |
| presence_penalty | object | No | Penalty option |
| repetition_penalty | object | No | Penalty option |
| stop | object | No | Customize the stop condition |
| stream | object | No | Enable streaming mode |

Example
Required header
All requests to /api/ds-prototypes require the x-workspace-id header in addition to
Authorization. If it is omitted, the API returns 400 Bad Request.
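As a small sketch (the helper name and placeholder values are illustrative, not part of this API), the two required headers can be built in one place so no request forgets x-workspace-id:

```python
def auth_headers(token: str, workspace_id: str) -> dict:
    """Headers every /api/ds-prototypes request needs; omitting
    x-workspace-id yields 400 Bad Request per the note above."""
    return {
        "Authorization": f"Bearer {token}",
        "x-workspace-id": workspace_id,
        "Content-Type": "application/json",
    }

# Placeholder credentials, as in the examples below.
headers = auth_headers("YOUR_ACCESS_TOKEN", "YOUR_WORKSPACE_ID")
```

Pass the resulting dict as the `headers` argument of whatever HTTP client you use.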
- cURL
- Python
- Node.js
- Go
curl -X POST "https://app.neuroflash.com/api/ds-prototypes/chat/completions" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "x-workspace-id: YOUR_WORKSPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "Write a tagline for an AI content platform." }
    ],
    "model": "openai/gpt-4.1-mini",
    "temperature": 0.7
  }'
import requests

response = requests.post(
    "https://app.neuroflash.com/api/ds-prototypes/chat/completions",
    headers={
        "Authorization": f"Bearer {token}",
        "x-workspace-id": workspace_id,
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "Write a tagline for an AI content platform."}
        ],
        "model": "openai/gpt-4.1-mini",
        "temperature": 0.7,
    },
).json()
const response = await fetch(
  "https://app.neuroflash.com/api/ds-prototypes/chat/completions",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "x-workspace-id": workspaceId,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      messages: [
        { role: "user", content: "Write a tagline for an AI content platform." }
      ],
      model: "openai/gpt-4.1-mini",
      temperature: 0.7,
    }),
  }
).then((r) => r.json());
body, _ := json.Marshal(map[string]any{
    "messages": []map[string]string{
        {"role": "user", "content": "Write a tagline for an AI content platform."},
    },
    "model":       "openai/gpt-4.1-mini",
    "temperature": 0.7,
})
req, _ := http.NewRequest("POST", "https://app.neuroflash.com/api/ds-prototypes/chat/completions", bytes.NewReader(body))
req.Header.Set("Authorization", "Bearer "+token)
req.Header.Set("x-workspace-id", workspaceID)
req.Header.Set("Content-Type", "application/json")
resp, _ := http.DefaultClient.Do(req)
defer resp.Body.Close()
Response:
{
  "id": "gen-1775000000-AbCdEfGh",
  "object": "chat.completion",
  "model": "openai/gpt-4.1-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Create smarter content, faster — powered by AI that knows your brand."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 12,
    "total_tokens": 26,
    "words_used": 9
  }
}
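The response body above can be unpacked in a few lines. This sketch assumes only the JSON shape shown (choices[0].message.content for the text, usage.words_used for the word count):

```python
import json

# Sample response body copied from the documentation above.
raw = '''{
  "id": "gen-1775000000-AbCdEfGh",
  "object": "chat.completion",
  "model": "openai/gpt-4.1-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Create smarter content, faster — powered by AI that knows your brand."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 12,
    "total_tokens": 26,
    "words_used": 9
  }
}'''

data = json.loads(raw)
text = data["choices"][0]["message"]["content"]  # the generated completion
words_used = data["usage"]["words_used"]         # word-analysis count reported by the API
print(words_used)  # -> 9
```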
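The request table lists a `stream` flag but this page does not document the wire format. Assuming the endpoint follows the common OpenAI-style server-sent-events convention (`data: {...}` lines with `delta` chunks, terminated by `data: [DONE]`), which is an assumption here, a client-side parsing loop could be sketched as:

```python
import json

def read_stream(lines):
    """Yield content deltas from SSE-style 'data: {...}' lines.
    The chunk shape (choices[0].delta.content, '[DONE]' sentinel)
    is an assumed OpenAI-compatible format, not confirmed by this page."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

# Simulated stream for illustration:
sample = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":" world"}}]}',
    "data: [DONE]",
]
print("".join(read_stream(sample)))  # -> Hello world
```

With a real request you would pass `"stream": true` in the body and feed the response lines (e.g. `requests.post(..., stream=True).iter_lines()`) into this loop.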