POST /api/ai/v1/messages/chat
Chat Messages
curl --request POST \
  --url https://{subdomain}.domo.com/api/ai/v1/messages/chat \
  --header 'Content-Type: application/json' \
  --header 'X-DOMO-Developer-Token: <api-key>' \
  --data '
{
  "input": [
    {
      "role": "USER",
      "content": [
        {
          "type": "TEXT",
          "text": "Why is the sky blue?"
        }
      ]
    }
  ]
}
'
{
  "content": [
    {
      "type": "TEXT",
      "text": "The sky appears blue due to a phenomenon called Rayleigh scattering."
    }
  ],
  "modelId": "domo.domo_ai.domogpt-medium-v1.2:anthropic",
  "isCustomerModel": false,
  "sessionId": "e1f6a485-7fb6-4f71-b41c-37d6cb5f6bd3",
  "requestId": "df2256fd-d133-4ea1-b958-295be09be7c1",
  "stopReason": "END_TURN"
}
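The same request can be issued from Python. Below is a minimal sketch that mirrors the curl example; the subdomain and developer token are placeholders, and the actual POST (which needs the third-party `requests` package and network access) is shown commented out:

```python
import json

# Placeholder values -- substitute your instance subdomain and developer token.
DOMO_SUBDOMAIN = "your-subdomain"
DEVELOPER_TOKEN = "<api-key>"

URL = f"https://{DOMO_SUBDOMAIN}.domo.com/api/ai/v1/messages/chat"


def build_chat_payload(text: str) -> dict:
    """Build the minimal request body shown in the curl example above."""
    return {
        "input": [
            {
                "role": "USER",
                "content": [{"type": "TEXT", "text": text}],
            }
        ]
    }


payload = build_chat_payload("Why is the sky blue?")
print(json.dumps(payload, indent=2))

# Sending the request (uncomment with `requests` installed):
# import requests
# resp = requests.post(
#     URL,
#     headers={
#         "Content-Type": "application/json",
#         "X-DOMO-Developer-Token": DEVELOPER_TOKEN,
#     },
#     json=payload,
# )
# answer = resp.json()["content"][0]["text"]
```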


Authorizations

X-DOMO-Developer-Token
string, header, required

Body

application/json

Request for interacting with the chat message AI service.

input
object[]

The list of input messages to be processed by the AI.

sessionId
string<uuid>

The unique identifier for the AI session associated with this request.

system
object[]

System-level messages or configurations to guide the AI's response.

model
string

The identifier of the AI model to be used for generating a response.

modelConfiguration
object

Specific parameters or settings that configure the AI model behavior.

temperature
number<double>

A parameter for controlling the randomness of the model's output.

maxTokens
integer<int32>

The maximum number of tokens to generate in the response.

responseFormat
object

Model response format specification for structured outputs.

reasoningConfig
object

Configuration for reasoning behavior and effort level.
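Putting the optional body fields together, a fuller request might look like the sketch below. The field names follow the schema above, but the `system` entry shape and the `modelConfiguration` values are assumptions for illustration; the model id and session id are taken from the examples on this page:

```python
# A fuller request body exercising the optional fields described above.
# The system-message shape and the specific configuration values are
# illustrative assumptions -- verify them against your instance.
payload = {
    "input": [
        {
            "role": "USER",
            "content": [{"type": "TEXT", "text": "Summarize Rayleigh scattering."}],
        }
    ],
    # Continue an existing AI session (UUID from the response example).
    "sessionId": "e1f6a485-7fb6-4f71-b41c-37d6cb5f6bd3",
    # System-level guidance; assumed to use TEXT content blocks.
    "system": [{"type": "TEXT", "text": "Answer in two sentences."}],
    # Model id from the response example.
    "model": "domo.domo_ai.domogpt-medium-v1.2:anthropic",
    "modelConfiguration": {
        "temperature": 0.2,  # number<double>: lower = less random output
        "maxTokens": 256,    # integer<int32>: cap on generated tokens
    },
}
```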

Response

Successful chat messages response.

Response from the Messages API.

content
object[]

The list of content generated by the model.

Text-based message content.

modelId
string

The id of the model used to generate the response.

sessionId
string<uuid>

The id of the AI Session associated with this request.

stopReason
enum<string>

The reason that the model stopped.

Available options:
TOOL_USE,
MAX_TOKENS,
STOP_SEQUENCE,
END_TURN,
CONTENT_FILTERED,
SAFETY,
UNKNOWN
modelProviderUsage
object

The token usage from the model provider.
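When consuming the response, it is worth branching on `stopReason` before trusting the text: a MAX_TOKENS stop means the output was cut off, and CONTENT_FILTERED or SAFETY means it was withheld. A sketch of a defensive extractor (the truncation marker is our own convention, not part of the API):

```python
def extract_text(response: dict) -> str:
    """Join TEXT content blocks from a Messages API response,
    checking stopReason for truncated or filtered output."""
    reason = response.get("stopReason", "UNKNOWN")
    if reason in ("CONTENT_FILTERED", "SAFETY"):
        raise ValueError(f"Response withheld by the model: {reason}")

    text = "".join(
        block["text"]
        for block in response.get("content", [])
        if block.get("type") == "TEXT"
    )
    if reason == "MAX_TOKENS":
        # Our own marker -- the API just reports the stop reason.
        text += " [truncated: maxTokens reached]"
    return text


sample = {
    "content": [{"type": "TEXT", "text": "The sky appears blue."}],
    "stopReason": "END_TURN",
}
print(extract_text(sample))
```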