POST /api/ai/v1/text/beastmode
curl --request POST \
  --url https://{subdomain}.domo.com/api/ai/v1/text/beastmode \
  --header 'Content-Type: application/json' \
  --header 'X-DOMO-Developer-Token: <api-key>' \
  --data '{
    "input": "Count distinct products",
    "dataSourceSchema": {
      "dataSourceName": "Store Sales",
      "columns": [
        { "type": "STRING", "name": "product" },
        { "type": "LONG", "name": "store" },
        { "type": "LONG", "name": "amount" },
        { "type": "DATETIME", "name": "timestamp" },
        { "type": "STRING", "name": "region" }
      ]
    }
  }'
{
  "prompt": "# MYSQL\n# {\"dataSourceName\":\"Store_Sales\",\"columns\":[{\"name\":\"product\",\"type\":\"STRING\"},{\"name\":\"store\",\"type\":\"LONG\"},{\"name\":\"amount\",\"type\":\"LONG\"},{\"name\":\"timestamp\",\"type\":\"DATETIME\"},{\"name\":\"region\",\"type\":\"STRING\"}]}\n# Generate a query to answer the following:\n# Count distinct products",
  "output": "COUNT(DISTINCT `product`)",
  "modelId": "domo.domo_ai.domogpt-medium-v1.2:anthropic"
}
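The same request can be built in any HTTP client. A minimal sketch in Python, assuming the `requests` package and placeholder values for the instance subdomain and developer token:

```python
import json

# Hypothetical placeholders -- substitute your instance subdomain and developer token.
SUBDOMAIN = "example"
URL = f"https://{SUBDOMAIN}.domo.com/api/ai/v1/text/beastmode"
HEADERS = {
    "Content-Type": "application/json",
    "X-DOMO-Developer-Token": "<api-key>",
}

# Mirrors the curl example above: the input text plus the data source schema.
payload = {
    "input": "Count distinct products",
    "dataSourceSchema": {
        "dataSourceName": "Store Sales",
        "columns": [
            {"type": "STRING", "name": "product"},
            {"type": "LONG", "name": "store"},
            {"type": "LONG", "name": "amount"},
            {"type": "DATETIME", "name": "timestamp"},
            {"type": "STRING", "name": "region"},
        ],
    },
}

# Uncomment to send the request (requires the `requests` package):
# import requests
# response = requests.post(URL, headers=HEADERS, json=payload, timeout=30)
# print(response.json()["output"])

print(json.dumps(payload, indent=2))
```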


Authorizations

X-DOMO-Developer-Token
string
header
required

Body

application/json

Text-to-Beast-Mode AI Service request.

Prompt Templates

A prompt template is a string containing placeholders that are replaced with parameter values before the prompt is submitted to the model.

A default prompt template is set for each model configured for the Text-to-Beast-Mode AI Service. Individual requests can override the default template by including the promptTemplate parameter.

Prompt Template Parameters

The following request parameters are automatically injected into the prompt template if the associated placeholder is present:

  • input
  • system
  • dataSourceSchema

Models with built-in support for system prompts and chat message history do not need to include system or chatContext in the prompt template.

Additional parameters can be provided in the parameters map as key-value pairs.

Prompt Template Examples

  • "${input}"
  • "${system}\n${input}"
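Substitution happens server-side before the prompt reaches the model, but it behaves like ordinary `${...}` string templating. A local illustration in Python using `string.Template` (the system message and input here are made-up example values):

```python
from string import Template

# Hypothetical template mirroring the "${system}\n${input}" example above.
template = Template("${system}\n${input}")

# Placeholder names match the request parameters that get injected.
prompt = template.substitute(
    system="You generate Beast Mode calculations.",
    input="Count distinct products",
)

print(prompt)
```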
input
string
required

The input text.

sessionId
string<uuid>

The AI session ID. If provided, this request will be associated with the specified AI Session.

dataSourceSchema
object

The data source schema and metadata to be included in the Text-to-Beast-Mode task prompt to generate a SQL Calculation.

promptTemplate
object

The prompt template to use for the Text-to-Beast-Mode task. The default prompt template will be used if not provided.

parameters
object

Custom parameters to inject into the prompt template if an associated placeholder is present.

model
string

The ID of the model to use for Text-to-Beast-Mode. The specified model must be configured for the Text-to-Beast-Mode AI Service by an Admin.

modelConfiguration
object

Additional model-specific configuration parameters as key-value pairs, e.g. temperature, max_tokens.

system
string

The system message to use for the Text-to-Beast-Mode task. If not provided, the default system message will be used. If the model does not include built-in support for system prompts, this parameter may be included in the prompt template using the "${system}" placeholder.

temperature
number<double>

Controls randomness in the model's output. Lower values make output more deterministic.

maxTokens
integer<int32>

The maximum number of tokens to generate in the response.

disableValidation
boolean

Whether to disable validation of the generated Beast Mode calculation.

reasoningConfig
object

Configuration for reasoning behavior and effort level.

Response

TextAIResponse

The generated calculation and model token usage information.

Response from a text AI Service.

prompt
string

The formatted prompt that was used to generate the response.

choices
object[]
deprecated

The list of choices generated by the model.

modelId
string

The ID of the model used to generate the response.

sessionId
string<uuid>

The ID of the AI Session associated with this request.

output
string

The output of the model.

modelProviderUsage
object

The token usage from the model provider.
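Putting the response fields together: a minimal sketch of reading a TextAIResponse body in Python. The sample dict mirrors the response example at the top of this page; treating sessionId as optional reflects the field descriptions above:

```python
# Sample response body, mirroring the example at the top of the page.
response_body = {
    "prompt": "# MYSQL\n# ...\n# Count distinct products",
    "output": "COUNT(DISTINCT `product`)",
    "modelId": "domo.domo_ai.domogpt-medium-v1.2:anthropic",
}

# "output" holds the generated Beast Mode calculation.
calculation = response_body["output"]
model_id = response_body["modelId"]

# sessionId and modelProviderUsage may be absent; use .get() to tolerate that.
session_id = response_body.get("sessionId")

print(f"{model_id}: {calculation}")
```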