SmartSuite Script Engine lets you run custom JavaScript in a managed serverless environment so you can automate data syncs, process records in bulk, call external APIs, schedule recurring jobs, and power advanced workflows without managing infrastructure.
This guide focuses on the practical how-to of using Script Engine:
create and upload scripts
configure secrets securely
test scripts manually
schedule recurring execution
invoke scripts through the API
review run history and logs
use the most relevant endpoints effectively
What Script Engine is best for
Script Engine is especially useful when you need more control than a simple automation step can provide.
Common use cases include:
batched imports
external API sync jobs
scheduled reconciliation processes
webhook-driven processing
bulk record updates
background jobs with retry or monitoring logic
The platform is built around a small set of core resources:
Scripts for storing code and configuration
Executions for running a script now
Runs for tracking execution history
Schedules configured on the script itself
Runtimes for discovering supported runtime environments and installed libraries
Before you begin
Every request to the Script Management API requires two headers:
Authorization: ApiKey YOUR_API_KEY
Account-Id: YOUR_WORKSPACE_ID
Scripts are scoped to a workspace, and the API uses that Account-Id header to isolate all operations.
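For illustration, the two required headers can be centralized in a small helper so every call attaches them consistently. BASE_URL and the credential values below are placeholders, and apiHeaders is a hypothetical helper, not part of the API:

```javascript
// Placeholder base URL for the Script Management API.
const BASE_URL = "https://your-api.example.com/v1/scripting";

// Build the headers every Script Management API request needs,
// optionally merged with extras such as Content-Type.
function apiHeaders(apiKey, workspaceId, extra = {}) {
  return {
    Authorization: `ApiKey ${apiKey}`,
    "Account-Id": workspaceId,
    ...extra,
  };
}
```

A client would then call, for example, fetch(`${BASE_URL}/scripts`, { headers: apiHeaders(key, workspaceId) }).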
At this time, the only supported runtime is:
nodejs20
NOTE: python3.12 is planned later, but it is not currently accepted by the API.
How Script Engine works
The standard Script Engine workflow looks like this:
Write a JavaScript handler
Base64-encode the script source
Generate a SHA-256 hash of the raw source
Create the script with POST /scripts
Execute it with POST /scripts/{scriptId}/execute
Optionally set a recurring schedule on the script
Review run history with GET /scripts/{scriptId}/runs
Download logs for debugging when needed
The API base path is /v1/scripting/.
Script Engine API Reference
See Swagger documentation on GitHub.
Step 1: Write your script
Your script must export the handler function that matches the configured entry_point. The default entry point is handler. Supported export styles include:
exports.handler = async (payload, context) => {
return {
message: "hello", input: payload
};
};
or
module.exports.handler = async (payload, context) => {
return {
message: "hello",
input: payload
};
};
If the configured entry point is not exported, the script is rejected during validation.
Handler arguments
Your script receives two arguments:
payload
This is the JSON object sent in the execute request.
context
This includes execution metadata and helper methods, such as:
context.secrets
context.runId
context.workspaceId
context.scriptUuid
context.getRemainingTimeMs()
context.isTimingOut()
context.writeArtifact(name, data)
That makes it possible to build scripts that are aware of remaining runtime, can safely use secrets, and can write artifacts for later inspection.
Example: minimal handler
exports.handler = async (payload, context) => {
const name = payload.name || "World";
return {
message: `Hello, ${name}!`,
runId: context.runId,
workspaceId: context.workspaceId
};
};
Step 2: Encode and hash the script
When you create or update a script, the API expects:
script_content as base64-encoded script source
script_hash as SHA-256 of the raw script source
Example in Bash
SCRIPT_RAW='exports.handler = async (payload) => { return { message: "hello" }; };'
CONTENT=$(echo -n "$SCRIPT_RAW" | base64 | tr -d '\n')  # tr strips line wraps some base64 implementations insert
HASH=$(echo -n "$SCRIPT_RAW" | shasum -a 256 | cut -d' ' -f1)
Important
Hash the raw JavaScript source, not the base64 value. The API decodes the base64 first, then verifies the hash against the decoded content.
Step 3: Create a script
Use POST /scripts to create a new script. This is the main endpoint for initial deployment.
Most important create fields
The ScriptCreateRequest supports these key fields:
id
Human-readable script ID, unique per workspace
display_name
Friendly name shown to users
description
Optional description
runtime
Currently nodejs20
script_content
Base64-encoded JavaScript source
script_hash
SHA-256 hash of the raw source
entry_point
Name of the exported handler function. The schema default is exports.handler, but this guide configures and exports handler; in practice, your script must export the exact function name you configure here.
memory_mb
128 to 1024, default 256
timeout_seconds
5 to 900, default 30
schedule
Optional preset or cron/rate expression
secrets
Key-value secret map
tags
Optional metadata tags
Script ID rules
Script IDs must:
be 3 to 63 characters
use lowercase letters, numbers, and hyphens only
start and end with a letter or number
match
^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$
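A quick client-side check against these rules can catch a bad ID before the API returns a 400; the pattern below is the one from the spec, and isValidScriptId is a hypothetical helper:

```javascript
// The script ID pattern documented by the API spec.
const SCRIPT_ID_PATTERN = /^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$/;

// Returns true when the ID satisfies the documented length and character rules.
const isValidScriptId = (id) => SCRIPT_ID_PATTERN.test(id);
```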
Create example
curl -X POST https://your-api.example.com/v1/scripting/scripts \
-H "Authorization: ApiKey YOUR_API_KEY" \
-H "Account-Id: YOUR_WORKSPACE_ID" \
-H "Content-Type: application/json" \
-d '{
"id": "my-script",
"display_name": "My First Script",
"description": "Simple test script",
"runtime": "nodejs20",
"script_content": "'$CONTENT'",
"script_hash": "'$HASH'",
"entry_point": "handler",
"memory_mb": 256,
"timeout_seconds": 30
}'
A successful create returns 201 Created with the full script resource, including metadata such as uuid, script_version, status, timestamps, and secret references.
Manage secrets securely
Many Script Engine jobs need credentials for external APIs, SmartSuite SDK access, webhook endpoints, or database connections. Script Engine supports secrets directly as part of the script definition.
How secrets work
When creating or updating a script, you can provide a secrets object:
{
"secrets": {
"API_KEY": "sk-abc123",
"DB_PASSWORD": "super-secret-password"
}
}
Raw secret values are automatically uploaded to AWS Secrets Manager. When the script is later retrieved, the API returns secret references instead of the original values. Those references use secret://... URIs.
on create, raw credentials are auto-uploaded to Secrets Manager
on returned script resources, secrets is a map of secret references (secret:// URIs)
Accessing secrets in the script
Secrets are exposed at runtime through context.secrets:
const axios = require('axios');
exports.handler = async (payload, context) => {
const apiKey = context.secrets.API_KEY;
if (!apiKey) {
throw new Error("Missing required secret: API_KEY");
}
const response = await axios.get("https://api.example.com/data", {
headers: {
Authorization: `Bearer ${apiKey}`
}
});
return {
count: Array.isArray(response.data) ? response.data.length : null
};
};
Best practices for secrets
Do not hardcode credentials in your script source
Read secrets from context.secrets
Validate required secrets at the start of execution
Never print secret values in logs or return payloads
Use separate secrets for development, staging, and production
Rotate secrets carefully and test after rotation
Updating secrets without losing existing ones
The developer guide is explicit here: when you update a script, include all secrets you want to keep. Keys omitted from the update are removed. If you only want to rotate one secret, include the existing secret:// references for the secrets that should remain unchanged.
Example:
{
"secrets": {
"SMARTSUITE_API_KEY": "secret://aws-secrets-manager/scripts/ws-123/my-script",
"EXTERNAL_API_KEY": "new-rotated-api-key",
"TARGET_TABLE_ID": "secret://aws-secrets-manager/scripts/ws-123/my-script"
}
}
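One way to make this merge explicit in client code is to fetch the script first and overlay only the rotated values; buildSecretsUpdate here is a hypothetical helper, not part of the API:

```javascript
// existingRefs: the secrets map returned by GET /scripts/{scriptId}
// (secret:// references). rotated: new raw values for keys being rotated.
// Any key absent from the merged map would be DELETED by the update,
// so everything you want to keep must appear in the result.
function buildSecretsUpdate(existingRefs, rotated) {
  return { secrets: { ...existingRefs, ...rotated } };
}
```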
Removing all secrets
To clear secrets, send an empty object:
{
"secrets": {}
}
Step 4: Execute a script manually
Use POST /scripts/{scriptId}/execute to run a script on demand. This endpoint supports both synchronous and asynchronous execution.
Execute request fields
The execute request supports:
mode
sync or async
payload
Arbitrary JSON object passed to the handler
trigger_type
http, scheduled, or manual
caller_ip
Optional caller IP address
Synchronous execution
Use mode: "sync" when you want the API call to wait for completion and return the result inline. This is a good fit for interactive workflows or when another process needs the script’s output immediately. The developer guide also calls this ideal for SmartSuite Automations that need to use the script’s return value in later steps.
Example
curl -X POST https://your-api.example.com/v1/scripting/scripts/my-script/execute \
  -H "Authorization: ApiKey YOUR_API_KEY" \
  -H "Account-Id: YOUR_WORKSPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "sync",
    "trigger_type": "manual",
    "caller_ip": "127.0.0.1",
    "payload": { "name": "SmartSuite" }
  }'
A synchronous result returns 200 and includes:
run_id
status
result
duration
error if execution failed
Asynchronous execution
Use mode: "async" when the script should run in the background. The API immediately returns 202 Accepted with a run_id and status: "pending". You then poll the run endpoint to track progress.
Example
curl -X POST https://your-api.example.com/v1/scripting/scripts/my-script/execute \
-H "Authorization: ApiKey YOUR_API_KEY" \
-H "Account-Id: YOUR_WORKSPACE_ID" \
-H "Content-Type: application/json" \
-d '{
"mode": "async",
"trigger_type": "manual",
"caller_ip": "127.0.0.1",
"payload": {
"batchSize": 50
}
}'
Example async response
{
"run_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "pending"
}
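After the 202, a client typically polls the run endpoint (covered in Step 6) until the run reaches a terminal status. In this sketch, the terminal status names are an assumption, and getRun stands for any function that fetches GET /scripts/{scriptId}/runs/{runId} and returns the parsed JSON:

```javascript
// Poll until the run reaches a terminal status or the attempt budget runs out.
// Terminal statuses ("completed", "failed", "timeout") are assumed names.
async function pollRun(getRun, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  const terminal = new Set(["completed", "failed", "timeout"]);
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const run = await getRun();
    if (terminal.has(run.status)) return run;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Run did not reach a terminal status within the polling window");
}
```

In practice, getRun would wrap a fetch call carrying the Authorization and Account-Id headers described earlier.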
Step 5: Schedule recurring execution
Scheduling is configured on the script itself using the schedule field in the create or update request. There is no separate public schedules resource in the current API spec. Instead, you set, change, or remove the schedule through POST /scripts or PUT /scripts/{scriptId}.
Supported schedule formats
These presets are supported:
hourly
daily
weekly
It also supports EventBridge-compatible expressions such as:
rate(6 hours)
cron(0 12 * * ? *)
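The exact server-side grammar is not published here, so treat the following as a rough pre-flight check that mirrors the formats above (plus the empty string, which removes an existing schedule); the rate() unit list is an assumption:

```javascript
// Rough client-side check of schedule values before create/update.
function isValidSchedule(value) {
  if (value === "") return true; // empty string removes an existing schedule
  if (["hourly", "daily", "weekly"].includes(value)) return true;
  // EventBridge-style expressions; the accepted units here are assumed.
  return /^rate\(\d+ (minute|minutes|hour|hours|day|days)\)$/.test(value)
    || /^cron\([^)]+\)$/.test(value);
}
```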
Create a scheduled script
curl -X POST https://your-api.example.com/v1/scripting/scripts \
-H "Authorization: ApiKey YOUR_API_KEY" \
-H "Account-Id: YOUR_WORKSPACE_ID" \
-H "Content-Type: application/json" \
-d '{
"id": "daily-sync",
"display_name": "Daily Sync",
"runtime": "nodejs20",
"script_content": "'$CONTENT'",
"script_hash": "'$HASH'",
"schedule": "daily",
"timeout_seconds": 600,
"memory_mb": 512
}'
Update a schedule
Use PUT /scripts/{scriptId}:
{
"schedule": "rate(6 hours)"
}
Remove a schedule
The developer guide specifies that removing a schedule is done by sending an empty string:
{
"schedule": ""
}
That is an important practical detail because omitting the field leaves it unchanged.
Trigger type for scheduled runs
Run records identify how a script was triggered using trigger_type, which can be:
http
scheduled
manual
That makes it easy to distinguish user-run, API-driven, and scheduled executions in run history.
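With trigger_type on every run record, summarizing run history is a simple reduce; the run objects here are stand-ins for the run records described in Step 6, and countByTrigger is a hypothetical helper:

```javascript
// Tally runs by how they were triggered (http, scheduled, manual).
function countByTrigger(runs) {
  return runs.reduce((acc, run) => {
    acc[run.trigger_type] = (acc[run.trigger_type] || 0) + 1;
    return acc;
  }, {});
}
```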
Step 6: Monitor runs and troubleshoot issues
Script Engine provides dedicated run history endpoints.
List runs for a script
Use:
GET /scripts/{scriptId}/runs
This returns paginated run history for the script. Each run includes fields such as:
id
trigger_type
execution_mode
status
script_version
started_at
completed_at
duration_ms
error
result_summary
logs_s3_key
output_s3_key
Get a specific run
Use:
GET /scripts/{scriptId}/runs/{runId}
This is the endpoint to poll after an async execution and to inspect the final status.
Get run logs
Use:
GET /scripts/{scriptId}/runs/{runId}/logs
This endpoint returns a time-limited presigned S3 URL for the log file. The OpenAPI spec notes that:
the URL expires after 15 minutes
timed-out runs may trigger on-demand log consolidation from chunk files
the endpoint returns
404if no logs are available
Example
curl https://your-api.example.com/v1/scripting/scripts/my-script/runs/550e8400-e29b-41d4-a716-446655440000/logs \
  -H "Authorization: ApiKey YOUR_API_KEY" \
  -H "Account-Id: YOUR_WORKSPACE_ID"
Example response
{
"url": "https://presigned-s3-url.example.com/..."
}
Step 7: Update, list, and delete scripts
These are the main lifecycle endpoints for script management.
List scripts
GET /scripts
This returns paginated results using cursor-based pagination and supports:
cursor
page_size with default 20 and max 100
Get a script
GET /scripts/{scriptId}
Use this to review the script configuration, current version, runtime, schedule, tags, and secret references. Raw secret values are never returned.
Update a script
PUT /scripts/{scriptId}
All fields are optional. Only the fields you provide are updated. Common update scenarios include:
replacing script content
changing timeout or memory
updating secrets
setting or removing a schedule
changing tags or description
Delete a script
DELETE /scripts/{scriptId}
This returns 204 when successful.
Discover runtimes and installed libraries
Script Engine also exposes endpoints to inspect runtime support.
List available runtimes
GET /runtimes
Returns available runtimes and their libraries.
List libraries for a runtime
GET /runtimes/{runtime}/libraries
The developer guide lists these pre-installed libraries for nodejs20:
smartsuite-sdk
axios
node-fetch
lodash
dayjs
uuid
Example
curl https://your-api.example.com/v1/scripting/runtimes/nodejs20/libraries \
-H "Authorization: ApiKey YOUR_API_KEY" \
-H "Account-Id: YOUR_WORKSPACE_ID"
This is useful when building scripts that rely on already-installed packages.
Practical examples and patterns
Example 1: Hello World
exports.handler = async (payload) => {
const name = payload.name || "World";
return {
message: `Hello, ${name}!`,
timestamp: new Date().toISOString()
};
};
This is the simplest possible script and a good first deployment pattern.
Example 2: External API call using secrets
const axios = require('axios');
exports.handler = async (payload, context) => {
const apiKey = context.secrets.WEATHER_API_KEY;
if (!apiKey) {
return { error: "WEATHER_API_KEY secret not configured" };
}
const city = payload.city || "New York";
const url = `https://api.weatherapi.com/v1/current.json?key=${apiKey}&q=${encodeURIComponent(city)}`;
const response = await axios.get(url);
return {
city: response.data.location.name,
temp_f: response.data.current.temp_f,
condition: response.data.current.condition.text
};
};
This pattern is ideal when the script must call a third-party service securely.
Example 3: Scheduled daily sync
const axios = require('axios');
exports.handler = async (payload, context) => {
  const apiKey = context.secrets.SOURCE_API_KEY;
  if (!apiKey) {
    throw new Error("Missing required secret: SOURCE_API_KEY");
  }
  const pages = payload.pages || 5;
  const synced = [];
  for (let page = 1; page <= pages; page++) {
    // Stop cleanly if the run is close to its configured timeout.
    if (context.isTimingOut()) {
      return { partial: true, syncedPages: synced.length, runId: context.runId };
    }
    const response = await axios.get("https://api.example.com/records", {
      params: { page },
      headers: { Authorization: `Bearer ${apiKey}` }
    });
    synced.push(response.data);
  }
  return { partial: false, syncedPages: synced.length, runId: context.runId };
};
Deployed with "schedule": "daily", this handler runs once per day. The source URL and secret name are placeholders. This pattern shows several best practices together:
secrets via context.secrets
scheduled execution configured through the schedule field
graceful handling when approaching timeout
Example 4: Batched outbound import
const SOURCE_URL = "https://jsonplaceholder.typicode.com/posts?_limit=50";
const DESTINATION_URL = "https://httpbin.org/post";
const MAX_BULK = 25;
const ACCOUNT_ID = "spyv9knb"; // Sample 8-character Account-Id
function safeParseJson(text) {
try {
return JSON.parse(text);
} catch {
return text;
}
}
function chunkArray(items, size) {
const chunks = [];
for (let i = 0; i < items.length; i += size) {
chunks.push(items.slice(i, i + size));
}
return chunks;
}
async function fetchSourceItems() {
const response = await fetch(SOURCE_URL, {
method: "GET",
headers: {
Accept: "application/json",
},
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(
`Source fetch failed with status ${response.status}: ${errorText || "No response body"}`
);
}
const posts = await response.json();
return posts.map((post, index) => ({
title: String(post.title || "").slice(0, 120) || `Post ${index + 1}`,
body: post.body || "",
}));
}
async function postBatch(items, token) {
const response = await fetch(DESTINATION_URL, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "application/json",
Authorization: `Token ${token}`,
"Account-Id": ACCOUNT_ID,
},
body: JSON.stringify({ items }),
});
const responseText = await response.text();
const parsedResponse = safeParseJson(responseText);
return {
ok: response.ok,
status: response.status,
itemCount: items.length,
response: parsedResponse,
};
}
exports.handler = async (payload, context) => {
const token = context?.secrets?.DESTINATION_TOKEN;
if (!token) {
throw new Error(
"Missing required secret: DESTINATION_TOKEN"
);
}
const maxBulk =
Number.isInteger(payload?.maxBulk) && payload.maxBulk > 0
? payload.maxBulk
: MAX_BULK;
const items = await fetchSourceItems();
const batches = chunkArray(items, maxBulk);
const results = [];
let successCount = 0;
let failureCount = 0;
for (let i = 0; i < batches.length; i++) {
const batchNumber = i + 1;
const batch = batches[i];
const result = await postBatch(batch, token);
results.push({
batchNumber,
...result,
});
if (result.ok) {
successCount++;
} else {
failureCount++;
}
if (context?.isTimingOut?.()) {
return {
partial: true,
message: "Execution stopped early because the script is approaching timeout.",
totalItems: items.length,
processedBatches: results.length,
remainingBatches: batches.length - results.length,
successfulBatches: successCount,
failedBatches: failureCount,
results,
};
}
}
return {
partial: false,
totalItems: items.length,
batchSize: maxBulk,
totalBatches: batches.length,
successfulBatches: successCount,
failedBatches: failureCount,
results,
};
};
This sample is a practical example of using Script Engine to:
fetch rows from a source API
transform them
split them into chunks
send them to a destination endpoint
collect a result summary
That is one of the most useful real-world patterns for Script Engine because it supports:
imports
bulk push integrations
partner API synchronization
retryable batch workflows
Resource limits and validation rules
The platform enforces these important constraints:
Size limit
maximum raw script size: 5 MB
because base64 adds overhead, it is recommended to keep scripts under about 4 MB in practice
Resource limits
memory_mb: 128 to 1024
timeout_seconds: 5 to 900
Blocked patterns
You should be aware of patterns that are rejected during validation, including:
eval()
new Function()
require('child_process')
require('fs')
require('net')
process.exit()
VM and process-control related modules and prototype-chain escape patterns
These rules help keep the environment safe and predictable.
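Server-side validation is authoritative, but a naive local scan for the listed patterns can fail fast before upload; the regexes below are illustrative only, not the validator's real rules:

```javascript
// Illustrative pre-upload lint for the documented blocked patterns.
const BLOCKED = [
  /\beval\s*\(/,
  /new\s+Function\s*\(/,
  /require\(\s*['"]child_process['"]\s*\)/,
  /require\(\s*['"]fs['"]\s*\)/,
  /require\(\s*['"]net['"]\s*\)/,
  /process\.exit\s*\(/,
];

// Returns the source strings of any blocked patterns found in the script.
function findBlockedPatterns(source) {
  return BLOCKED.filter((re) => re.test(source)).map((re) => re.source);
}
```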
Errors you should expect
The API uses a consistent error envelope:
{
"error": {
"code": "VALIDATION_FAILED",
"message": "description of what went wrong",
"details": []
}
}
Common errors include:
400 VALIDATION_FAILED
404 NOT_FOUND
409 CONFLICT
422 BUSINESS_RULE_VIOLATION
503 WORKSPACE_NOT_READY
A few especially important cases:
duplicate script ID returns 409
trying to execute a non-active script may return 422
if the workspace runtime is not ready, execution may return 503
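Because the envelope is consistent, client code can normalize any non-2xx response into a typed error; toError is a hypothetical helper built on the envelope shape shown above:

```javascript
// Convert a { error: { code, message, details } } body into an Error with
// the code and details attached for programmatic handling.
function toError(status, body) {
  const code = (body && body.error && body.error.code) || "UNKNOWN";
  const message = (body && body.error && body.error.message) || "Unexpected error";
  const err = new Error(`${status} ${code}: ${message}`);
  err.code = code;
  err.details = (body && body.error && body.error.details) || [];
  return err;
}
```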
Summary
For most customers, the most important Script Engine endpoints are:
POST /scripts to create a script
GET /scripts and GET /scripts/{scriptId} to inspect scripts
PUT /scripts/{scriptId} to update content, schedule, or secrets
DELETE /scripts/{scriptId} to remove a script
POST /scripts/{scriptId}/execute to run now
GET /scripts/{scriptId}/runs to view run history
GET /scripts/{scriptId}/runs/{runId} to inspect a run
GET /scripts/{scriptId}/runs/{runId}/logs to download logs
GET /runtimes and GET /runtimes/{runtime}/libraries to discover runtime capabilities
The most effective way to use Script Engine is to:
write a reusable handler
upload it with a correct base64 payload and raw-source hash
configure secrets in the script definition
access those secrets through
context.secrets
test with synchronous execution
move background jobs to async execution or scheduled runs
monitor status and logs through the runs endpoints
That gives you a practical, production-ready way to use Script Engine for automation, integration, and background processing.
