Pipeline Grammar
Basic Concepts
- Trigger Branch: Corresponds to a branch of the code repository; specifies which branch the pipeline builds on.
- Trigger Event: Specifies which event triggers the build; one event can trigger multiple pipelines.
- Pipeline: Represents a pipeline, containing one or more `Stage`s, each executed sequentially.
- Stage: Represents a build stage, which can consist of one or more `Job`s.
- Job: The most basic task execution unit.
```yaml
main: # Trigger branch
  push: # Trigger event, corresponding to a build. Can contain multiple pipelines, as an array or an object.
    - name: pipeline-1 # Pipeline structure
      stages:
        - name: stage-1 # Stage structure
          jobs:
            - name: job-1 # Job structure
              script: echo
```

```yaml
main: # Trigger branch
  push: # Trigger event, corresponding to a build, specifying pipelines via an object
    pipeline-key:
      stages:
        - name: stage-1 # Stage structure
          jobs:
            - name: job-1 # Job structure
              script: echo
```

Pipeline
Pipeline represents a pipeline, containing one or more Stages; each Stage is executed sequentially.
A basic Pipeline configuration is as follows:
```yaml
name: Pipeline name
docker:
  image: node
  build: dev/Dockerfile
  volumes:
    - /root/.npm:copy-on-write
git:
  enable: true
  submodules: true
  lfs: true
services:
  - docker
env:
  TEST_KEY: TEST_VALUE
imports:
  - https://cnb.build/<your-repo-slug>/-/blob/main/xxx/envs.yml
  - ./env.txt
label:
  type: MASTER
  class: MAIN
stages:
  - name: stage 1
    script: echo "stage 1"
  - name: stage 2
    script: echo "stage 2"
  - name: stage 3
    script: echo "stage 3"
failStages:
  - name: fail stage 1
    script: echo "fail stage 1"
  - name: fail stage 2
    script: echo "fail stage 2"
endStages:
  - name: end stage 1
    script: echo "end stage 1"
  - name: end stage 2
    script: echo "end stage 2"
ifModify:
  - a.txt
  - "src/**/*"
retry: 3
allowFailure: false
```

name

- type: `String`
Specify the pipeline name, default is pipeline. When there are multiple parallel pipelines, the default pipeline names are pipeline, pipeline-1, pipeline-2, and so on. You can define name to distinguish different pipelines.
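For example, a minimal sketch distinguishing two parallel pipelines by name (the stage scripts are placeholders):

```yaml
main:
  push:
    - name: lint
      stages:
        - name: lint
          script: echo "lint"
    - name: build
      stages:
        - name: build
          script: echo "build"
```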
runner
- type: `Object`

Specify parameters related to the build node.

- `tags`: Optional. Specify which tags the build node should have.
- `cpus`: Optional. Specify the number of CPU cores to use for the build.

tags

- type: `String|Array<String>`
- default: `cnb:arch:default`

Specify which tags the build node should have. See Build Nodes for details.
Example:
```yaml
main:
  push:
    - runner:
        tags: cnb:arch:amd64
      stages:
        - name: uname
          script: uname -a
```

cpus

- type: `Number`

Specify the maximum number of CPU cores to use for the build (memory = CPU cores * 2 GB); CPU and memory cannot exceed the actual size of the runner machine.
If not configured, the maximum available CPU cores are determined by the runner machine configuration.
Example:
```yaml
# cpus = 1, memory = 2G
main:
  push:
    - runner:
        cpus: 1
      stages:
        - name: echo
          script: echo "hello world"
```

docker
- type: `Object`

Specify parameters related to Docker. See Build Environment for details.

- `image`: The environment image for the current `Pipeline`. All tasks under this `Pipeline` are executed in this image environment.
- `build`: Specify a `Dockerfile` to build a temporary image, used as the value for `image`.
- `volumes`: Declare data volumes, for caching scenarios.
image
- type: `Object|String`

Specify the environment image for the current Pipeline. All tasks under this Pipeline will be executed in this image environment.
This property and its sub-properties support referencing environment variables. Refer to Variable Substitution.

- `image.name`: `String`. Image name, e.g., `node:20`.
- `image.dockerUser`: `String`. Docker username for pulling the specified image.
- `image.dockerPassword`: `String`. Docker password for pulling the specified image.
If image is specified as a string, it is equivalent to specifying image.name.
If using the Docker artifact repository of Cloud Native Build and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
Example 1, using a public image:
```yaml
main:
  push:
    - docker:
        # Use the node:20 image from the official Docker repository as the build container
        image: node:20
      stages:
        - name: show version
          script: node -v
```

Example 2, using a private image from CNB Artifact:

```yaml
main:
  push:
    - docker:
        # Use a non-public image as the build container, requiring Docker username and password
        image:
          name: docker.cnb.build/images/pipeline-env:1.0
          # Use environment variables injected by default during CI builds
          dockerUser: $CNB_TOKEN_USER_NAME
          dockerPassword: $CNB_TOKEN
      stages:
        - name: echo
          script: echo "hello world"
```

Example 3, using a private image from the official Docker repository:

```yaml
main:
  push:
    - imports: https://cnb.build/<your-repo-slug>/-/blob/main/xxx/docker.yml
      docker:
        # Use a non-public image as the build container, requiring Docker username and password
        image:
          name: images/pipeline-env:1.0
          # Environment variables imported from docker.yml
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - name: echo
          script: echo "hello world"
```

docker.yml:

```yaml
DOCKER_USER: user
DOCKER_PASSWORD: password
```

build
- type: `Object|String`
Specify a Dockerfile to build a temporary image to be used as the value for image.
This property and its sub-properties support referencing environment variables, see Variable Replacement.
A complete example of declaring the build environment using build can be found at docker-build-with-by.
The following are explanations of each parameter under build:
- `build.dockerfile`: `String`. Path to the `Dockerfile`. This property supports referencing environment variables, see Variable Replacement.
- `build.target`: `String`. Corresponds to the `--target` parameter of `docker build`, allowing you to selectively build a specific stage in the `Dockerfile` rather than the entire `Dockerfile`.
- `build.by`: `Array<String>|String`. Declares the list of files the cached build depends on. Note: apart from the `Dockerfile`, files not listed in `by` are treated as non-existent during the image build. When of type `String`, multiple files can be separated by commas.
- `build.versionBy`: `Array<String>|String`. Used for version control; if the content of the referenced files changes, it is considered a new version. The calculation logic is: `sha1(dockerfile + versionBy + buildArgs)`. When of type `String`, multiple files can be separated by commas.
- `build.buildArgs`: `Object`. Inserts additional build arguments (`--build-arg $key=$value`) during the build; when a value is `null`, only the key is added (`--build-arg $key`).
- `build.ignoreBuildArgsInVersion`: `Boolean`. Whether `buildArgs` is ignored in the version calculation. See `versionBy` for details.
- `build.sync`: `String`. Whether to wait for `docker push` to succeed before continuing. Default is `false`.
If build is specified as a string, it is equivalent to specifying build.dockerfile.
Usage of Dockerfile:
```yaml
main:
  push:
    - docker:
        # If `build` is a string, it is equivalent to specifying `build.dockerfile`
        build: ./image/Dockerfile
      stages:
        - stage1
        - stage2
        - stage3
```

```yaml
main:
  push:
    - docker:
        # Specify `build` as an `Object` for more control over the image build
        build:
          dockerfile: ./image/Dockerfile
          # Build only the builder stage, not the entire Dockerfile
          target: builder
      stages:
        - stage1
        - stage2
        - stage3
```

Usage of Dockerfile versionBy:

Example: Cache pnpm in the environment image to speed up subsequent pnpm i runs.

```dockerfile
FROM node:22
RUN npm config set registry http://mirrors.cloud.tencent.com/npm/ \
    && npm i -g pnpm
WORKDIR /data/cache
COPY package.json package-lock.json ./
RUN pnpm i
```

```yaml
main:
  push:
    # Specify the build environment using a Dockerfile
    - docker:
        build:
          dockerfile: ./Dockerfile
          by:
            - package.json
            - package-lock.json
          versionBy:
            - package-lock.json
      stages:
        - name: cp node_modules
          # Copy node_modules from the container to the pipeline working directory
          script: cp -r /data/cache/node_modules ./
        - name: check node_modules
          script: |
            if [ -d "node_modules" ]; then
              cd node_modules
              ls
            else
              echo "node_modules directory does not exist."
            fi
```

volumes
- type: `Array<String>|String`

Declare data volumes. Multiple volumes can be passed as an array or separated by commas; environment variable references are supported. Supported formats:

- `<group>:<path>:<type>`
- `<path>:<type>`
- `<path>`

Meanings:

- `group`: Optional. Volume group; different groups are isolated from each other.
- `path`: Required. Path to mount the volume. Supports absolute paths (starting with `/`) or relative paths (starting with `./`), relative to the workspace.
- `type`: Optional. Volume type; the default is `copy-on-write`. Supported types:
  - `read-write` or `rw`: Read-write. Concurrent write conflicts must be handled manually. Suitable for serial build scenarios.
  - `read-only` or `ro`: Read-only. Write operations throw exceptions.
  - `copy-on-write` or `cow`: Read-write. Changes (additions, modifications, deletions) are merged after a successful pipeline. Suitable for concurrent build scenarios.
  - `copy-on-write-read-only`: Read-only. Changes (additions, modifications, deletions) are discarded after the pipeline ends.
  - `data`: Creates a temporary data volume, automatically cleaned up after the pipeline ends.
copy-on-write
Used for caching scenarios, supports concurrency.
copy-on-write technology allows the system to share the same data copy until modifications are needed, enabling efficient cache replication. In concurrent environments, this method avoids read-write conflicts because private copies of data are only created when modifications are actually needed. Thus, only write operations cause data replication, while read operations can safely proceed in parallel without worrying about data consistency. This mechanism significantly improves performance, especially in read-heavy caching scenarios.
data
Used for data sharing, sharing specified directories in the container with other containers.
This works by creating a data volume and mounting it into each container. Unlike directly mounting a directory from the build node into the container, if the specified directory already exists in the container, its contents are automatically copied into the data volume instead of being overwritten by the mount.
volumes Examples
Example 1: Mount directories from the build node into the container for local caching
```yaml
main:
  push:
    - docker:
        image: node:20
        # Declare data volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use the cache and update it at the same time
          - /root/.npm
          # Use the main group cache and update it at the same time
          - main:/root/.gradle:copy-on-write
      stages:
        - stage1
        - stage2
        - stage3
  pull_request:
    - docker:
        image: node:20
        # Declare data volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use copy-on-write cache
          - /root/.npm
          - node_modules
          # PRs use the main group cache but do not update it
          - main:/root/.gradle:copy-on-write-read-only
      stages:
        - stage1
        - stage2
        - stage3
```

Example 2: Share files packaged in the container with other containers

```yaml
# .cnb.yml
main:
  push:
    - docker:
        image: go-app-cli # Assume a Go application is at /go-app/cli in the image
        # Declare data volumes
        volumes:
          # This path exists in the go-app-cli image, so when the environment image runs,
          # its contents are copied to a temporary data volume shared with other task containers
          - /go-app
      stages:
        - name: show /go-app-cli in job container
          image: alpine
          script: ls /go-app
```

git
- type: `Object`
Provides Git repository-related configurations.
git.enable
- type: `Boolean`
- default: `true`
Specifies whether to fetch the code.
For branch.delete events, the default is false. For other events, the default is true.
git.submodules
- type: `Object|Boolean`
- default: `{enable: true, remote: false}`
Specifies whether to pull submodules.
When the value is of type Boolean, it is equivalent to setting git.submodules.enable to the value of git.submodules, and git.submodules.remote to the default value false.
git.submodules.enable
- type: `Boolean`
- default: `true`
Specifies whether to pull submodules.
git.submodules.remote
- type: `Boolean`
- default: `false`
Determines whether to add the --remote parameter when executing git submodule update, which ensures the latest code of the submodule is pulled each time.
Basic Usage:
```yaml
main:
  push:
    - git:
        enable: true
        submodules: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        submodules:
          enable: true
          remote: true
      stages:
        - name: echo
          script: echo "hello world"
```

git.lfs

- type: `Object|Boolean`
- default: `true`

Specifies whether to fetch LFS files.
Supports Object format to specify specific parameters. If fields are omitted, the default values are:

```json
{
  "enable": true
}
```

Basic Usage:
```yaml
main:
  push:
    - git:
        enable: true
        lfs: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        lfs:
          enable: true
      stages:
        - name: echo
          script: echo "hello world"
```

git.lfs.enable

- type: `Boolean`
- default: `true`

Specifies whether to fetch LFS files.
services
- type: `Array<String>`

Declares services required during the build, in the format `name:[version]`, where version is optional.
Currently supported services:
- docker
- vscode
service:docker
Enables the dind service.
When operations like docker build or docker login are needed during the build, declare this to automatically inject docker daemon and docker cli into the environment.
Example:
```yaml
main:
  push:
    - services:
        - docker
      docker:
        image: alpine
      stages:
        - name: docker info
          script:
            - docker info
            - docker ps
```

This service automatically logs into the image registry of the CNB Docker Artifact repository (docker.cnb.build). Subsequent tasks can directly docker push to the current repository's Docker Artifact.

Example:

```yaml
main:
  push:
    - services:
        - docker
      stages:
        - name: build and push
          script: |
            # A Dockerfile exists in the root directory
            docker build -t ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
            docker push ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
```

service:vscode
Declared when remote development is needed.
Example:
```yaml
$:
  vscode:
    - services:
        - vscode
        - docker
      docker:
        image: alpine
      stages:
        - name: uname
          script: uname -a
```

env
- type: `Object`

Specifies environment variables: defines a set of variables available during task execution. Effective for all non-plugin tasks in the current Pipeline.
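A minimal sketch of a Pipeline-level env consumed by a script task:

```yaml
main:
  push:
    - env:
        TEST_KEY: TEST_VALUE
      stages:
        - name: echo
          script: echo $TEST_KEY # prints TEST_VALUE
```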
imports
- type: `Array<String>|String`
Specify file paths in a CNB Git repository (either relative paths or HTTP addresses); the files are read as a source of environment variables.
Local relative paths such as ./env.yml will be concatenated into a remote HTTP file address for loading.
Cloud Native Build now supports Keystore, offering enhanced security and file reference auditing.
Typically, a Keystore is used to store account credentials such as those for npm and docker.
Example:
```yaml
# env.yml
DOCKER_USER: "username"
DOCKER_TOKEN: "token"
DOCKER_REGISTRY: "https://xxx/xxx"
```

```yaml
# .cnb.yml
main:
  push:
    - services:
        - docker
      imports:
        - https://cnb.build/<your-repo-slug>/-/blob/main/xxx/env.yml
      stages:
        - name: docker push
          script: |
            docker login -u ${DOCKER_USER} -p "${DOCKER_TOKEN}" ${CNB_DOCKER_REGISTRY}
            docker build -t ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
            docker push ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
```

Note: Not effective for plugin tasks.
Supported file formats:
- `yaml`: Parses files with `.yml` or `.yaml` extensions.
- `json`: Parses files with `.json` extensions.
- `plain`: Each line is in `key=value` format. All other extensions are parsed this way. (Not recommended.)
Priority for duplicate keys:
- When `imports` is an array, duplicate parameters are overwritten by later configurations.
- If a parameter duplicates one in `env`, the `env` parameter overrides the one from the `imports` file.
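For instance, assuming a hypothetical ./env.yml that sets `TOKEN: from-file`, the `env` value wins for the duplicate key:

```yaml
main:
  push:
    - imports:
        - ./env.yml # assume this file sets TOKEN to "from-file"
      env:
        TOKEN: from-env
      stages:
        - name: echo
          script: echo $TOKEN # prints "from-env"
```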
Variable Assignment
Paths in `imports` can reference environment variables. When `imports` is an array, later file paths can reference variables defined in files imported earlier.
```json
// env.json
{
  "FILE": "https://cnb.build/<your-repo-slug>/-/blob/main/xxx/env2.yml"
}
```

```yaml
# env2.yml
TEST_TOKEN: some token
```

```yaml
main:
  push:
    - imports:
        - ./env.json
        - $FILE
      stages:
        - name: echo
          script: echo $TEST_TOKEN
```

Referenced files can declare accessible scopes. See Configuration File Reference Authentication.
Example:
`team_name/project_name/*` matches all repositories under a project:

```yaml
key: value
allow_slugs:
  - team_name/project_name/*
```

Allow references from all repositories:

```yaml
key: value
allow_slugs:
  - "**"
```

Most configuration files are simple single-layer objects, such as:

```json
// env.json
{
  "token": "private token",
  "password": "private password"
}
```

To handle complex configuration files and scenarios, imports supports nested objects. If the object parsed from the imported file contains deep properties (the first layer cannot be an array), it will be flattened into a single-layer object with the following rules:
- Property names are retained, and property values are converted to strings.
- If a property value is an object (including arrays), it is recursively flattened, with property paths joined by `_`.
```json
// env.json
{
  "key1": [
    "value1",
    "value2"
  ],
  "key2": {
    "subkey1": [
      "value3",
      "value4"
    ],
    "subkey2": "value5"
  },
  "key3": [
    "value6",
    {
      "subsubkey1": "value7"
    }
  ],
  "key4": "value8"
}
```

Will be flattened into:

```json
{
  // Original property values converted to strings
  "key1": "value1,value2",
  // If a property value is an object, it is additionally flattened recursively into extra properties
  "key1_0": "value1",
  "key1_1": "value2",
  "key2": "[object Object]",
  "key2_subkey1": "value3,value4",
  "key2_subkey1_0": "value3",
  "key2_subkey1_1": "value4",
  "key2_subkey2": "value5",
  "key3": "value6,[object Object]",
  "key3_0": "value6",
  "key3_1": "[object Object]",
  "key3_1_subsubkey1": "value7",
  "key4": "value8"
}
```

```yaml
main:
  push:
    - imports:
        - ./env.json
      stages:
        - name: echo
          script: echo $key3_1_subsubkey1
```

label
- type: `Object`
Assigns labels to the pipeline. Each label value can be a string or an array of strings. These labels can be used for subsequent pipeline record filtering and other functions.
Here is an example workflow: Merge the main branch to release to the pre-release environment, and tag to release to the production environment.
```yaml
main:
  push:
    - label:
        # Regular pipeline for the master branch
        type:
          - MASTER
          - PREVIEW
      stages:
        - name: install
          script: npm install
        - name: CCK-lint
          script: npm run lint
        - name: BVT-build
          script: npm run build
        - name: UT-test
          script: npm run test
        - name: pre release
          script: ./pre-release.sh
$:
  tag_push:
    - label:
        # Regular pipeline for the product release branch
        type: RELEASE
      stages:
        - name: install
          script: npm install
        - name: build
          script: npm run build
        - name: DELIVERY-release
          script: ./release.sh
```

stages
- type: `Array<Stage|Job>`
Defines a set of stage tasks, each executed sequentially.
failStages
- type: `Array<Stage|Job>`
Defines a set of failure stage tasks. Executed sequentially when the normal flow fails.
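A minimal sketch, where the deliberately failing stage triggers the failure flow:

```yaml
main:
  push:
    - stages:
        - name: build
          script: exit 1 # fails, so failStages run
      failStages:
        - name: on failure
          script: echo "build failed"
```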
endStages
- type: `Array<Stage|Job>`
Defines a set of tasks executed at the end of the pipeline. After the pipeline stages/failStages complete, these tasks are executed sequentially before the pipeline ends.
If the pipeline prepare stage succeeds, endStages will execute regardless of whether stages succeed. The success of endStages does not affect the pipeline status (i.e., endStages can fail while the pipeline status is success).
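A minimal sketch of a cleanup step that runs whether the stages succeed or fail:

```yaml
main:
  push:
    - stages:
        - name: build
          script: echo "build"
      endStages:
        - name: cleanup
          script: echo "runs after stages/failStages complete"
```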
ifNewBranch
- type: `Boolean`
- default: `false`
If true, the Pipeline executes only if the current branch is new (i.e., CNB_IS_NEW_BRANCH is true).
If both `ifNewBranch` and `ifModify` exist, the `Pipeline` executes if either condition is met.
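For example, a sketch that runs an initialization pipeline only on the first push of a branch:

```yaml
main:
  push:
    - ifNewBranch: true
      stages:
        - name: init
          script: echo "first build on this branch"
```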
ifModify
- type: `Array<String>|String`
Specifies that the Pipeline should only be executed when the corresponding files have changed. It is a glob expression string or an array of strings.
Supported Events
- For `push` events on non-new branches, the changed files are calculated by comparing `before` and `after`.
- For `push` events on non-new branches triggered by `cnb:apply` in a pipeline, the changed files are calculated by the same rules.
- For events triggered by a `PR`, the changed files in the `PR` are calculated.
- For events triggered by a `PR` through `cnb:apply`, the changed files in the `PR` are calculated.
Since there can be a large number of file changes, the limit for counting changed files is set to a maximum of 300.
Examples
- Example 1:

This Pipeline executes when the modified file list includes a.js or b.js.

```yaml
ifModify:
  - a.js
  - b.js
```

- Example 2:

This Pipeline executes when the modified file list includes files with the js extension. Here, `**/*.js` matches js files in all subdirectories, and `*.js` matches js files in the root directory.

```yaml
ifModify:
  - "**/*.js"
  - "*.js"
```

- Example 3:

Reverse matching: excludes the legacy directory and all Markdown files; triggers when any other file changes.

```yaml
ifModify:
  - "**"
  - "!(legacy/**)"
  - "!(**/*.md)"
  - "!*.md"
```

- Example 4:

Reverse matching: triggers on changes in the src directory, except the src/legacy directory.

```yaml
ifModify:
  - "src/**"
  - "!(src/legacy/**)"
```
breakIfModify
- type: `Boolean`
- default: `false`
Terminates the build if the source branch is updated before the Job executes.
skipIfModify
- type: `Boolean`
- default: `false`
Skips the current Job if the source branch is updated before execution.
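A sketch of both flags at the pipeline level (the stage scripts are placeholders):

```yaml
main:
  push:
    - name: strict
      breakIfModify: true # terminate the build if the source branch is updated first
      stages:
        - name: deploy
          script: echo "deploy"
    - name: lenient
      skipIfModify: true # skip instead of terminating
      stages:
        - name: deploy
          script: echo "deploy"
```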
retry
- type: `Number`
- default: `0`
Number of retries on failure. 0 means no retries.
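For instance, a sketch that retries a flaky pipeline up to 3 times (the test script path is a placeholder):

```yaml
main:
  push:
    - retry: 3 # re-run up to 3 times on failure
      stages:
        - name: flaky tests
          script: ./run-flaky-tests.sh # placeholder
```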
allowFailure
- type: `Boolean`
- default: `false`
Whether the current pipeline is allowed to fail.
When set to true, the pipeline's failure status will not be reported to CNB.
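A minimal sketch of a pipeline whose failure is tolerated:

```yaml
main:
  push:
    - allowFailure: true # a failure here is not reported to CNB
      stages:
        - name: experimental
          script: exit 1
```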
lock
- type: `Object|Boolean`
Sets a lock for the pipeline. The lock is automatically released after the pipeline completes. Locks cannot be used across repositories.
Behavior: After pipeline A acquires the lock, pipeline B requests the lock. It can either terminate A or wait for A to release the lock before acquiring it and continuing.
key:

- type: `String`

Custom lock name. Default is `branch name-pipeline name`, meaning the lock scope is the current pipeline.

expires:

- type: `Number`
- default: `3600` (one hour)

Lock expiration time, in seconds; after it elapses the lock is automatically released.

timeout:

- type: `Number`
- default: `3600` (one hour)

Timeout duration for waiting for the lock, in seconds.

cancel-in-progress:

- type: `Boolean`
- default: `false`

Whether to terminate pipelines occupying or waiting for the lock, allowing the current pipeline to acquire the lock and execute.

wait:

- type: `Boolean`
- default: `false`

Whether to wait if the lock is occupied (without consuming pipeline resources or time). If `false`, an error is thrown immediately. Cannot be used with `cancel-in-progress`.

cancel-in-wait:

- type: `Boolean`
- default: `false`

Whether to terminate pipelines waiting for the lock, allowing the current pipeline to join the lock queue. Requires the `wait` property.
If lock is true, key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
```yaml
main:
  push:
    - lock: true
      stages:
        - name: stage1
          script: echo "stage1"
```

Example 2: lock as an Object

```yaml
main:
  push:
    - lock:
        key: key
        expires: 600 # 10 minutes
        wait: true
        timeout: 60 # Wait at most 1 minute
      stages:
        - name: stage1
          script: echo "stage1"
```

Example 3: Terminate the currently running pipeline under pull_request

```yaml
main:
  pull_request:
    - lock:
        key: pr
        cancel-in-progress: true
      stages:
        - name: echo hello
          script: echo "stage1"
```

Stage
- type: `Job|Object<name: Job>`
Stage represents a build stage, which can consist of one or more Jobs. See Job Introduction.
Single Job
If a Stage has only one Job, the Stage can be omitted, and the Job can be written directly.
```yaml
stages:
  - name: stage1
    jobs:
      - name: job A
        script: echo hello
```

Can be simplified to:

```yaml
- stages:
    - name: job A
      script: echo hello
```

When a Job is a string, it is treated as a script task, with both name and script set to that string. Further simplified:

```yaml
- stages:
    - echo hello
```

Serial Jobs

When the value is an array (ordered), the Jobs in the group execute sequentially.

```yaml
# Serial
stages:
  - name: install
    jobs:
      - name: job1
        script: echo "job1"
      - name: job2
        script: echo "job2"
```

Parallel Jobs

When the value is an object (unordered), the Jobs in the group execute in parallel.

```yaml
# Parallel
stages:
  - name: install
    jobs:
      job1:
        script: echo "job1"
      job2:
        script: echo "job2"
```

Multiple Jobs can be flexibly organized serially or in parallel. Example of serial followed by parallel:

```yaml
main:
  push:
    - stages:
        - name: serial first
          script: echo "serial"
        - name: parallel
          jobs:
            parallel job 1:
              script: echo "1"
            parallel job 2:
              script: echo "2"
        - name: serial next
          script: echo "serial next"
```

name
- type: `String`
Stage name.
ifNewBranch
- type: `Boolean`
- default: `false`
If true, the Stage executes only if the current branch is new (i.e., CNB_IS_NEW_BRANCH is true).
If any of the `ifNewBranch`, `ifModify`, or `if` conditions are met, the `Stage` will execute.
ifModify
- type: `Array<String>|String`
Specifies that the Stage executes only if the specified files are modified. A glob matching expression string or string array.
if
- type: `Array<String>|String`

One or more shell scripts; the exit code determines whether the Stage executes. If the exit code is 0, the Stage executes.
Example 1: Check the value of a variable
```yaml
main:
  push:
    - env:
        IS_NEW: true
      stages:
        - name: is new
          if: |
            [ "$IS_NEW" = "true" ]
          script: echo is new
        - name: is not new
          if: |
            [ "$IS_NEW" != "true" ]
          script: echo not new
```

Example 2: Check the output of a task

```yaml
main:
  push:
    - stages:
        - name: make info
          script: echo 'haha'
          exports:
            info: RESULT
        - name: run if RESULT is haha
          if: |
            [ "$RESULT" = "haha" ]
          script: echo $RESULT
```

env
- type: `Object`
Same as Pipeline env, but only effective for the current Stage.
Stage env has higher priority than Pipeline env.
imports
- type: `Array<String>|String`
Same as Pipeline imports, but only effective for the current Stage.
retry
- type: `Number`
- default: `0`
Number of retries on failure. 0 means no retries.
lock
- type: `Boolean|Object`
Sets a lock for the Stage. The lock is automatically released after the Stage completes. Locks cannot be used across repositories.
Behavior: After task A acquires the lock, task B requests the lock and must wait for the lock to be released before acquiring it and continuing.
lock.key

- type: `String`

Custom lock name. Default is `branch name-pipeline name-stage index`.

lock.expires

- type: `Number`
- default: `3600` (one hour)

Lock expiration time, in seconds; after it elapses the lock is automatically released.

lock.wait

- type: `Boolean`
- default: `false`

Whether to wait if the lock is occupied.

lock.timeout

- type: `Number`
- default: `3600` (one hour)

Timeout duration for waiting for the lock, in seconds.
If lock is true, key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
```yaml
main:
  push:
    - stages:
        - name: stage1
          lock: true
          jobs:
            - name: job1
              script: echo "job1"
```

Example 2: lock as an Object

```yaml
main:
  push:
    - stages:
        - name: stage1
          lock:
            key: key
            expires: 600 # 10 minutes
            wait: true
            timeout: 60 # Wait at most 1 minute
          jobs:
            - name: job1
              script: echo "job1"
```

image
- type: `Object|String`

Specifies the environment image for the current Stage. All tasks in this Stage will default to executing in this image environment.
This property and its sub-properties support referencing environment variables. Refer to Variable Substitution.

- `image.name`: `String`. Image name, e.g., `node:20`.
- `image.dockerUser`: `String`. Docker username for pulling the specified image.
- `image.dockerPassword`: `String`. Docker password for pulling the specified image.
If image is a string, it is equivalent to specifying image.name.
If using the Docker artifact repository of Cloud Native Build and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
jobs
- type: `Array<Job>|Object<name, Job>`

Defines a group of tasks, executed sequentially or in parallel.

- If the value is an array (ordered), the `Jobs` execute sequentially.
- If the value is an object (unordered), the `Jobs` execute in parallel.
Job
Job is the most basic task execution unit, divided into three categories:
Built-in Tasks
- `type`: `String`. Specifies the built-in task to execute.
- `options`: `Object`. Specifies parameters for the built-in task.
- `optionsFrom`: `Array<String>|String`. Specifies local or Git repository file paths to load as built-in task parameters. Similar to `imports`, if `optionsFrom` is an array, duplicate parameters are overwritten by later configurations.
options fields have higher priority than optionsFrom.
Reference file permission control: Configuration File Reference Authentication.
Example:
```yaml
name: install
type: INTERNAL_JOB_NAME
optionsFrom: ./options.json
options:
  key1: value1
  key2: value2
```

```json
// ./options.json
{
  "key1": "value1",
  "key2": "value2"
}
```

Script Tasks
```yaml
- name: install
  script: npm install
```

script:

- type: `Array<String>|String`

Specifies the shell script to execute. Arrays are joined with `&&` by default. If the script should run in its own environment rather than the pipeline's environment, specify the runtime environment via the `image` property.

image:

- type: `String`

Specifies the runtime environment.
Example:
```yaml
- name: install
  image: node:20
  script: npm install
```

Script tasks can be simplified to a string, where script is the string and name is its first line:

```yaml
- echo hello
```

Equivalent to:

```yaml
- name: echo hello
  script: echo hello
```

Plugin Tasks
Plugins are Docker images, also called image tasks.
Unlike the above two types, plugin tasks offer more flexible execution environments. They are easier to share within teams, companies, or even across CI systems.
Plugin tasks pass environment variables to ENTRYPOINT to hide internal implementations.
Note: Custom environment variables set via imports, env, etc., are not passed to plugins but can be used in settings or args for variable substitution. CNB system environment variables are still passed to plugins.
- `name`: `String`. Specifies the `Job` name.
- `image`: `String`. The full path of the image.
- `settings`: `Object`. Specifies plugin task parameters. Follow the documentation provided by the image. Environment variables can be referenced via `$VAR` or `${VAR}`.
- `settingsFrom`: `Array<String>|String`. Specifies local or Git repository file paths to load as plugin task parameters.

Priority:

- Duplicate parameters are overwritten by later configurations.
- `settings` fields have higher priority than `settingsFrom`.
Reference file permission control: Configuration File Reference Authentication.
Example:
Restricting both images and slugs:

```yaml
allow_slugs:
  - a/b
allow_images:
  - a/b
```

Restricting only images, not slugs:

```yaml
allow_images:
  - a/b
```

settingsFrom can also be declared in a Dockerfile:

```dockerfile
FROM node:20
LABEL cnb.cool/settings-from="https://cnb.build/<your-repo-slug>/-/blob/main/xxx/settings.json"
```

Examples
with imports:

```yaml
- name: npm publish
  image: plugins/npm
  imports: https://cnb.build/<your-repo-slug>/-/blob/main/xxx/npm.json
  settings:
    username: $NPM_USER
    password: $NPM_PASS
    email: $NPM_EMAIL
    registry: https://mirrors.xxx.com/npm/
    folder: ./
```

```json
{
  "username": "xxx",
  "password": "xxx",
  "email": "xxx@emai.com",
  "allow_slugs": ["cnb/**/**"],
  "allow_images": ["plugins/npm"]
}
```

with settingsFrom:

```yaml
- name: npm publish
  image: plugins/npm
  settingsFrom: https://cnb.build/<your-repo-slug>/-/blob/main/xxx/npm-settings.json
  settings:
    # username: $NPM_USER
    # password: $NPM_PASS
    # email: $NPM_EMAIL
    registry: https://mirrors.xxx.com/npm/
    folder: ./
```

```json
{
  "username": "xxx",
  "password": "xxx",
  "email": "xxx@emai.com",
  "allow_slugs": ["cnb/cnb"],
  "allow_images": ["plugins/npm"]
}
```

name
- type: `String`
Specifies the Job name.
ifModify
- type: `Array<String>|String`
Same as Stage ifModify. Only effective for the current Job.
ifNewBranch
- type: `Boolean`
- default: `false`
Same as Stage ifNewBranch. Only effective for the current Job.
if
- type: `Array<String>|String`
Same as Stage if. Only effective for the current Job.
breakIfModify
- type: `Boolean`
- default: `false`
Same as Pipeline breakIfModify. Only effective for the current Job.
skipIfModify
- type: `Boolean`
- default: `false`
Skips the current Job if the source branch is updated before execution.
env
- type: `Object`
Same as Stage env, but only effective for the current Job.
Job env has higher priority than Pipeline env and Stage env.
imports
- type: `Array<String>|String`
Same as Stage imports, but only effective for the current Job.
exports
- type: `Object`
After Job execution, a result object is generated. exports can export properties from result to environment variables, with a lifecycle of the current Pipeline.
See Environment Variables for details.
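A minimal sketch, following the `make info` example earlier in this section and assuming the result's `info` field holds the script output:

```yaml
main:
  push:
    - stages:
        - name: make version
          script: echo "v1.2.3"
          exports:
            info: VERSION # export the result's `info` field as $VERSION
        - name: use version
          script: echo $VERSION # available for the rest of the Pipeline
```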
timeout
- type: `Number|String`
Sets a timeout for a single task. Default is 1 hour, maximum is 12 hours.
Effective for script-job and image-job.
Also supports the following units:

- `ms`: Milliseconds (default)
- `s`: Seconds
- `m`: Minutes
- `h`: Hours

```yaml
name: timeout job
script: sleep 1d
timeout: 100s # Task times out and exits after 100 seconds
```

See Timeout Strategy for details.
allowFailure
- type: `Boolean|String`
- default: `false`
If true, failure of this step does not affect subsequent execution or the final result.
If String, environment variables can be read.
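For example, a sketch that reads the flag from an environment variable:

```yaml
main:
  push:
    - env:
        CAN_FAIL: "true"
      stages:
        - name: optional check
          allowFailure: $CAN_FAIL # resolved from the environment
          script: exit 1
        - name: next
          script: echo "still runs"
```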
lock
- type: `Object|Boolean`
Sets a lock for the Job. The lock is automatically released after the Job completes. Locks cannot be used across repositories.
Behavior: After task A acquires the lock, task B requests the lock and must wait for the lock to be released before acquiring it and continuing.
lock.key

- type: `String`

Custom lock name. Default is `branch name-pipeline name-stage index-job name`.

lock.expires

- type: `Number`
- default: `3600` (one hour)

Lock expiration time, in seconds; after it elapses the lock is automatically released.

lock.wait

- type: `Boolean`
- default: `false`

Whether to wait if the lock is occupied.

lock.timeout

- type: `Number`
- default: `3600` (one hour)

Timeout duration for waiting for the lock, in seconds.
If lock is true, key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
```yaml
name: Lock
lock: true
script: echo 'job lock'
```

Example 2: lock as an Object

```yaml
name: Lock
lock:
  key: key
  expires: 10
  wait: true
script: echo 'job lock'
```

retry
- type: `Number`
- default: `0`
Number of retries on failure. 0 means no retries.
type
- type: `String`
Specifies the built-in task to execute.
options
- type: `Object`
Specifies parameters for the built-in task.
optionsFrom
- type: `Array<String>|String`
Specifies local or Git repository file paths to load as built-in task parameters. Similar to imports, if optionsFrom is an array, duplicate parameters are overwritten by later configurations.
script
- type: `Array<String>|String`
Specifies the script to execute. Arrays are joined with &&. The script's exit code determines the Job's exit code.
Note: The default shell interpreter for the pipeline's base image is sh. Different images may use different interpreters.
commands
- type: `Array<String>|String`
Same as script, but with higher priority. Mainly for compatibility with Drone CI syntax.
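For instance, a Drone-style job using commands in place of script (the npm commands are placeholders):

```yaml
- name: test
  commands: # alias for `script`, kept for Drone CI compatibility
    - npm ci
    - npm test
```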
image
- type: `Object|String`
Specifies the image to use as the current Job's execution environment, for docker image as env or docker image as plugins.
This property and its sub-properties support referencing environment variables. Refer to Variable Substitution.
- `image.name`: `String`. Image name, e.g., `node:20`.
- `image.dockerUser`: `String`. Docker username for pulling the specified image.
- `image.dockerPassword`: `String`. Docker password for pulling the specified image.
If image is a string, it is equivalent to specifying image.name.
If using the Docker artifact repository of Cloud Native Build and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
settings
- type: `Object`
Specifies parameters required for the plugin task. See Plugin Tasks for details.
settingsFrom
- type: `Array<String>|String`
Specifies local or Git repository file paths to load as plugin task parameters. Similar to imports, if settingsFrom is an array, duplicate parameters are overwritten by later configurations.
See Plugin Tasks for details.
args
- type: `Array<String>`
Specifies arguments passed to the image during execution, appended to ENTRYPOINT. Only supports arrays.
```yaml
- name: npm publish
  image: plugins/npm
  args:
    - ls
```

Will execute:

```sh
docker run plugins/npm ls
```

Task Exit Codes
- `0`: Task succeeds; execution continues.
- `78`: Task succeeds but interrupts the current `Pipeline`. Can be used in custom scripts (`exit 78`) to interrupt the pipeline; see the sketch below.
- Any other number: Task fails and interrupts the current `Pipeline`.
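A sketch of the `78` convention, assuming a hypothetical skip.flag marker file:

```yaml
main:
  push:
    - stages:
        - name: gate
          script: |
            if [ -f skip.flag ]; then
              # succeed but stop the rest of the pipeline
              exit 78
            fi
        - name: build
          script: echo "runs only when skip.flag is absent"
```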
