This section covers the core building blocks of an App Builder application - commands, flags, and arguments.
Command Components
When an application runs, App Builder executes a command of a particular type somewhere in the hierarchy of the CLI tool's sub commands.
Consider an app called demo that has commands demo say and demo think - the say and think parts are commands. In this example these are commands of type exec - they run a shell command.
Given a command demo deploy status and demo deploy upgrade, the deploy command would not perform any action. It exists mainly to anchor sub commands and show help information. Here the deploy command would be of type parent.
Nested commands should be structured as root -> parent -> parent -> exec and never root -> parent -> exec -> exec. When deviating from this pattern, the first exec should be a read-only action like showing some status. Users should feel safe to execute parents without unintended side effects.
Flags and Arguments
Commands often need parameters. For example, a software upgrade command might look like demo upgrade 1.2.3. Here the 1.2.3 is an argument. Commands can have a number of arguments, and they can be set to be required or optional. When multiple arguments exist, an optional one cannot appear before a required one.
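As a sketch of the ordering rule, the required argument must come before the optional one (the command and argument names here are illustrative):

```yaml
arguments:
  - name: version
    description: The version to upgrade to
    required: true
  - name: channel
    description: An optional release channel
```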
Flags are generally kept for optional items like demo upgrade 1.2.3 --channel=nightly, where --channel is a flag. At present only flags with string values are supported. Future versions intend to support enums of valid values and boolean flags.
Subsections of Reference
Common Settings
Application definitions share a set of common settings across all command types. This section covers the standard properties, arguments, flags, validations, and other shared configuration options.
Command Types
The core command types are parent, exec, form, scaffold and ccm_manifest. Additional types can be registered through the plugin system.
Most commands share a generic set of options and then add one or more type-specific options to specialise them.
Common properties reference
Most commands include a standard set of fields - those that do not, or that have special restrictions, mention this in their documentation.
The following example produces this command:
```
usage: demo say [<flags>] <message>

Says something using the cowsay command

The command called defaults to cowsay but can be
configured using the Cowsay configuration item

Flags:
      --help          Show context-sensitive help (also try --help-long and --help-man).
      --cowfile=FILE  Use a specific cow file

Args:
  <message>  The message to display
```
The definition consists of a commands member that has these properties:
```yaml
name: example
description: Example application
version: 1.0.0
author: Operations team <ops@example.net>
help_template: default # optional
commands:
  - # The name in the command: 'example say ....' (required)
    name: say
    # Help shown in output of 'example help say' or 'example say --help' (required)
    description: |
      Says something using the cowsay command

      The command called defaults to cowsay but can be
      configured using the Cowsay configuration item
    # Selects the kind of command, see below (required)
    type: exec # or any other known type
    # Optionally allows running 'example say hello' or 'example s hello' (optional)
    aliases:
      - s
    # Arguments to accept (optional)
    arguments:
      - name: message
        description: The message to display
        required: true
    # Flags to accept (optional)
    flags:
      - name: cowfile
        description: Use a specific cow file
        placeholder: FILE
    # Sub commands to create below this one (optional, but see specific references)
    commands: []
```
The initial options define the application, followed by commands. All the top-level settings are required except help_template, whose value may be one of compact, long, short or default; when not set it defaults to default. Each help format presents information differently (requires version 0.0.9).
A banner can be emitted before invoking the commands in an exec, providing a warning or extra information to users before running a command. For example, a banner may warn that a config override is in use:
```yaml
- name: say
  description: Say something using the configured command
  type: exec
  command: |
    {{ default .Config.Cowsay "cowsay" }} {{ .Arguments.message | escape }}
  banner: |
    {{- if (default .Config.Cowsay "") -}}
    >>
    >> Using the {{ .Config.Cowsay }} command
    >>
    {{- end -}}
  arguments:
    - name: message
      description: The message to send to the terminal
      required: true
```
Cheat Sheet style help is supported, see the dedicated guide about that.
Arguments
An argument is a positional input to a command. In example say hello, where the command is say, hello is the first argument.
Arguments can have many options, the table below details them and the version that added them.
| Option | Description | Required | Version |
|--------|-------------|----------|---------|
| name | A unique name for each argument | yes | |
| description | A description for this argument, typically 1 line | yes | |
| required | Indicates that a value for this argument must be set, which includes being set from default | | |
| enum | An array of valid values, if set the argument must be one of these values | | 0.0.4 |
| default | Sets a default value when not passed, will satisfy enums and required. For bools must be true or false | | |
Flags
A flag is an option passed to the application using something like --flag, typically used for optional inputs. Flags can have many options, the table below details them and the version that added them.
| Option | Description | Required | Version |
|--------|-------------|----------|---------|
| name | A unique name for each flag | yes | |
| description | A description for this flag, typically 1 line | yes | |
| required | Indicates that a value for this flag must be set, which includes being set from default | | |
| placeholder | Will show this text in the help output like --cowfile=FILE | | |
| enum | An array of valid values, if set the flag must be one of these values | | 0.0.4 |
| default | Sets a default value when not passed, will satisfy enums and required. For bools must be true or false | | 0.0.4 |
| bool | Indicates that the flag is a boolean (see below) | | 0.1.1 |
| env | Will load the value from an environment variable if set; a specifically passed flag wins, then the env, then the default | | 0.1.2 |
| short | A single character that can be used instead of the name to access this flag, ie. --cowfile might also be -F | | |
```yaml
- name: delete
  description: Delete the data
  type: exec
  command: |
    {{if .Flags.force}}
    rm -rfv /nonexisting
    {{else}}
    echo "Please pass --force to delete the data"
    {{end}}
  flags:
    - name: force
      description: Required to pass when removing data
      bool: true
```
The --force flag is used to influence the command. Booleans with their default set to true or "true" will add a --no-flag-name option to negate it. Booleans without a true default do not get a negation flag.
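To illustrate the negation behaviour, here is a sketch of a boolean flag with a true default (the flag name color is hypothetical):

```yaml
flags:
  - name: color
    description: Enable colored output
    bool: true
    default: true
```

Because the default is true, a --no-color flag is generated to negate it; a boolean flag without a true default gets no negation flag.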
Argument and Flag Validations
Input provided to commands may need validation. For example, when passing commands
to shell scripts, care must be taken to avoid Shell Injection.
Custom validators on Arguments and Flags are supported using the Expr Language.
Version Hint
This is available since version 0.8.0.
Based on the Getting Started example that calls cowsay we might wish to limit the length of the message to what
would work well with cowsay and also ensure there is no shell escaping happening.
```yaml
arguments:
  - name: message
    description: The message to display
    required: true
    validate: len(value) < 20 && is_shellsafe(value)
```
The standard expr language grammar is supported - it has a large number of functions that can assist
validation needs. A few extra functions are added that make sense for operations teams.
In each case accessing value would be the value passed from the user.
| Function | Description |
|----------|-------------|
| isIP(value) | Checks if value is an IPv4 or IPv6 address |
| isIPv4(value) | Checks if value is an IPv4 address |
| isIPv6(value) | Checks if value is an IPv6 address |
| isInt(value) | Checks if value is an Integer |
| isFloat(value) | Checks if value is a Float |
| isShellSafe(value) | Checks if value is attempting to do shell escape attacks |
Confirmations
Commands can prompt for confirmation before performing an action:
Before running the command the user will be prompted to confirm the action. Since version 0.2.0 an option is
added to the CLI allowing the prompt to be skipped using --no-prompt.
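A minimal sketch using the confirm_prompt property (the prompt wording and command are illustrative):

```yaml
- name: delete
  description: Deletes all the data
  type: exec
  command: rm -rf /nonexisting
  confirm_prompt: Really delete all the data
```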
Including other definitions
Since version 0.10.0 an entire definition can be included from another file or just the commands in a parent.
```yaml
name: include
description: An include based app
version: 0.2.2
author: another@example.net
include_file: sample-app.yaml
```
This includes the entire application from another file but overrides the name, description, version and author.
A specific parent can load all its commands from a file:
```yaml
- name: include
  type: parent
  include_file: go.yaml
```
In this case the go.yaml would be the full parent definition.
Parent Command
A parent is a placeholder. In a command like example deploy status and example deploy upgrade, the deploy is a parent. It exists to group related commands and takes no action on its own.
It requires the name, description, type and commands and the optional aliases and include_file.
It does not accept flags, arguments, confirm_prompt or banner.
```yaml
name: deploy
description: Manage deployment of the system
type: parent
# Commands are required for the parent type and should have more than 1
commands: []
```
Including commands from a file
The include_file option allows loading the parent command definition from an external YAML file. The name set in the parent definition is preserved while other settings are loaded from the file.
```yaml
name: deploy
description: Manage deployment of the system
type: parent
include_file: deploy_commands.yaml
```
Exec Command
Use the exec command to execute commands found in your shell and, optionally, format their output through data transformations.
An exec runs a command, it is identical to the generic example shown earlier and accepts flags, arguments and sub commands. It adds command, script, shell, environment (since 0.0.3), transform (since 0.0.5), dir (since 0.9.0), backoff and no_helper items.
Below is an example that runs cowsay, integrated with configuration:
```yaml
name: say
description: Says something using the cowsay command
type: exec
dir: /tmp
environment:
  - "MESSAGE={{ .Arguments.message }}"
command: |
  {{ default .Config.Cowsay "cowsay" }} "{{ .Arguments.message | escape }}"
arguments:
  - name: message
    description: The message to display
    required: true
```
The command is how the shell command is specified, demonstrating templating. This reads the .Config hash for a value Cowsay; if it does not exist it defaults to "cowsay". The .Arguments hash provides access to the value supplied by the user, escaped for shell safety.
The example also shows how to set environment variables using environment, which are also templated.
Since version 0.9.0 setting dir will execute the command in that directory. This setting supports templating and sets extra variables UserWorkingDir for the directory the user is in before running the command, AppDir and TaskDir indicating the directory the definition is in.
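A short sketch combining dir with the UserWorkingDir variable (the make command is illustrative):

```yaml
- name: build
  description: Runs make in the directory the user invoked the app from
  type: exec
  dir: "{{ UserWorkingDir }}"
  command: make
```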
Setting environment variable BUILDER_DRY_RUN to any value will enable debug logging, log the command and terminate without calling your command.
Shell scripts
A shell script can be added directly to the app definition. Setting shell specifies the command used to run the script; if not set, $SHELL, /bin/bash, or /bin/sh is used, whichever is found first.
```yaml
name: script
description: A shell script
type: exec
shell: /bin/zsh
script: |
  for i in {1..5}
  do
    echo "hello world"
  done
```
Common helper functions
A basic helper shell script is provided that can be used to echo text to the screen in various ways. To use it,
source the script:
Version Hint
Added in version 0.6.3
```yaml
name: script
description: A shell script
type: exec
shell: /bin/zsh
script: |
  set -e

  . "{{ BashHelperPath }}"

  ab_announce Hello World
```
This will output:
>>> Hello World
It provides a few functions:
ab_say prefix the message using a single prefix >>>
ab_announce prefix the message with >>> with a line of >>> before and after the message
ab_error prefix the message with !!!
ab_panic prefix the message with !!! and exit the script with code 1
The >>> can be configured by setting AB_SAY_PREFIX and the !!! by setting AB_ERROR_PREFIX after sourcing the helper.
The output can have time stamps added to the lines by setting AB_HELPER_TIME_STAMP shell variable to T for time and D for time and date prefixes.
If you do not need the helper script you can disable it by setting no_helper to true, this prevents writing the temporary helper file to disk.
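A sketch combining the customisation variables described above (the prefix values and announcement are illustrative):

```yaml
name: deploy
description: Announces with a custom prefix and time stamps
type: exec
shell: /bin/bash
script: |
  . "{{ BashHelperPath }}"

  # customise after sourcing the helper
  AB_SAY_PREFIX="-->"
  AB_HELPER_TIME_STAMP=T

  ab_say starting the deploy
```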
Retrying failed executions
Failing executions can be retried based on a backoff policy; here we configure a maximum of 10 attempts with varying sleep times that include randomized jitter.
Scripts can detect if they are running in a retry by inspecting the BUILDER_TRY environment variable.
```yaml
name: retry
description: A shell script execution with backoff retries
type: exec
command: ./script.sh
backoff:
  # Maximum amount of retries, required
  max_attempts: 10
  # Maximum sleep time + jitter, optional
  max_sleep: 20s
  # Minimum sleep time + jitter, optional
  min_sleep: 1s
  # Number of steps in the backoff policy, once the max is reached
  # further retries will jitter around max_sleep, optional, minimum 2
  steps: 5
```
Only the max_attempts setting is required, min_sleep defaults to 500ms and max_sleep defaults to 20s with steps
defaulting to max_attempts.
Form Command
The form command creates guided wizard style question-and-answer sessions that construct complex data from user input.
The general use case is to guide users through creating complex configuration files. The gathered data is output as JSON and can be sent to transforms for scaffolding or templating into a final form.
The form command supports data transformations, flags, arguments and sub commands.
Version Hint
This was added in version 0.9.0
Collecting data
A basic example that collects a network address and user accounts:
```yaml
name: configuration
description: Generate a configuration file
type: form
properties:
  - name: listen
    description: The network address to listen on
    required: true
    default: 127.0.0.1:-1
    help: Examples include localhost:4222, 192.168.1.1:4222 or 127.0.0.1:4222

  - name: accounts
    description: Local accounts
    help: Sets up a local account for user access.
    type: object
    empty: absent
    properties:
      - name: users
        description: Users to add to the account
        required: true
        type: array
        properties:
          - name: user
            description: The username to connect as
            required: true

          - name: password
            description: The password to connect with
            type: password
            required: true
```
When run this looks a bit like this, with no transform the final data is just dumped to STDOUT:
```
$ abt form
Demonstrates use of the form based data generator
? Press enter to start

The network address and port to listen on
? listen 127.0.0.1:-1

Multiple accounts
? Add accounts entry Yes
? Unique name for this entry USERS

The username to connect as
? user user1

The password to connect with
? password ******
? Add additional 'users' entry No
? Add accounts entry No
{
  "USERS": {
    "users": [
      {
        "password": "secret",
        "user": "user1"
      }
    ]
  },
  "listen": "127.0.0.1:-1"
}
```
Properties reference
The form command is a generic command with the addition of an array of properties making up the questions and an optional transform for processing the collected data:
| Property | Description |
|----------|-------------|
| name | Unique name for each property, in objects this would be the name of the key in the object |
| description | Information shown to the user before asking the questions |
| help | Help shown when the user enters ? in the prompt |
| empty | What data to create when no values are given, one of array, object, absent |
| type | The type of data to gather, one of string, integer, float, bool, password, object or array. Objects and Arrays will nest |
| conditional | An expr expression that looks back at the already-entered data and can be used to skip certain questions |
| validation | A validation expression that will validate user input and ask the user to enter the value again on failure |
| required | A value that is required cannot be skipped |
| default | Default value to set |
| enum | Will only allow one of these values to be set, presented as a select list |
| properties | Nested questions to ask, array of properties as described in this table |
Validations
Validation uses the validators described in Argument and Flag Validations with value being the data just-entered by the user.
Conditional questions
Conditional queries are handled using expr, the expression has access to the collected data so far via Input (or input), as well as Arguments, Flags and Config from the CLI context.
The example below looks back at the accounts entry and will only ask this thing when the user opted to add accounts:
```yaml
- name: thing
  description: Adds a thing if accounts are set
  empty: absent
  conditional: Input.accounts != nil
```
Transforming output
The form output is JSON and can be processed through transforms. This combines well with the scaffold transform to generate files from the collected data:
```yaml
name: configuration
description: Generate configuration from user input
type: form
properties:
  - name: listen
    description: The network address to listen on
    required: true
    default: 127.0.0.1:4222
transform:
  scaffold:
    target: /etc/myapp
    source_directory: /usr/local/templates/config
```
A full example can be seen in the example directory of the project.
Scaffold Command
Use the scaffold command to create directories of files based on templates. The scaffold command supports flags, arguments and sub commands.
One of source or source_directory is required to provide the templates, along with a target directory.
The Sprig functions library is available to use in templates.
Version Hint
This was added in version 0.7.0
Scaffolding files
The following is the most basic example:
```yaml
name: scaffold
description: Demonstrate scaffold features by creating some go files
type: scaffold
arguments:
  - name: target
    description: The target to create the files in
    required: true
target: "{{ .Arguments.target }}"
source:
  "main.go": |
    // Copyright {{ .Arguments.author }} {{ now | date "2006" }}

    package main

    import "{{ .Arguments.package }}/cmd"

    func main() {cmd.Run()}
```
This generates a file main.go in the directory set using the target argument. The target directory must not exist.
Complex trees can be created like this:
```yaml
source:
  "cmd":
    "cmd.go": |
      // content not shown
  "main.go": |
    // content not shown
```
Here we will have a directory cmd with cmd/cmd.go inside along with top level main.go.
Storing files externally
In the example above the template is embedded in the YAML file. It’s functional but does not scale well.
A directory full of template files that mirror the target directory layout can be used instead:
```yaml
name: scaffold
description: Demonstrate scaffold features by creating some go files
type: scaffold
arguments:
  - name: target
    description: The target to create the files in
    required: true
flags:
  - name: template
    description: The template to use
    default: golang
target: "{{ .Arguments.target }}"
source_directory: /usr/local/templates/{{ .Flags.template }}
```
Now /usr/local/templates/golang will be used by default, or whatever is passed in --template instead of golang otherwise.
Post processing files
The first example showed a poorly formatted go file; the result will be equally badly formatted.
The following demonstrates how to post process the files using gofmt:
```yaml
name: scaffold
description: Demonstrate scaffold features by creating some go files
type: scaffold
arguments:
  - name: target
    description: The target to create the files in
    required: true
target: "{{ .Arguments.target }}"
source_directory: /usr/local/templates/default
post:
  - "*.go": "gofmt -w"
  - "*.go": "goimports -w '{}'"
```
The new post structure defines a list of processors based on a file pattern match done using filepath.Match.
As shown the same pattern can be matched multiple times to run multiple commands on the file.
If the string {} is in the command it will be replaced with the full path to the file, otherwise the path is appended as the last argument. When using this format it's suggested you use quotes as in the example.
Conditional rendering
By default all files are rendered even when the result is empty; by setting skip_empty: true any file that renders to empty content will be skipped.
```yaml
name: scaffold
description: Demonstrate scaffold features by creating some go files
type: scaffold
arguments:
  - name: target
    description: The target to create the files in
    required: true
flags:
  - name: gitignore
    description: Create a .gitignore file
    bool: true
    default: true
target: "{{ .Arguments.target }}"
source_directory: /usr/local/templates/default
skip_empty: true
```
We can now create a template for the .gitignore file like this:
```
{{ if .Flags.gitignore }}
# content here
{{ end }}
```
When the flag is false this results in a file that is empty - or rather just white space in this case - so with skip_empty set it will be ignored and not written to disk.
Rendering partials
Partials can be reused: any files in the _partials directory are skipped during normal processing, and you can reference them from other files.
Given a file _partials/go_copyright in the source templates holding the following:
```
// Copyright {{ .Arguments.author }} {{ now | date "2006" }}
```
The content of the Copyright strings can be reused and updated in one place later.
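Consuming the partial from another template might look roughly like this; the render helper name is an assumption, check the project documentation for the exact invocation:

```
{{ render "_partials/go_copyright" . }}

package main
```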
Rendering files from templates
It is often the case that new files not in the actual template source are needed. For example, a form might ask how many of a certain thing are required, and then that many files must be created. In this case a partial can be used to produce the file and is invoked once per file.
Version Hint
This was added in version 0.7.4
To use this you can store a template in the _partials directory and then render files like this:
This will render and, using the write helper, save cluster-{1,2,3,...}.conf for however many clusters were specified in Flags. The files will be post processed as normal and written relative to the target directory.
The .Flags value is saved in $flags because within the range the . will not point to the top anymore, so this ensures
the passed in flags remain accessible in the _partials/cluster.conf template.
If you place this loop in a file that is only there to generate these other files then the resulting empty
file can be ignored using skip_empty: true in the scaffold definition.
Custom template delimiter
When generating Go projects you might find you want to place template tags into the final project, for example when generating an ABTaskFile.
Since the final ABTaskFile uses the same template delimiters, this would cause havoc.
You can change the delimiters of the template source to avoid this:
```yaml
name: scaffold
description: Demonstrate scaffold features by creating some go files
type: scaffold
arguments:
  - name: target
    description: The target to create the files in
    required: true
target: "{{ .Arguments.target }}"
source_directory: /usr/local/templates/default
skip_empty: true
left_delimiter: "[["
right_delimiter: "]]"
```
Our earlier .gitignore would now be:
```
[[ if .Flags.gitignore ]]
# content here {{ these will not be changed }}
[[ end ]]
```
Choria Discover Command
The Discover command interacts with the Choria Discovery system used to find fleet nodes based on a vast array of possible queries and data sources.
Since this is built into Choria it will use the Choria Client configuration for the user executing the command
to find the Choria Brokers and more. It supports the usual override methods such as creating a choria.conf file in
the project working directory. No connection properties are required or supported.
This feature is only available when hosting App Builder applications within the Choria Server version 0.26.0 or newer
Overview
This command supports all the standard properties like Arguments, Flags, Banners and more, below is a simple command
that finds apache servers.
```yaml
name: find
description: Finds all machines tagged as Apache Servers
type: discover
std_filters: true
filter:
  classes:
    - roles::apache
```
When run it will show a list of matching nodes, one per line. It also accepts the --json flag to enable returning a
JSON array of matching nodes.
Since the std_filters option is set, the command will also accept additional filters in standard Choria format - flags like -C, -F, discovery mode selectors and more. User supplied options will be merged with those supplied in the YAML file. By default, none of the standard Choria flags are added to the CLI.
All the filter values, even arrays and objects, support templating.
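As a sketch of templated filter values, a fact filter could be driven by a flag (the country flag is hypothetical):

```yaml
name: find
description: Finds machines in a given country
type: discover
flags:
  - name: country
    description: The country to search in
    required: true
filter:
  facts:
    - "country={{ .Flags.country }}"
```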
Filter Reference
The main tunable here is the filter, below a reference of available options. The examples here are brief; the Choria Discovery Documentation provides a thorough understanding.
| Key | Description | Example |
|-----|-------------|---------|
| collective | The collective to target, defaults to the main collective | collective: development |
| facts | List of fact filters as passed to -F | facts: ["country=uk"] |
| agents | List of agent filters as passed to -A | agents: ["puppet"] |
| classes | List of Config Management classes to match as passed to -C | classes: ["apache"] |
| identities | List of node identities to match as passed to -I | identities: ["/^web/"] |
| combined | List of Combined filters as passed to -W | combined: ["/^web/", "location=uk"] |
| compound | A single Compound filter as passed to -S | compound: "with('apache') or with('nginx')" |
| discovery_method | A discovery method to use like inventory as passed to --dm | discovery_method: "flatfile" |
| discovery_options | A set of discovery options, specific to the discovery_method chosen | discovery_options: {"file": "/etc/inventory.yaml"} |
| discovery_timeout | How long discovery can run, in seconds, as passed to --discovery-timeout | discovery_timeout: 2 |
| dynamic_discovery_timeout | Enables windowed dynamic timeout rather than a set discovery timeout | dynamic_discovery_timeout: true |
| nodes_file | Short cut to use flatfile discovery with a specific file, as passed to --nodes | nodes_file: /etc/fleet.txt |
Choria RPC Command
The RPC command interacts with the Choria RPC system used to execute actions on remote nodes.
Since this is built into Choria it uses the Choria Client configuration for the user executing the command
to find the Choria Brokers and more. It supports the usual override methods such as creating a choria.conf file in
your project working directory. No connection properties are required or supported.
Before using this command type, reading about Choria Concepts is recommended.
This feature is only available when hosting App Builder applications within the Choria Server version 0.26.0 or newer
Overview
This command supports all the standard properties like Arguments, Flags, Banners and more, it also incorporates the
discovery features of the Discover Command Type in order to address nodes.
Below is a simple RPC request.
```yaml
name: stop
description: Stops the Service gracefully
type: rpc
request:
  agent: service
  action: stop
  inputs:
    service: httpd
```
This will look and behave exactly like choria req service stop service=httpd.
Adjusting CLI Behavior
A number of settings exist to adjust the behavior or add flags to the CLI at runtime. Generally you can either allow users to supply values such as --json, or force the output to be JSON, but you cannot do both at present:
| Setting | Description |
|---------|-------------|
| std_filters | Enables standard filter flags like -C, -W and more |
| output_format | Forces a specific output format, one of senders, json or table |
| output_format_flags | Enables --senders, --json and --table options, cannot be set with output_format |
| display | Supplies a setting to the typical --display option, one of ok, failed, all or none |
| display_flag | Enables the --display flag on the CLI, cannot be used with display |
| batch_flags | Adds the --batch and --batch-sleep flags |
| batch, batch_sleep | Supplies values for --batch and --batch-sleep, cannot be used with batch_flags |
| no_progress | Disables the progress bar |
| all_nodes_confirm_prompt | A confirmation prompt shown when an empty filter is used |
Request Parameters
Every rpc command needs a request specified that must have at least agent and action set.
Inputs are allowed as a string hash - equivalent to how one would type inputs on the choria req CLI.
It also accepts a filter option that is the same as that in the discover command.
```yaml
name: stop
description: Stops the Service gracefully
type: rpc
request:
  agent: service
  action: stop
  inputs:
    service: httpd
  filter:
    classes:
      - roles::apache
```
Filtering Replies
Results can be filtered using a result filter; this allows you to exclude or include specific replies before rendering the results.
Here’s an example that will find all Choria Servers with a few flags to match versions; it invokes the rpcutil#daemon_stats action and then filters results matching a query. Only the matching node names are shown.
```yaml
name: busy
description: Find Choria Agents matching certain versions
type: rpc
# list only the names
output_format: senders
flags:
  - name: ne
    description: Finds nodes with version not equal to the given
    placeholder: VERSION
    reply_filter: ok() && semver(data("version"), "!= {{ .Flags.ne }}")

  - name: eq
    description: Finds nodes with version equal to the given
    placeholder: VERSION
    reply_filter: ok() && semver(data("version"), "== {{ .Flags.eq }}")
request:
  agent: rpcutil
  action: daemon_stats
```
Transforming Results
Results can be transformed using data transformations; here’s an example that gets the state of a particular autonomous agent:
```yaml
name: state
description: Obtain the state of the service operator
type: rpc
transform:
  query: |
    .replies | .[] | select(.statuscode==0) | .sender + ": " + .data.state
request:
  agent: choria_util
  action: machine_state
  inputs:
    name: nats
```
When run it will just show lines like:
```
n1-lon: RUN
n3-lon: RUN
n2-lon: RUN
```
Choria KV Command
The KV command interacts with the Choria Key-Value Store and supports usual operations such as Get, Put, Delete and more.
Since this is built into Choria it uses the Choria Client configuration for the user executing the command
to find the Choria Brokers and more. It supports the usual override methods such as creating a choria.conf file in
your project working directory. No connection properties are required or supported.
Version Hint
This feature is only available when hosting App Builder applications within the Choria Server version 0.26.0 or newer
Overview
All variations of this command have a number of required properties. Here’s the basic get operation; all these keys are required:
```yaml
name: version
description: Retrieve the `version` key
type: kv
action: get
bucket: DEPLOYMENT
key: version
```
Usual standard properties like flags, arguments, commands and so forth are all supported. The bucket and key properties support templating.
Writing data using put
Data can be written to the bucket; it’s identical to the above example with the addition of the value property, which supports templating.
```yaml
name: version
description: Stores a new version for the deployment
type: kv
action: put
bucket: DEPLOYMENT
key: version
value: '{{- .Arguments.version -}}'
arguments:
  - name: version
    description: The version to store
    required: true
```
Retrieving data and transformations using get
Stored data can be retrieved and rendered to the screen, typically the value is just dumped. Keys and Values however have
additional metadata that can be rendered in JSON format.
```yaml
name: version
description: Retrieve the `version` key
type: kv
action: get
bucket: DEPLOYMENT
key: state
# Triggers rendering the KV entry as JSON that will include metadata about the value.
json: true
```
Further, if it’s known that the entry holds JSON data it can be formatted using data transformations:
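A sketch of the idea, assuming the stored value is a JSON document with a version field:

```yaml
name: version
description: Extract the version field from the `state` key
type: kv
action: get
bucket: DEPLOYMENT
key: state
transform:
  jq:
    query: .version
```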
This example demonstrates accessing the .Config and .Arguments structures and using some functions.
Available Data
| Key | Description |
|-----|-------------|
| .Config | Data stored in the configuration file for this application |
| .Arguments | Data supplied by users using command arguments |
| .Flags | Data supplied by users using command flags |
| .Input | Parsed JSON input from a previous step, available in transform contexts only |
Available Functions
| Function | Description | Example |
|----------|-------------|---------|
| require | Asserts that some data is available, errors with the given message on failure or a default message when empty | {{ require .Config.Password "Password not set in the configuration" }} |
| escape | Escapes a string for use in shell arguments | {{ escape .Arguments.message }} |
| read_file | Reads a file | {{ read_file .Arguments.file }} |
| default | Checks a value, if it's not supplied uses a default | {{ default .Config.Cowsay "cowsay" }} |
| env | Reads an environment variable | {{ env "HOME" }} |
| UserWorkingDir | Returns the directory the user is in when running the command | {{ UserWorkingDir }} |
| AppDir | Returns the directory the application definition is in | {{ AppDir }} |
| TaskDir | Alias for AppDir | {{ TaskDir }} |
In addition to the above, the Sprig functions library is available in most template contexts including commands, scaffolds and transforms.
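As an illustrative sketch (this command is not from the original document), a Sprig function like upper can be combined with the built-in functions above:

```yaml
name: shout
description: Echoes a message in upper case
type: exec
# "upper" comes from the Sprig library; "escape" is an App Builder built-in
command: |
  echo {{ .Arguments.message | upper | escape }}
arguments:
  - name: message
    description: The message to print
    required: true
```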
Transformations
Transformations are like a shell pipe defined in App Builder. A number of transformations are available, and using them is entirely optional - often a shell pipe would be much better.
The reason for adding transformations like jq to App Builder itself is so that they function in places where the third-party dependency is not met. Rather than require everyone to install JQ and handle that dependency, App Builder includes a JQ dialect directly.
A basic example of transformations can be seen here:
```yaml
name: ghd
description: Gets the description of a Github Repo
type: exec
command: |
  curl -s -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/choria-io/appbuilder
transform:
  jq:
    query: .description
```
Here we call out to a REST API that returns a JSON payload using curl, then extract the description field from the result using a JQ transform:
```
$ demo ghd
Tool to create friendly wrapping command lines over operations tools
```
Not every command supports transforms, so the individual command documentation will call it out.
JQ Transform
The jq transform uses a dialect of JQ called GoJQ; most of your JQ knowledge is transferable with only slight changes and additions. This is probably the most used transform, so there is a shortcut to make using it easier.
Since version 0.5.0 an optional yaml_input boolean can be set to true to allow YAML input to be processed using JQ.
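As a sketch of this option (the query is illustrative), a transform reading YAML input might look like:

```yaml
transform:
  jq:
    query: .description
    # requires version 0.5.0 or newer; treats the incoming data as YAML
    yaml_input: true
```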
To JSON Transform
The to_json transform can convert YAML or JSON input into JSON output. By default the output is compact, unindented JSON; prefix and indent strings can be configured.
```yaml
# unindented JSON output
transform:
  to_json: {}
```

```yaml
# Indented JSON output with a custom prefix
transform:
  to_json:
    indent: "  "
    prefix: "  "
```
To YAML Transform
The to_yaml transform can convert JSON format input into YAML format output.
```yaml
transform:
  to_yaml: {}
```
The to_yaml transform has no options.
Bar Graph Transform
This transform takes a JSON document like {"x": 1, "y": 2} as input and renders bars for the values.
Here is an example that draws the sizes of the assets of the latest release:
```yaml
name: bargraph
description: Draws an ASCII bar graph
type: exec
transform:
  pipeline:
    - jq:
        query: |
          .assets|map({(.name): .size})|reduce .[] as $a ({}; . + $a)
    - bar_graph:
        caption: "Release asset sizes"
        bytes: true
script: |
  curl -s https://api.github.com/repos/choria-io/appbuilder/releases/latest
```
This uses a pipeline (see below) to transform a GitHub API request into a hash and then a bar_graph to render it:
Write File Transform
The write_file transform stores the data it receives in a file. Above, the /tmp/name.txt would hold the initial JSON data.
Whether write_file is the only transform or part of a pipeline, the data it receives is simply passed on to the next step. This can be annoying when writing large files, as they will also be dumped to the screen; a message can be emitted instead of the contents:
```yaml
transform:
  write_file:
    file: /tmp/report.txt
    replace: true
    message: Wrote {{.IBytes}} to {{.Target}}
```
In this case the message Wrote 1.8 KiB to /tmp/report.txt would be printed. You can use .Bytes, .IBytes, .Target and .Contents in the message.
| Option | Description |
|--------|-------------|
| file | The file to write; the file name is parsed using Templating |
| message | A message to emit from the transform instead of the contents received by it |
| replace | Set to true to always overwrite the file |
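To illustrate the pass-through behavior described above, here is a hedged sketch (the file name and query are illustrative) that saves the received data while passing it on to a later pipeline step:

```yaml
transform:
  pipeline:
    - write_file:
        # the received data is written here and also handed to the next step
        file: '{{ UserWorkingDir }}/report.json'
        replace: true
        message: Wrote {{ .IBytes }} to {{ .Target }}
    - jq:
        query: .status
```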
Row orientated Reports
These reports allow you to produce text reports from data found in JSON files. They report on array data and produce paginated reports with optional headers and footers.
Here we fetch the latest release information from GitHub and produce a report with a header, footer and body. Since the JSON data from GitHub is an object, we use the assets GJSON query to find the rows of data to report on.
See the goform project for a full reference to
the formatting language.
| Option | Description |
|--------|-------------|
| | How many rows to print per page; pages each have a header and footer |
| initial_query | The initial GJSON query to use to find the row orientated data to report |
| source_file | A file holding the report rather than inline; name, header, body and footer are read from here. The file name is parsed using Templating |
Scaffold
The scaffold transform takes JSON data and can generate multiple files from it.
This is essentially the Scaffold Command in transform form. The Command
documentation provides full details on the underlying feature. This section covers only what makes the transform unique.
Version Hint
This was added in version 0.9.0
| Option | Description |
|--------|-------------|
| target | The directory to write the data into |
| source_directory | The directory where the template files can be found; cannot be used with source |
| source | Map holding file names and content; if a value is another object a directory is created instead |
| post | Post processing directives |
| skip_empty | Skips files that would be empty when written |
| left_delimiter | Custom template delimiter |
| right_delimiter | Custom template delimiter |
These settings all correspond to the same ones in the command so we won’t cover them in full detail here.
The scaffold transform returns the input JSON on its output.
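Using only the options from the table above, a minimal sketch (the target path and file contents are illustrative) might be:

```yaml
transform:
  scaffold:
    # illustrative: the target directory is taken from a command argument
    target: '{{ .Arguments.target }}'
    skip_empty: true
    source:
      README.md: |
        # Project documentation
      docs:
        # a nested object creates a directory holding this file
        index.md: |
          Generated by the scaffold transform
```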
Pipelines
Several example transform pipelines appear above, like this one:
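A hedged reconstruction of such a pipeline (the URL, query and graph options are illustrative, not confirmed from the original example):

```yaml
name: forecast
description: Graphs upcoming temperatures
type: exec
# wttr.in returns a JSON weather forecast
script: |
  curl -s 'wttr.in/?format=j1'
transform:
  pipeline:
    - jq:
        query: '.weather[].hourly[].tempC | tonumber'
    - line_graph:
        caption: Hourly temperature
```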
This runs the output of the curl command (JSON weather forecast data) through a jq transform that produces results like:
```
29
29
29
29
30
30
29
29
```
That data is then fed into a line_graph and rendered; the output from the jq transform is used as input to the line_graph.
Any failure in the pipeline will terminate processing.
CCM Manifest
The ccm_manifest transform executes a Choria Config Manager manifest using the input data as manifest data. Arguments and flags are merged into the manifest data along with any JSON input.
Cheat Sheets
While output from --help can be useful, many people do not read it or understand the particular format and syntax shown. Instead, quick cheat sheet style help can often be more helpful.
The cheat utility solves this problem in a generic manner,
by allowing searching, indexing and rendering of cheat sheets in the terminal.
```
$ cheat tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/
```
This format is well suited to App Builder applications. Since 0.0.7 it is possible to add cheat
sheets to an application, access them without needing to install the cheat command, and also integrate them with that
command if desired.
Cheats are grouped by label, so while your application might have natsctl report jetstream, the cheats are only one level deep and do not need to match the names of commands.
Example
The following example updates the quick start application to include cheats:
```yaml
name: demo
description: Demo application for Choria App Builder
author: https://github.com/choria-io/appbuilder
cheat:
  tags:
    - mycorp
    - cows
  label: demo # this would be the default if not given
  cheat: |
    # To say something using a cow
    demo say hello

    # To think something using a cow
    demo think hello
commands:
  - name: say
    description: Say something using the configured command
    type: exec
    cheat:
      cheat: |
        # This command can be configured using the Cowsay configuration
        Cowsay: /usr/bin/animalsay
    command: |
      {{ default .Config.Cowsay "cowsay" }} {{ .Arguments.message | escape }}
    arguments:
      - name: message
        description: The message to send to the terminal
        required: true
```
Running the application produces:
```
usage: demo [<flags>] <command> [<args> ...]

Demo application for Choria App Builder

Contact: https://github.com/choria-io/appbuilder

Use 'demo cheat' to access cheat sheet style help

Commands:
  say <message>
  ....
```
Since 2 cheats were added, running demo cheat shows a list:
```
$ demo cheat
Available Cheats:

  demo
  say
```
The cheat sheet is accessible directly:
```
$ demo cheat demo
# To say something using a cow
demo say hello

# To think something using a cow
demo think hello
```
Integrate with cheat
The cheat utility is worth investigating. With it installed,
all cheats from an App Builder application can be exported into it:
```
$ demo cheat --save /home/rip/.config/cheat/cheatsheets/personal/demo
Saved cheat to /home/rip/.config/cheat/cheatsheets/personal/demo/demo
Saved cheat to /home/rip/.config/cheat/cheatsheets/personal/demo/say
```
With this done, cheat demo/say retrieves the saved cheat, or all cheats tagged mycorp (one of the tags
added above) can be listed:
Configuration
The relevant configuration consists of the Application Definition and optional Application Configuration.
The XDG Base Directory specification is supported for storing these in the home directory, with system-wide fallback locations, including the standard environment variable overrides such as XDG_CONFIG_HOME.
Files are stored in either /etc/appbuilder/ or ~/.config/appbuilder (~/Library/Application Support/appbuilder on a Mac). When the symlink is created to a choria binary, the locations /etc/choria/builder and ~/.config/choria/builder (~/Library/Application Support/choria/builder on a Mac) will also be searched in addition to the standard locations.
| File | Description |
|------|-------------|
| demo-app.yaml | This is your application definition |
| demo-cfg.yaml | This is your per-application configuration |
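Continuing the demo example used earlier, a per-application configuration might look like this sketch (the Cowsay key matches the .Config.Cowsay reference in earlier examples; the path is illustrative):

```yaml
# ~/.config/appbuilder/demo-cfg.yaml
Cowsay: /usr/bin/cowsay
```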
Runtime Settings and Tools
When invoking appbuilder, various utilities are exposed. Applications also accept some environment variables as runtime configuration.
Builder Info
General runtime information can be printed:
```
$ appbuilder info
Choria Application Builder

Debug Logging (BUILDER_DEBUG): false
Configuration File (BUILDER_CONFIG): not specified
Definition File (BUILDER_APP): not specified
Source Locations: /home/example/.config/appbuilder, /etc/appbuilder
```
This output shows where applications are loaded from and more.
Run Time Configuration
As seen above, a few variables are consulted; the list below provides details:
| Variable | Description |
|----------|-------------|
| BUILDER_DEBUG | When set to any value, debug logging will be shown on the screen |
| BUILDER_CONFIG | When invoking a command, a custom configuration file can be loaded by setting its path in this variable |
| BUILDER_APP | When invoking a command, a custom application definition can be loaded by setting its path in this variable |
With these variables set, the appbuilder info command output will update accordingly.
Finding Commands
All applications stored in source locations can be listed:
```
$ appbuilder list
╭─────────────────────────────────────────────────────────────────────────────────────────╮
│                                    Known Applications                                   │
├────────┬──────────────────────────────────────────────┬─────────────────────────────────┤
│ Name   │ Location                                     │ Description                     │
├────────┼──────────────────────────────────────────────┼─────────────────────────────────┤
│ mycorp │ /home/rip/.config/appbuilder/mycorp-app.yaml │ A hello world sample Choria App │
╰────────┴──────────────────────────────────────────────┴─────────────────────────────────╯
```
Validating Definitions
A recursive deep validation can be run across the entire definition, which will highlight multiple errors in commands and sub commands:
```
$ appbuilder validate mycorp-app.yaml
Application definition mycorp-app.yaml not valid:

  root -> demo (parent): parent requires sub commands
  root -> demo (parent) -> echo (exec): a command is required
```
Compiled Applications
App Builder apps do not need to be compiled into binaries, which allows for fast iteration, but sometimes compilation might be desired.
Version Hint
This was added in version 0.7.2
Basic compiled application
Given an application in app.yaml, we can create a small Go stub:
Compiling this as a normal Go application produces a binary that is an executable version of the app.
Mounting at a sub command
The previous example mounts the application at the top level of the myapp binary, but it can also be mounted at a sub-command level - perhaps there are other compiled-in behaviors to surface:
Here we would end up with myapp embedded [app commands] - the command being mounted at a deeper level in the resulting compiled application. This way an App Builder command can be plugged into any level programmatically.