Rendering BOSH Templates
You can read more about BOSH templates on bosh.io.
The following sections describe each step involved in working with BOSH Job Templates, from beginning to end.
The Data Gathering step is run using a single QuarksJob, which has one pod with multiple containers.
Extract Job Spec and Templates from Image
This happens in one init container for each release present in the deployment manifest.
The entrypoint of each such init container is responsible for copying the contents of
/var/vcap/jobs-src to a shared directory, where other containers can access it.
This shared directory is /var/vcap/all-releases, which is mounted in all containers of the pod.
Each init container uses the release’s docker image.
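As a rough illustration of such an init container (the container name, helper function, and exact copy command below are assumptions, not the operator's actual values), a per-release "spec copier" could look like this:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// jobSpecCopierContainer sketches an init container that copies a release's
// job specs and templates out of the release image into the shared volume,
// so the other containers in the pod can read them.
func jobSpecCopierContainer(releaseName, releaseImage, sharedVolume string) corev1.Container {
	return corev1.Container{
		Name:  "spec-copier-" + releaseName,
		Image: releaseImage,
		Command: []string{"/bin/sh", "-c",
			// Copy every job's spec and templates into a per-release folder
			// of the shared directory.
			"mkdir -p /var/vcap/all-releases/jobs-src/" + releaseName +
				" && cp -ar /var/vcap/jobs-src/. /var/vcap/all-releases/jobs-src/" + releaseName},
		VolumeMounts: []corev1.VolumeMount{
			{Name: sharedVolume, MountPath: "/var/vcap/all-releases"},
		},
	}
}
```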
Calculation of Required Properties for an Instance Group and BPM Info
The main purpose of the data gathering phase is to compile all information required for all templates to be rendered and for all instance groups to be run:
- link instances
- bpm yaml
Two containers are run for each instance group in the deployment manifest, using the image of the CF Operator. Each of these containers writes its output to a file named output.json in the volume mount /mnt/quarks of the container:
The first is the “Resolved Instance Group Properties” YAML file. It contains a deployment manifest structure that only has information pertinent to a single instance group. It includes:
- all job properties for that instance group
- all properties for all jobs that are link providers to any of the jobs of that instance group
- the rendered contents of each
bpm.yml.erb, for each job in the instance group
- link instance specs for all AZs and replicas; read more about instance keys available for links here
Link instance specs are stored in the quarks property key for each job in the instance group.
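The exact schema is defined by the operator; as a rough sketch of the shape of this data (type and field names here are assumptions), it could be modeled like this:

```go
package sketch

// Illustrative types for the "Resolved Instance Group Properties" output;
// the operator's real types and field names may differ.

// LinkInstance is one instance (one replica in one AZ) exposed to link
// consumers, mirroring BOSH's per-instance link keys.
type LinkInstance struct {
	Name      string `json:"name"`
	ID        string `json:"id"`
	Index     int    `json:"index"`
	AZ        string `json:"az"`
	Address   string `json:"address"`
	Bootstrap bool   `json:"bootstrap"`
}

// QuarksJobInfo is the per-job "quarks" key that carries the compiled link
// instance specs (and, in the BPM Info output, the rendered BPM).
type QuarksJobInfo struct {
	Consumes map[string][]LinkInstance `json:"consumes"`
}

// ResolvedJob is one job entry of the trimmed-down deployment manifest.
type ResolvedJob struct {
	Name       string                 `json:"name"`
	Release    string                 `json:"release"`
	Properties map[string]interface{} `json:"properties"`
	Quarks     QuarksJobInfo          `json:"quarks"`
}
```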
Once all properties and link instances are compiled, bpm.yml.erb can be rendered for each job and for each AZ and replica of the instance group.
The second is the “BPM Info” YAML file. It also contains a deployment manifest structure that only has information pertinent to a single instance group; it includes the rendered contents of each bpm.yml.erb, for each job in the instance group.
The BPM information is stored under the quarks property of each BOSH Job.
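For illustration only, the per-AZ, per-replica rendering loop could be structured like the sketch below; the render callback stands in for the operator's actual ERB rendering, and its signature is an assumption:

```go
package sketch

// RenderBPMForInstanceGroup sketches rendering bpm.yml.erb once for every
// AZ/replica combination of an instance group. The render callback is a
// stand-in for the operator's actual ERB rendering.
func RenderBPMForInstanceGroup(
	jobNames []string,
	azs []string,
	replicas int,
	render func(jobName, az string, ordinal int) (string, error),
) (map[string][]string, error) {
	bpmByJob := map[string][]string{}
	for _, job := range jobNames {
		for _, az := range azs {
			for ordinal := 0; ordinal < replicas; ordinal++ {
				// Each rendering sees the instance-specific spec values (az,
				// ordinal, ...) plus the compiled properties and link instances.
				out, err := render(job, az, ordinal)
				if err != nil {
					return nil, err
				}
				bpmByJob[job] = append(bpmByJob[job], out)
			}
		}
	}
	return bpmByJob, nil
}
```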
Because container entrypoints in Kubernetes cannot be different among the replicas of a Pod, we don’t support the usage of things like spec.index in the ERB template of bpm.yml.
Create QuarksStatefulSet and QuarksJobs
The operator creates definitions for
QuarksStatefulSets (for BOSH Services) or
QuarksJobs (for BOSH Errands).
These have the following init containers:
- one for each unique release in the instance group - used for copying release job specs and templates; these use the release image
- one init container that performs ERB rendering; this runs using the CF Operator image
Init containers copy the templates of the releases to
/var/vcap/all-releases, which is a shared directory among all containers.
Another init container is run using the operator’s image, for rendering all templates. It mounts the “Resolved Instance Group Properties”
Secret (generated in the data gathering step) and performs ERB rendering.
It’s also configured with environment variables that provide the values needed for the BOSH spec.* property keys.
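As a sketch of how such variables can be wired up (the variable names below are hypothetical, not the operator's actual list), the Kubernetes downward API can expose pod metadata to the rendering init container:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// renderingEnv sketches environment variables a rendering init container
// might receive so it can compute BOSH spec.* values. The variable names are
// hypothetical; the downward API fieldRef usage is standard Kubernetes.
func renderingEnv(azIndex, replicas string) []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			// The pod name contains the StatefulSet ordinal, from which
			// spec.index and spec.id can be derived.
			Name: "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		},
		{
			Name: "POD_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
			},
		},
		{Name: "AZ_INDEX", Value: azIndex},  // hypothetical: AZ this StatefulSet serves
		{Name: "REPLICAS", Value: replicas}, // hypothetical: replica count of the instance group
	}
}
```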
Run the entrypoints
Once all the init containers are done and all control scripts and configuration files are available on disk, the BOSH Job containers can start. Their entrypoints, env vars, capabilities, etc. are set based on the BPM information.
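For illustration, mapping one BPM process onto a container could look like the sketch below; the BPMProcess type and its fields are assumptions standing in for the rendered BPM data:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// BPMProcess is a simplified stand-in for one process entry of a rendered
// bpm.yml; the real schema has more fields (limits, hooks, unsafe, ...).
type BPMProcess struct {
	Name         string
	Executable   string
	Args         []string
	Env          map[string]string
	Capabilities []string
}

// containerForProcess sketches how BPM information drives the container that
// runs a BOSH Job: entrypoint, env vars and added Linux capabilities.
func containerForProcess(jobImage string, p BPMProcess) corev1.Container {
	env := make([]corev1.EnvVar, 0, len(p.Env))
	for k, v := range p.Env {
		env = append(env, corev1.EnvVar{Name: k, Value: v})
	}
	caps := make([]corev1.Capability, 0, len(p.Capabilities))
	for _, c := range p.Capabilities {
		caps = append(caps, corev1.Capability(c))
	}
	return corev1.Container{
		Name:    p.Name,
		Image:   jobImage,
		Command: append([]string{p.Executable}, p.Args...),
		Env:     env,
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{Add: caps},
		},
	}
}
```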
The following sections describe specific implementation details for algorithms required in the rendering process.
Services and DNS Addresses
DNS Addresses for instance groups are calculated from the instance group, the deployment, and the Kubernetes namespace they run in.
INDEX is calculated from the AZ and the pod’s ordinal, so that every replica across all AZs gets a unique index.
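A minimal sketch of what this could look like, assuming the address is built from the deployment name, instance group name, index and namespace, and that INDEX combines the AZ index with the pod ordinal (both assumptions, not the operator's exact format or formula):

```go
package sketch

import "fmt"

// instanceIndex combines the AZ index with the pod ordinal so that every
// replica across all AZs gets a unique BOSH-style index (assumed formula).
func instanceIndex(azIndex, replicas, podOrdinal int) int {
	return azIndex*replicas + podOrdinal
}

// instanceAddress builds a cluster-internal DNS name for a single instance,
// assuming a per-instance ClusterIP Service named after the deployment,
// instance group and index (assumed naming).
func instanceAddress(deployment, instanceGroup, namespace string, index int) string {
	return fmt.Sprintf("%s-%s-%d.%s.svc.cluster.local", deployment, instanceGroup, index, namespace)
}
```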
In order for things to work correctly across versions and AZs, we need ClusterIP
Services that select for the pods of an Instance Group.
For example, assuming a replica count of 3 and an AZ count of 2 for a “nats” instance group, with multiple StatefulSet versions available, we would see a corresponding set of Services.
Resolving Links
The following steps describe how to resolve links, assuming all information is available. The actual implementation transforms data and stores it in between steps, but the outcome is the same.
The terms below are used:
- current job - the job for which rendering is happening
- desired manifest - the deployment manifest used
- provider job - the job that has been identified to be the provider for a link
To resolve a link, the following steps are performed (a sketch in code follows the list):
- the name and type of the link are retrieved from the spec of the current job
- the name of the link is looked up in the current job's instance group consumes key (an explicit link definition); if it is found and set to nil, nil is returned and resolving is complete
- if the link's name has been overridden by an explicit link definition in the desired manifest, the desired manifest is searched for a corresponding job that has the same name; if found, the link is populated with the properties of the provider job: first, the defaults for the exposed properties (defined in the provides section of the spec of the provider job) are set to the defaults from the spec, and then the properties from the desired manifest are applied on top
- if there was no explicit override, we search all the releases for a job that provides a link with the same type
Read more about links here.
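A rough sketch of these steps with simplified types (not the operator's actual implementation; for brevity the search is restricted to the jobs of the desired manifest):

```go
package sketch

// Link is a simplified resolved link.
type Link struct {
	Name       string
	Type       string
	Properties map[string]interface{}
}

// JobSpecProvides is a provider declared in a release job's spec.
type JobSpecProvides struct {
	Name       string
	Type       string
	Properties map[string]interface{} // defaults from the release job spec
}

// Job is a simplified manifest job.
type Job struct {
	Name     string
	Consumes map[string]*Link           // explicit link definitions from the manifest
	Provides map[string]JobSpecProvides // providers declared in the job spec
}

// resolveLink follows the steps above: check the current job's explicit
// consumes entry, then look for an explicit override in the desired
// manifest, and finally fall back to an implicit provider with the same type.
func resolveLink(linkName, linkType string, currentJob Job, manifestJobs []Job) *Link {
	// explicit consumes entry; nil means the link was deliberately disabled
	if explicit, ok := currentJob.Consumes[linkName]; ok {
		if explicit == nil {
			return nil
		}
		// explicit override: find a provider with the overridden name, start
		// from the spec defaults, then apply manifest properties on top
		for _, job := range manifestJobs {
			if provides, ok := job.Provides[explicit.Name]; ok {
				merged := map[string]interface{}{}
				for k, v := range provides.Properties {
					merged[k] = v
				}
				for k, v := range explicit.Properties {
					merged[k] = v
				}
				return &Link{Name: linkName, Type: provides.Type, Properties: merged}
			}
		}
		return nil
	}
	// implicit resolution: any job that provides a link of the same type
	for _, job := range manifestJobs {
		for _, provides := range job.Provides {
			if provides.Type == linkType {
				return &Link{Name: linkName, Type: linkType, Properties: provides.Properties}
			}
		}
	}
	return nil
}
```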
Calculating spec.* and link().instances.*
The spec of each job instance can be calculated from information available in the pod and its environment.
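As an illustration (the variable names and the index formula below are assumptions, consistent with the earlier sketches), the per-instance spec values could be derived like this:

```go
package sketch

import (
	"fmt"
	"strconv"
	"strings"
)

// InstanceSpec holds the BOSH spec.* values a template can use; only a
// subset of fields is shown here, for illustration.
type InstanceSpec struct {
	AZ        string
	ID        string
	Index     int
	Bootstrap bool
}

// specFromEnvironment is a sketch: it derives the pod ordinal from the
// StatefulSet pod name (e.g. "nats-z1-v2-0" -> 0) and combines it with a
// hypothetical AZ index and replica count to produce a BOSH-style index.
func specFromEnvironment(podName, az string, azIndex, replicas int) (InstanceSpec, error) {
	parts := strings.Split(podName, "-")
	ordinal, err := strconv.Atoi(parts[len(parts)-1])
	if err != nil {
		return InstanceSpec{}, fmt.Errorf("pod name %q has no ordinal suffix: %w", podName, err)
	}
	index := azIndex*replicas + ordinal // assumed formula, see the earlier sketch
	return InstanceSpec{
		AZ:        az,
		ID:        podName,     // the pod name doubles as a stable instance ID here
		Index:     index,
		Bootstrap: index == 0, // the first instance acts as the bootstrap node
	}, nil
}
```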
Why render BPM separately from all other BOSH Job Templates?
Because we need information from BPM to actually know what to run. Without that, we don’t have an entrypoint, env vars, etc. - so we can’t create a pod and containers for the BOSH Job.
Why run all release images for Data Gathering?
We need to run everything all at once because of links. The only way to resolve them is to have all the BOSH Job specs available in one spot.
Is everything supported in templates, just like BOSH?
It should, yes. All features should work the same (that’s the goal). There are two caveats:
- The use of instance-specific spec.* values in the ERB template of bpm.yml is limited, because bpm.yml is rendered before the actual instance group runs, in a different pod.
- The use of anything that differs per replica in bpm.yml is not supported: any BPM information that is different for each replica cannot be supported by the CF Operator, because all Pod replicas are identical by definition.