The Conductor submitter for Houdini is a Houdini Digital Asset (HDA) that allows you to ship renders to Conductor from within the application.
This document is intended to be a comprehensive reference for all features and concepts. If you just want to get up and running fast, head over to the Houdini tutorial page.
The submitter HDA generates a submission payload, which is sent to the cloud when you press Submit. This payload is in the form of a JSON object. You can view the resolved submission at any time in the Preview tab of the submitter UI.
By analyzing this payload, you will gain a good understanding of how Conductor works at a conceptual level and will find troubleshooting submission issues easier.
In the Preview Panel, you'll find:
- All expressions in the HDA are resolved, including those in file paths, the job title, and other parameters.
- All file dependencies are collected. For optimization reasons, this is done on-demand with the Do asset scan button.
- Environment variables to be active on the render nodes are generated based on your software choices and custom additions.
- Task commands (one per instance) are generated based on your frame range specifications and the task template.
The data in the preview panel is kept up to date as you change configuration settings. For example, when you change the frame range or output image paths in the input ROP, the list of tasks in the preview panel updates.
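To make this concrete, the resolved payload is a JSON object roughly along these lines. The field names and values below are illustrative only, not the exact Conductor schema; inspect the Preview tab for the real structure of your submission.

```json
{
  "job_title": "shot_010 mantra render",
  "project": "my_project",
  "instance_type": "n1-standard-8",
  "preemptible": true,
  "environment": {"JOB": "/projects/shot_010"},
  "upload_paths": ["/projects/shot_010/shot_010.hip"],
  "scout_frames": "1,50,100",
  "tasks_data": [
    {"frames": "1-2", "command": "hython /path/to/chrender.py ..."},
    {"frames": "3-4", "command": "hython /path/to/chrender.py ..."}
  ]
}
```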
The ROP to be rendered. Connect one of the following types of driver node here.
- IFD (Mantra)
- Ris (Renderman)
- geometry (cache)
- dop (dynamics)
A human-readable name for the type of driver. This is generated by an expression and there's no need to change it.
The path of the connected driver. You can enter the path of any ROP here in case it can't be connected. The submission will attempt to call its render method either way.
Click to establish a connection with Conductor. This fetches information from your account including the lists of projects, software packages, and instance types available.
Submit the job to Conductor.
Click to save a Python script that can submit this job offline.
The job title appears in the Conductor dashboard. The default expression builds it from the hip-file name, the renderer, and other information so the job is easy to identify. You may overwrite the default expression.
This refers to a project on the Conductor dashboard. The dropdown menu is populated or updated when the submitter connects to your Conductor account. If the menu contains only the - Not Connected - option, press the Connect button.
If projects are added or removed since connecting to Conductor, you can press the Connect button again to refresh the list.
Instance type family
Choose between machines with or without graphics cards. If your job does not require GPUs, then don't select a GPU instance type as they are considerably more expensive.
Specify the hardware configuration used to run your tasks. You are encouraged to run tests to find the most cost-efficient combination that meets your deadline. You can read about hardware choices and how they affect costs in this blog post.
Preemptible instances are less expensive to run than non-preemptible. The drawback is that they may be stopped at any time by the cloud provider. The probability of a preemption rises with the duration of the task. Conductor does not support checkpointing, so if a preemption occurs, the task starts from scratch on another instance. It is possible to change the preemptible setting in the dashboard for your account.
Set how many times a preempted task will be retried automatically.
This is the version of Houdini to run on the render nodes. It can be different from your local version, but be aware of feature changes affecting your render.
The plugin software (if any) that supports the connected input driver ROP.
Add additional plugin software, provided and licensed by Conductor, that needs to be enabled on the render node. For example, if you are running an Arnold render but need to evaluate a RenderMan shader, you can add the RenderMan plugin here.
A chunk is the set of frames handled by one task. If your renders are reasonably fast, it may make sense to render many frames per task, because the time it takes to spin up instances and sync files can be significant by comparison.
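The relationship between chunk size and task count can be sketched in a few lines of Python. This is an illustration of the concept, not the submitter's own code:

```python
import math

def chunk(frames, chunk_size):
    """Group a list of frames into chunks; each chunk becomes one task."""
    return [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]

frames = list(range(1, 11))   # frames 1-10
tasks = chunk(frames, 4)      # 10 frames, chunk_size 4 -> ceil(10/4) = 3 tasks
assert len(tasks) == math.ceil(len(frames) / 4)
```

A larger chunk size means fewer tasks, so less instance spin-up overhead, but less parallelism.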
Override frame range
Override the frame range defined in the input ROP. When checked, the frame range parameter is editable, otherwise it is populated by an expression.
A frame-spec is a comma-separated list of arithmetic progressions. In most cases, this will be a simple range, such as 1-100.
However, any set of frames may be specified efficiently in this way.
Negative numbers are also valid.
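To make the frame-spec format concrete, here is a small Python sketch that expands a spec of comma-separated arithmetic progressions into a frame list. The actual submitter uses its own parser; this is just an illustration of the notation:

```python
def expand_spec(spec):
    """Expand a frame-spec like '1,7,10-20x5' into a sorted list of frames."""
    frames = set()
    for part in spec.split(","):
        part = part.strip()
        # A progression looks like 'start-endxstep'; step defaults to 1.
        if "-" in part[1:]:  # ignore a leading minus sign on negative numbers
            body, _, step = part.partition("x")
            sep = body.index("-", 1)  # the '-' separating start from end
            start, end = int(body[:sep]), int(body[sep + 1:])
            frames.update(range(start, end + 1, int(step or 1)))
        else:
            frames.add(int(part))
    return sorted(frames)

expand_spec("1-100x30")     # -> [1, 31, 61, 91]
expand_spec("1,7,10-20x5")  # -> [1, 7, 10, 15, 20]
```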
Use scout frames
Enable the Scout Frames feature. See below.
Specify a set of frames to render first. We start any tasks that contain these frames. All others are put on hold, which allows you to check a subsample of your sequence before committing to the full render.
You can use a frame spec to specify scout frames, for example:
1-100x30. Alternatively, you can choose how many scout frames you want and let the submitter calculate well-spaced scout frames from the current frame range automatically.
The remote render nodes execute tasks in their entirety, so if you have chunk size set greater than 1, then all frames are rendered in any task containing a scout frame.
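The interaction between scout frames and chunks can be sketched as follows. This hypothetical snippet shows which tasks start immediately, given a set of scout frames:

```python
def scout_tasks(tasks, scout_frames):
    """Return indices of tasks that contain at least one scout frame.

    Tasks run in their entirety, so every frame in a scouted chunk renders,
    even frames that were not listed as scout frames.
    """
    scouts = set(scout_frames)
    return [i for i, task in enumerate(tasks) if scouts & set(task)]

tasks = [[1, 2], [3, 4], [5, 6]]  # chunk_size 2
scout_tasks(tasks, [3])           # -> [1]; frame 4 renders too
```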
Scout frame spec:
Read-only parameter to show the resolved scout frame spec.
Read-only parameter to show the number of frames to render.
Read-only parameter to show the number of tasks. For example, if chunk_size is 2, then there will be half as many tasks as frames.
Scout frame count:
Read-only parameter to show the number of scout frames. If chunk_size is greater than one, then there may be more frames rendered than the number of scout frames specified. This is because tasks are always executed in their entirety.
Scout task count:
Read-only parameter to show the number of tasks that contain the specified scout frames.
Always use autosave
If the hip file has been modified, it is always autosaved with a name given by the autosave_scene parameter, and it is this file that is shipped to Conductor. If the scene has not been modified, you may just want to ship the version currently on disk. In that case, turn this parameter off. You'll need to make sure to save the scene manually before hitting the submit button, otherwise autosave is invoked.
The name to use for autosave. The default expression generates it by adding a prefix to the current scene name.
The name of the hip file on the Linux render node. If submitting from Mac or Linux, this will be unchanged. If submitting from Windows, it will have the drive letter stripped off and any backslashes converted to forward slashes.
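The Windows-to-Linux path conversion amounts to something like the following sketch (the submitter's own implementation may differ):

```python
import re

def to_posix(path):
    """Strip a Windows drive letter and convert backslashes to forward slashes."""
    path = re.sub(r"^[A-Za-z]:", "", path)  # "C:\\projects" -> "\\projects"
    return path.replace("\\", "/")

to_posix("C:\\projects\\shot_010\\scene.hip")  # -> "/projects/shot_010/scene.hip"
```

Mac and Linux paths pass through unchanged, since they contain neither drive letters nor backslashes.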
The folder where output images are saved to. You cannot download any renders or other files unless they are saved somewhere under this folder. The default expression looks at the connected driver node to infer the folder where files are set to be saved. In most cases you should not need to change this.
The default render_script, chrender.py, is similar to hrender.py. Since it will run on Conductor's render nodes, it enforces very verbose output in order to make logs more useful for troubleshooting. You can replace chrender.py with your own script. It will be uploaded and run on the remote render node. If your script takes a different set of args, then you'll need to adjust the Task Template expression to match. You can always see how the task template is resolved in the Preview tab.
Copy the render script to a new location so you can edit it. It can also be beneficial to copy the script if using the upload daemon on another machine that doesn't have access to the Conductor installation directory.
Command that runs on each render node at Conductor. The default task template expression is configured to provide arguments to the default render_script, chrender.py; they work in concert with each other. Look at the comments in the expression itself for more info. You can check the Preview tab to see how the command is resolved in each task.
Asset scan regex
In order to scan for assets to be uploaded, all file reference parameters in the scene are evaluated for one frame. Each evaluated filename is then adjusted to have certain parts, such as frame-number padding, replaced by wildcards to create a globbable pattern. For example, the file reference $HIP/tex/texture.### resolves to a pattern that matches every frame in the sequence.
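The idea behind the wildcard substitution can be sketched like this. The token patterns below are illustrative; the submitter's actual regex is configurable via this parameter:

```python
import re

def globbable(path):
    """Replace common frame tokens with a wildcard so one glob pattern
    matches the whole file sequence."""
    pattern = re.sub(r"#+", "*", path)         # texture.### -> texture.*
    pattern = re.sub(r"\$F\d*", "*", pattern)  # render.$F4.exr -> render.*.exr
    return pattern

globbable("$HIP/tex/texture.###")  # -> "$HIP/tex/texture.*"
```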
Asset scan excludes
Comma-separated list of Unix-style wildcard patterns to exclude from the asset scan. For more information on Unix-style wildcards, see this page: https://docs.python.org/3/library/fnmatch.html
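As an illustration of how such exclusion patterns behave, here is a small example using Python's standard fnmatch module (the paths and patterns are hypothetical):

```python
import fnmatch

paths = [
    "/projects/tex/diffuse.0001.rat",
    "/projects/tex/backup/diffuse.0001.rat",
    "/projects/ref/notes.txt",
]
excludes = ["*/backup/*", "*.txt"]  # entered comma-separated in the UI

# Keep only paths that match none of the exclusion patterns.
kept = [p for p in paths
        if not any(fnmatch.fnmatch(p, pat) for pat in excludes)]
# kept -> ["/projects/tex/diffuse.0001.rat"]
```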
- Please visit this detailed page for a deeper understanding of the asset scraping mechanism.
Browse for files to upload that were not found automatically by the asset scan. The best way to check the results of the asset scan is to look in the preview panel and click the do_asset_scan button.
You may also browse for entire folders.
Add environment variables and values to be set on the render nodes. You may want this for example if you have your own shell script and you need to make an addition to the PATH variable so that the script can be found. In this case you would need to browse for the script in the extra assets section.
- If submitting from Windows, make sure to remove the drive letter from the path to the script while defining the environment variable. Note that environment variables can be defined as exclusive or appendable.
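As a rough illustration of the exclusive/appendable distinction, an appendable PATH entry in the payload might look something like the fragment below. The field names here are assumptions for illustration, not the exact schema; check the Preview tab for the real representation.

```json
{
  "name": "PATH",
  "value": "/path/to/my/scripts",
  "merge_policy": "append"
}
```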
Add existing variable
A convenient UI to add extra remote environment variables based on variables that have been defined locally.
Use upload daemon
Use Upload Daemon is off by default, which means that the task of uploading assets happens within Houdini itself. Although this requires no extra setup, if you have many assets, Houdini will block until uploading completes.
A better solution may be to turn on Use Upload Daemon. An upload daemon is a separate background process, so assets are not uploaded in the application. The submission, including the list of expected assets, is sent to Conductor, and the upload daemon continually asks the server if there are assets to upload. When your job hits the server, the daemon fetches the list and uploads the files, allowing you to continue with your work.
You can start the upload daemon either before or after you submit the job. Once started, it will listen to your entire account, and you can submit as many jobs as you like.
You must have Conductor Core installed in order to use the upload daemon and other command line tools. See the installation page for options.
To run an upload daemon, open a terminal or command prompt, and run the following command.
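Assuming the Conductor Core command-line tools are on your PATH, starting the daemon typically takes the form below. The subcommand name is an assumption here; consult the tool's built-in help for the exact invocation and options.

```shell
# Start the daemon; it then watches your account for jobs with assets to upload.
conductor uploader
```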
Once started, the upload daemon runs continuously and uploads files for all jobs submitted to your account.
Attach a location to this submission for the purpose of matching to an uploader and/or downloader process.
If your organization is distributed in several locations, you can enter a value here, for example, London. Then when you run a downloader daemon you can add the location option to limit downloads to only those that were submitted in London.
Add one or more email addresses, separated by commas, to receive an email when the job completes.
Set the number of tasks to show. This is for display purposes only and does not affect the tasks that are submitted to Conductor.
Do asset scan
Click this button to see the results of a full asset scan in the payload. This is an optimization for display purposes only, since the preview is updated frequently and asset scanning may be expensive. On submission, a full asset scan is always run.
Payload
The raw submission object. This is automatically updated as you populate the fields in the Configuration tab. You can return to this tab at any point to confirm that your submission includes the correct data.