Best method to release engine with compiled code

Hi there

You might be the best people to suggest a workflow for this.

I have an engine, and part of it is a compiled plugin. As git repos shouldn’t contain compiled code, every time we cache apps we have to copy/compile the required plugin into the install location for it to function. If this engine were an official release, I doubt that people using it would want to do this manually, so I need to automate the process somehow.

I have thought of a few solutions to this problem, but want to run it past you guys, who might have a better idea.

  1. Create a release of the repo, and attach a zipped version of it which includes the plugin, as an asset.
    The github_release descriptor can be changed to allow the user to specify which asset to download and install instead of the standard zipped source code. This method creates a lot of duplication, with the possibility of there being multiple builds of the asset against differing libraries and OSes. It would require the user to know exactly which asset to pull, and they would have to be in charge of updating their configs accordingly. It also requires more work from the developer (unless we can use CI jobs to build all variants and attach them).

  2. Assets that are just the compiled plugins and the descriptor knows where to put them.
    This would require a fair bit of standardisation to work: there would have to be a specific place to put built code, and that would have to be reflected in the engines so that paths are set up correctly. The user would still need to specify which asset to download, unless they just download all assets and somehow update the paths using the launch context (OS, application version, etc.), but we would have to cover a lot of builds for that to be feasible and flexible enough to work seamlessly on all platforms and configurations.

  3. We trigger a build on the user side as a post_download operation.
    Again, this would need some standardisation to fully automate. I reckon a build script at the root could get triggered if it exists. The only issue I can see is that caching apps will take a lot longer, as it will include build time. We would also have to figure out how to build against the user’s system consistently (getting lib and include locations, picking up the correct version of software, etc.), which might not be possible if studios have their own environment set up, like rez. Using Docker is a solution, but it would have to be a prerequisite of the engine for it to work.

  4. The plugin is separate from the engine
    The likelihood of the plugin needing compiling all the time might be small, so maybe it can exist within a studio’s environment, just like all other applications and plugins. It may, however, be dependent on the engine code to function, so the user would have to make sure that the engine and plugin are kept in sync if any changes occur.

  5. Screw it, studios just have to build it on their own.
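
For illustration, option 3 could boil down to something like the sketch below: after the bundle is downloaded, configure and build a CMake project shipped at the bundle root, installing into the engine’s expected location. The CMake layout and function names here are my own assumptions, not part of any Toolkit API.

```python
# Hypothetical sketch of option 3: a post_download build step that drives
# CMake. Assumes the bundle ships a CMake project at its root; all names
# and paths are illustrative only.
import subprocess


def cmake_commands(bundle_root, build_dir, install_root):
    """Return the configure and install commands for the bundled plugin."""
    configure = [
        "cmake",
        "-S", bundle_root,            # source: the downloaded bundle
        "-B", build_dir,              # out-of-source build directory
        "-DCMAKE_INSTALL_PREFIX=" + install_root,
    ]
    install = ["cmake", "--build", build_dir, "--target", "install"]
    return configure, install


def build_plugin(bundle_root, build_dir, install_root):
    """Run the build on the user's machine after the bundle is cached."""
    for cmd in cmake_commands(bundle_root, build_dir, install_root):
        subprocess.check_call(cmd)
```

The downside remains exactly as described above: this bakes compiler, library and include discovery into the cache step, and every studio’s environment will resolve those differently.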

One issue with all of the above is that it might completely remove the system-agnostic approach that Shotgun sets out to achieve. If a localized config is being run, there can’t exist plugins for differing operating systems within the same version, for example, and we can’t run multiple versions of the same engine concurrently without creating multiple launchers, which defeats the object a little bit.

Does anybody have any ideas on how this might work? With applications like Katana and USD only allowing for C++ plugins, this is going to be an issue moving forward.

Thanks in advance

  1. The hook to trigger a build.
    Just found that this hook exists.

EDIT: this doesn’t work in the manner that was expected.


Hi Liam

Welcome to the forums, and great question!
I’ll run this past the team later, but it sounds like you have a good solution there.



Hi Liam

So the hook should probably be deprecated, and our docs are not very clear on why it’s a bad idea to use it. The problem is that the hook is only called when the project setup is run, so it is not useful if you are using a distributed config or you are updating the app on the config.

I chatted with an engineer, and they said we usually bundle the binaries with the repo, but I can appreciate that if the binaries are large, or if you’re updating them regularly, that will lead to bloat in the repository.

We think the best example of where we handle a similar scenario is the Photoshop/After Effects integrations. They require an Adobe plugin that doesn’t come with the repo, so the engine handles downloading and setting it up when the software is launched via the launch app.

In this example, it actually calls the tk-framework-adobe framework to handle the installing of the plugin.

Maybe you can use that approach?
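
A rough sketch of that launch-time pattern is below: pick the prebuilt binary that matches the launch context, and download it on first launch if it isn’t already installed. Every name here (the asset table, filenames, the `download` callable) is made up for illustration; this is not the actual tk-framework-adobe API.

```python
# Hypothetical "install on launch" sketch. The asset table, filenames and
# download mechanism are placeholders, not a real Toolkit/framework API.
import os

# Hypothetical lookup of prebuilt plugin assets per (platform, app version).
PLUGIN_ASSETS = {
    ("linux", "4.0"): "myplugin-linux-4.0.so",
    ("darwin", "4.0"): "myplugin-darwin-4.0.dylib",
    ("win32", "4.0"): "myplugin-win-4.0.dll",
}


def asset_for(platform, app_version):
    """Pick the prebuilt binary matching the launch context."""
    key = (platform, app_version)
    if key not in PLUGIN_ASSETS:
        raise RuntimeError("no prebuilt plugin for %s/%s" % key)
    return PLUGIN_ASSETS[key]


def ensure_installed(install_dir, platform, app_version, download):
    """Download the plugin on first launch if it is not already present."""
    name = asset_for(platform, app_version)
    target = os.path.join(install_dir, name)
    if not os.path.exists(target):
        download(name, target)  # e.g. fetch a GitHub release asset
    return target
```

Because the check runs at launch rather than at cache time, the git repo stays binary-free and the cached config stays platform-agnostic; only the launched machine pulls the build it needs.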


Hi Phil, thanks for getting back to me

What you’ve suggested makes sense, but I am assuming the Photoshop plugins are cross-version, or do you have a framework release for each new version of Photoshop, plus any updates that might happen? How does this handle plugins built against multiple versions of the same software and on different platforms? Would you do a release for each, or bundle them all into one and use some logic to choose which one to use? Also, if a user is running multiple versions of the same Photoshop/Shotgun configs on the same machine, how might it handle the potential version conflicts?

After much deliberation, we have resorted to compiling the software and deploying it with rez. We use a rez launchapp hook to define our Shotgun environments (which we have recently updated and pushed). We just make sure we build against any new release and version up accordingly.
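
For anyone curious, the rez side of this looks roughly like the package definition below. It is an illustrative sketch only: the package name, versions, variants and the environment variable are placeholders for whatever your plugin actually needs, not our real package.

```python
# Illustrative rez package.py for a compiled plugin (config fragment;
# all names and versions are placeholders).
name = "my_engine_plugin"

version = "1.0.0"

# The DCC build this plugin was compiled against.
requires = ["katana-4.0"]

# One build per platform; rez resolves the right variant at runtime.
variants = [
    ["platform-linux"],
    ["platform-windows"],
]


def commands():
    # {root} expands to this package's install location.
    env.KATANA_RESOURCES.append("{root}/plugin")
```

Releasing a new variant per DCC version/platform keeps the git repo binary-free while letting the studio environment, rather than the Toolkit cache, own the compiled artifacts.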

Thanks again



Hi Liam

Sorry for not getting back to you, this dropped off my radar.

Yes, so the framework is versioned separately from the Photoshop engine, I’m not actually sure if a new plugin gets built with every framework release though.

Essentially you could end up in a situation where you have different configs all pointing to different versions of the Photoshop engine, but I believe they will still try to use the latest Adobe plugin*. We essentially try to ensure backwards compatibility.

* When we released the After Effects integration, we moved the plugin logic out of the Photoshop engine into a framework, so that the logic could be shared between the After Effects and Photoshop engines. As this was a major change, we opted to have a brand new plugin. The older tk-photoshopcc engines still used the old plugin, and newer versions of the engine used the newer plugin, so you would end up with both plugins installed. This was not ideal, as you would then have two different Photoshop extensions, which leads to confusion, but we recommend updating to the newer Photoshop engine across all configs for this major plugin change so that you no longer have to have both the old and new plugins installed.

Basically, I would say it works well enough for what we need, with the frequency of framework updates not being high, but you may need a more robust solution depending on your requirements.
