Environment Variables in Storage Roots

Hi all,

I’ve searched a bit on this topic but haven’t found a great answer yet, though I did see that this was potentially added to the roadmap.

I am currently setting up a project with a storage root that uses an environment variable that all users will have access to ($PROJECT_PATH).

I can get this to work correctly in tank by modifying a couple of method returns, which I’ll list below, but SG then seems to have an issue resolving those files once they are created. Before I go too far down a rabbit hole, is there a better way of doing this? Right now it’s just two files, but now I am looking at potential changes to the FolderIOReceiver class and beyond.

Here are the method changes I’ve made:

  • config\install\core\python\tank\pipelineconfig.py

    • Change line 794 to “project_roots_lookup[root_name] = os.path.expandvars(project_root.current_os)”
  • config\install\core\python\tank\folder\folder_types\project.py

    • Change line 84 to “return os.path.expandvars(self._storage_root_path)”
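For reference, os.path.expandvars substitutes any variables it can resolve and passes unknown ones through untouched; a quick illustration with a made-up value:

import os

# made-up value for illustration; each user would have their own
os.environ["PROJECT_PATH"] = r"D:\projects\my_project"

print(os.path.expandvars(r"$PROJECT_PATH\sequences"))
# D:\projects\my_project\sequences

print(os.path.expandvars(r"$UNDEFINED_VAR\sequences"))
# $UNDEFINED_VAR\sequences (unknown variables pass through unchanged)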

The error I then receive when attempting to create task folders from tank:

tank Task #### folders

ERROR: Critical! Could not update SG with folder data. Please contact support.
Error details: API batch() request with index 0 failed. All requests rolled back.
API create() CRUD ERROR #6: Create failed for [Attachment]: Path
ABSOLUTE_PROJECT_PATH doesn’t match any defined Local Storage.

The folders are created on disk, but it looks like the io receiver can’t resolve the absolute paths back to a storage root in SG.
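My guess at what’s happening, illustrated with made-up paths: the Local Storage entry in SG still holds the unexpanded value, so the absolute path on disk never prefix-matches it.

# made-up paths to illustrate the mismatch
local_storage_root = r"$PROJECT_PATH"            # as registered in SG
created_on_disk = r"D:\projects\my_project\seq"  # after expandvars

print(created_on_disk.startswith(local_storage_root))
# False -> "doesn't match any defined Local Storage"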


Kudos to you for trying to solve this unfortunate shortcoming of SG.
I hope you get somewhere.

I tried modifying <project>/install/core/python/tank/util/storage_roots.py
in the function _get_storage_roots_metadata()
and had some temporary success.

I inserted a bit of code to parse the value of each *_path key in the roots.yml file for an environment variable:

def _get_storage_roots_metadata(storage_roots_file):
    """
    Parse the supplied storage roots file

    :param storage_roots_file: Path to the roots file.
    :return: The parsed metadata as a dictionary.
    """

    log.debug("Reading storage roots file form disk: %s" % (storage_roots_file,))

    try:
        # keep a handle on the raw metadata read from the roots file
        roots_metadata = (
            yaml_cache.g_yaml_cache.get(storage_roots_file, deepcopy_data=False) or {}
        )  # if file is empty, initialize with empty dict


        # resolve any environment variable names stored in '*_path' keys;
        # each top-level key in roots.yml is a storage root name mapping
        # to a dict of per-OS paths (windows_path, linux_path, mac_path)
        for root_name, root_info in roots_metadata.items():
            for key, value in root_info.items():
                if key.endswith("_path") and value:
                    resolved_path = os.environ.get(value)
                    if resolved_path:
                        root_info[key] = resolved_path


    except Exception as e:
        raise TankError(
            "Looks like the roots file is corrupt. "
            "Please contact support! "
            "File: '%s'. "
            "Error: %s" % (storage_roots_file, e)
        )

    log.debug("Read metadata: %s" % (roots_metadata,))

    return roots_metadata
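For this to do anything, the *_path values in roots.yml have to hold environment variable names rather than literal paths, since the code looks the value up in os.environ directly. Roughly this shape once parsed (made-up root name and variable):

# hypothetical parsed roots.yml where each *_path value names an env var
roots_metadata = {
    "primary": {
        "windows_path": "PROJECT_PATH",
        "linux_path": "PROJECT_PATH",
        "mac_path": "PROJECT_PATH",
    }
}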

But since this file lives in the <project>/install folder rather than <project>/config, I’m unsure whether it’s one of the files in danger of being overwritten by normal SG behavior (e.g. during a core update).
I’ll be following this post.

From a thread in March of 2020

Hi Logan,

Thanks for the additional information! I’ll check those storage roots out as well to see if it gets us any closer.

We’ve made a bit of progress and are able to get folder creation to work and upload correctly to SG; the one issue we are hitting now, though, is a database concurrency error when launching DCC software from toolkit.

TankError: Could not create folders on disk. Error reported: Database concurrency problems: The path '$PROJECT_PATH\foo' is already associated with SG entity <Foo entity>. Please re-run folder creation to try again.

Getting here was done by changing the following lines to expand the path with os.path.expandvars (rough sketch after the list):

  • install\core\hooks\process_folder_creation.py line 118
  • install\core\python\tank\path_cache.py line 263
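The pattern in both spots is the same; a rough sketch with a hypothetical helper, since any place that writes or compares cached paths has to expand them identically:

import os

def _normalize(path):
    # hypothetical helper: expand env vars the same way everywhere the
    # path cache writes or compares, so "$PROJECT_PATH\foo" and its
    # expanded form are never treated as two different paths
    return os.path.normpath(os.path.expandvars(path))

os.environ["PROJECT_PATH"] = r"D:\projects\my_project"  # made-up value
print(_normalize(r"$PROJECT_PATH\foo"))
# D:\projects\my_project\foo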

This error might be related to some of the information mentioned in the second link; I’ll be poking around at it this week.

Did you get this working in the end?

There’s a better way to do this now: the core hook for changing the storage configuration.

https://developers.shotgridsoftware.com/tk-core/core.html#module-default_storage_root
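A minimal sketch of what an override could look like; the execute() signature comes from that docs page, but the body, the root name, and the project id are all assumptions I haven’t verified:

# config/core/hooks/default_storage_root.py (unverified sketch)
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()


class DefaultStorageRoot(HookBaseClass):
    def execute(self, storage_roots, project_id=None, metadata=None):
        # hypothetical: make the "p4v" root the default for one project;
        # _default_storage_name is a private attribute, so treat this as
        # an unverified assumption about the StorageRoots internals
        if project_id == 123:  # assumed project id
            storage_roots._default_storage_name = "p4v"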


Thanks Ricardo,
Is my interpretation of this correct: this would change the default storage root, but not allow for custom paths at a per-user level for a defined storage root?
e.g. I have primary for most typical workflows, and another root called p4v which gets hydrated with a custom root defined per user via an environment variable?

You could create a field at the user level to set the storage root.
However it would be a different LocalStorage in SG so it will require some thinking on how to resolve things back.

If you’re using Perforce though, is it not better to register publishes there against a common root?
Perhaps by mapping drives or creating symlinks?

That does sound more straightforward, but would that mean a user would need to symlink or map a single Perforce workspace to a single drive prior to launching a project in SG Desktop? The implication being that if they wanted to switch to a different project, they would need to change the drive mapping to the new project and re-launch SG Desktop. Would that mean only one project linked at a time?
Maybe not!
Perhaps I could have SG handle creating a symlink between a user workspace and the p4v root projects folder…
e.g. on engine_init:
Find the user workspace, e.g. d:\p4v\my_project_A_workspace.
Map/symlink this workspace folder to the Perforce root P:\projects\project_A\ (rough sketch below).
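Something like this, using the hypothetical paths above:

import os

workspace = r"d:\p4v\my_project_A_workspace"  # the user's P4 workspace
link = r"P:\projects\project_A"               # the p4v storage root

if not os.path.exists(link):
    # on Windows, creating symlinks needs admin rights or Developer Mode;
    # a directory junction (mklink /J) is an alternative that doesn't
    os.symlink(workspace, link, target_is_directory=True)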

I could see that working.

In this case I would create one LocalStorage per project, and maybe do the LocalStorage configuration per project instead of per user.

However, you’d want some sort of abstraction, so symlinking or mapping would be best; that way you have a LocalStorage for Perforce that works for all your artists, whether internal or external.

"1 localStorage per Project "
You mean a custom root per project? That seems unscalable?

Or maybe one LocalStorage that represents the local p4v storage (and you make sure to map it properly)?

I’m curious to know what game studios do to deal with this issue.

If the perforce workspace includes the folder structure expected by SG, then it’s all good, I can swap out the drive of the root for a given artist based on an env var.

But if the Perforce workspace needs to be “injected” into a particular location within the folder schema, then the only approach I imagine would work would be using symlinks (which I’m not entirely confident would be reliable/robust on Windows).

I would be happy to take the first approach, but I’m worried a project might come down the line where we have no control over the folder structure within the Perforce workspace, e.g. where a client has ownership of the data on Perforce and has other requirements to meet. Maybe this is a “what-if” that may never happen.

Can you explain a little bit about what you are tracking inside the Perforce structure?

My experience in UE would be that the UE project is outside of the SG folder structure.
The art pipeline is an SG structure, and you use the Loader/Breakdown apps (or other methods) to load/update assets in the game (and save IDs and other data in the metadata on the UE object/asset).

At least in UE you don’t really want to track version numbers of files in both the engine and Perforce; you just push and pull that from/to the asset tracking system.

Hello, I am approaching the same problem. Our storage is all on Perforce, and we want to maintain our current workflow, which is to map our Perforce depot to a folder relative to the current user directory. This is important, for example, if two people are using the same computer, working on the same files, but with different states of their workspace. It also makes file permissions issues easier to deal with.

I would like to have a single LocalStorage that represents this “Perforce Root”. However, the actual root path would need to be changed dynamically somewhere in sgtk, so that the windows_path and mac_path fields are ignored and replaced at runtime.

Is this not possible? For example on macOS, the root of the storage should be something like /Users/me/perforce/workspace
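Illustrating with the path above: the per-user part already resolves with os.path.expanduser, so whatever runtime hook does the replacement would just need to build the root like this:

import os

# "~" resolves differently for each artist, giving a per-user root
print(os.path.expanduser("~/perforce/workspace"))
# e.g. /Users/me/perforce/workspace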

Thanks for any help!

EDIT:

I had some success at least getting the “Create Folders” action to work by overriding process_folder_creation.py and resolving the environment variables in the path there (rough sketch below).
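For anyone trying the same, a rough sketch of that override, with caveats: you copy the default hook from install/core/hooks/process_folder_creation.py into config/core/hooks/, then expand variables wherever a path is consumed. The execute() signature matches the default hook, but the simplified body below only handles plain folder items; it’s an illustration, not the full hook.

# config/core/hooks/process_folder_creation.py (simplified sketch)
import os
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()


class ProcessFolderCreation(HookBaseClass):
    def execute(self, items, preview_mode, **kwargs):
        # each item is a dict describing one filesystem action; only
        # items carrying a "path" are handled in this simplification
        paths = []
        for item in items:
            path = item.get("path")
            if not path:
                continue
            path = os.path.expandvars(path)  # the added step
            paths.append(path)
            if not preview_mode and not os.path.exists(path):
                os.makedirs(path)
        return paths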

If the path to Perforce is the same on each OS, then why not use it as the actual Storage Location?
Nothing is stopping you from setting a Storage Location in Flow to match /Users/me/perforce/workspace if that works for all users on macOS.

Each user would have a different username. Replace “me” with the name of each user.

Use symlinks to link a common path to the correct folder?
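Roughly, with a made-up fixed path: the common path is what you’d register as the Storage Location, and each user links it to their own workspace.

import os

common_root = "/Studio/perforce"  # made-up fixed path registered in SG
user_workspace = os.path.expanduser("~/perforce/workspace")

# each machine/user links the common path to their own workspace
if not os.path.islink(common_root):
    os.symlink(user_workspace, common_root)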

Or just use one workspace per machine instead of a workspace per user.
In the end they sync up and are the same thing, depending on what branch/stream you load.

Perhaps have a look to see if this core hook helps you out.

https://developers.shotgridsoftware.com/tk-core/core.html#module-default_storage_root