More and more teams using Perforce Helix Core are moving to remote working environments every day.
Some teams are going remote by choice, with members distributed across different geographies and work-from-home increasingly offered as a workplace perk in the name of work-life balance. Other teams were suddenly forced into remote work environments by the COVID-19 pandemic and lockdowns that were instituted worldwide.
Regardless, remote development isn’t anything new to the game development industry. But going remote can make it difficult to collaborate on games because of performance issues that can easily crop up when trying to collaborate over long distances on WAN internet connections.
While Perforce Helix Core comes with a number of performance optimizations built-in, teams that use Helix Core version control often encounter inherent performance barriers because of the sheer size and number of files typically involved in the projects they have in their Perforce depots. (The Unreal Engine alone contains thousands of files and can take up multiple GBs of storage.)
Couple large data volumes with mediocre internet connections that individual team members are often stuck with at home—distributed across different cities, regions, or even continents—and production can easily grind to a crawl.
While some performance issues are unavoidable when working with large files, the Assembla DevOps team recommends a number of tricks and strategies to optimize your Perforce hosting and retain high performance while going remote.
We have developed this list of strategies over 10+ years of experience hosting Perforce for dozens of organizations with hundreds of team members distributed around the globe and terabytes of data under version control. Whether you host your own Perforce server or are looking for a managed Perforce partner to take over your server management, read on for Assembla’s top ten tips for maximizing Perforce Helix Core performance.
Migrating to cloud-based depots is key to ensuring high performance for remote development.
Perforce cloud installations typically perform so well that Perforce Consultant Tom Tyler has gone so far as to call them “a little bit of black magic in the cloud.” Cloud-based installations benefit from cutting-edge improvements that the major cloud providers are constantly making to their hardware and networking gear. Equipment of the caliber offered by companies like Amazon Web Services, Microsoft Azure, Google Cloud Platform and others would be grossly cost-prohibitive to procure for a single company and refresh year after year. But economies of scale and virtualization can make them available in the cloud for a fraction of the cost.
Additionally, most modern cloud providers offer data transfer speeds between their global data centers that are significantly faster than VPNs and even most private, dedicated network providers. One of the most important strategies to overcoming poor performance is distributing Perforce server resources close to distributed team members using replicas or edge servers located in different data centers around the world. Leveraging the high-speed global networks maintained by modern cloud providers, localized cloud replicas become an even more effective tool to reduce latency and improve the experience of developers and artists alike.
Finally, more and more studios these days are opting for a lean operating model that minimizes overhead expenses in favor of strategic investment in key focus areas. Teams want to spend money on things that actually make a difference to the quality of production—and their bottom line. Many teams forego hiring any dedicated IT resources at all, and add P4 hosting duties to the long list of other responsibilities already assigned to build or game infrastructure engineers.
Besides being a poor allocation of already-stretched resources, remote work environments make it even riskier to add on-premises hardware maintenance to your build team’s task list. Your source code and art are the most important assets for a software or game development team, aside from personnel. It is crucial that you ensure uptime, availability, and easily-recoverable backups to avoid incredibly costly server outages and lost production time.
Some cloud instances enable better performance than others for Perforce, and it’s important that you provision the right set of cloud resources to support your team’s usage patterns while avoiding cost overruns, an overloaded server, or other performance degradation.
Based on Perforce benchmarking, cloud-based installations of Perforce Helix Core on compute-optimized resources—such as AWS’ c5 class of instances—have been shown to provide the optimal balance between efficient cloud resource utilization and performance.
Additionally, if you are designing your own cloud-based Perforce infrastructure, you should aim to use instances that:
There are often adjustments you can make to the operating system of your server to optimize data throughput.
If you are using Perforce 2017.1 or later on a Linux system, you can enable Perforce autotune to improve performance; see the Perforce documentation on autotune for more details.
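As a sketch, enabling autotune comes down to a single server configurable (the server address below is a placeholder for your own, and setting configurables requires super access):

```shell
# Enable TCP autotuning on a 2017.1+ Linux server.
# perforce.example.com:1666 is a placeholder address.
p4 -p perforce.example.com:1666 configure set net.autotune=1

# Confirm the setting took effect:
p4 -p perforce.example.com:1666 configure show net.autotune
```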
If you are hosting on a Linux-based system with Windows-based clients, you can also manually add the following lines to /etc/sysctl.conf or a custom conf file in /etc/sysctl.d/ (if this path is present and referenced by /etc/sysctl.conf ), and then run ‘sudo sysctl -p’ to load in the new sysctl settings.
net.core.rmem_default = 2796203
net.core.wmem_default = 2796203
net.core.rmem_max = 22369621
net.core.wmem_max = 22369621
net.ipv4.tcp_rmem = 4096 2796203 22369621
net.ipv4.tcp_wmem = 4096 2796203 22369621
While this may go without saying, it is actually quite important that you keep your Helix Core server up to date. With each new release, Perforce makes optimizations and tweaks under the hood to speed up p4d. The longer you put off updating your p4d version, the more performance benefits you’re missing out on.
At the very least, your team should plan to upgrade your p4d server with every yearly major release (although minor releases often include important bug fixes and updates as well and so should be strongly considered).
Besides operating system-level tuning, there are certain configurables that can be set on your Perforce server to reduce the resources and processing time required to facilitate certain commands.
These configurables include (in order of usual level of performance improvement):
Of these, MaxLimits settings (i.e. MaxResults, MaxScanRows, etc.) should only be set in extreme cases where the size of the team and/or repository requires it and usage patterns are well understood by server administrators. If that is not the case, MaxLimits can easily over-restrict users and contribute to production delays and frustration.
Although protections are most often used as a way to set restrictions on certain users or groups of users, they can also improve performance by limiting the number of files involved in syncing and other p4 server operations initiated by groups of restricted users. You must have superuser privileges to set protections. For more information, see Setting protections with p4 protect.
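As a sketch, a protections table that grants each group write access only to its own area might look like the fragment below (the group and depot path names are hypothetical):

```
super   user   admin       *   //...
write   group  developers  *   //depot/projA/code/...
write   group  artists     *   //depot/projA/art/...
```

Because the `artists` group can only reach `//depot/projA/art/...`, their syncs and other commands never scan the rest of the depot.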
Similarly, you can manually control the amount of data referenced by your clients by creating a “tight” workspace view. In this case, you add more detail about the files that should be mapped from the depot to the client view, omitting files or streams that do not need to be accessed by that view. Then, any users with access to the client view will only reference the files and metadata contained in the view when executing Perforce commands, reducing the total amount of data that needs to be relayed between client and server and reducing server processing time. See Tight views.
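For illustration, a tight client workspace spec might look like the fragment below, mapping in only the code and approved art while explicitly excluding raw source assets (all client and path names are hypothetical):

```
Client: alice-projA
Root:   C:\work\projA
View:
    //depot/projA/code/...          //alice-projA/code/...
    //depot/projA/art/approved/...  //alice-projA/art/approved/...
    -//depot/projA/art/raw/...      //alice-projA/art/raw/...
```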
While it can be complicated to deploy and maintain multiple regions of infrastructure, Perforce Federated Services are instrumental to facilitate high-performance Perforce Helix Core installations for a number of reasons.
First of all, replicas and edge servers deployed in distributed geographic regions help to reduce latency caused by geographic distance and drastically improve server performance for team members who aren’t located on the same continent as the master. Part of the reason this strategy is so effective in the cloud is that the cross-continental fiber-optic networks of today’s major cloud providers are the fastest way for data to cross the globe. As a result, cloud-based deployments enable the most powerful multi-site Perforce topologies that have ever been possible.
Additionally, specific types of replicated servers aid in the distribution of workload across the system. Proxies, replicas, and edge servers can each serve certain sets of Perforce server operations, reducing the amount of data required to be transferred back and forth to the master or commit server. This is especially the case for build replicas or build edge servers, which prevent build farms from overloading the master server.
Although many teams want to get away from maintaining infrastructure locally, hybrid cloud topologies that include a local p4 proxy or replica server can significantly improve local performance if some team members still work from a single central location.
In addition to server-side optimizations, there are a number of best practices you can employ on the client-side to improve performance and streamline cloud-based collaboration.
Much like configurable properties on the server-side, Perforce clients can be tuned to maximize data transfer speeds and reduce the amount of data transferred between server and client to improve performance at the client.
Perforce uses the TCP protocol to relay data between different Helix Core servers in your network and client machines. Depending on the quality of your local internet connection, you can sometimes improve performance by tuning TCP and buffer settings on your local machine.
You can change these settings either by adding the corresponding lines to your P4CONFIG file (e.g., p4config.txt) or by setting the properties from the command line.
P4 Command Line
$ p4 property -a -n filesys.bufsize -v 2M
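Equivalently, the buffer size can live in a P4CONFIG file alongside your connection settings, so every command run under that directory picks it up. The sketch below writes such a file; the server address is a placeholder and the 2M value matches the command above:

```shell
# Sketch: put client-side tuning into a P4CONFIG file.
# perforce.example.com:1666 is a placeholder for your server.
cat > p4config.txt <<'EOF'
P4PORT=ssl:perforce.example.com:1666
filesys.bufsize=2M
EOF

cat p4config.txt
```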
Lowering the “Maximum number of files displayed per changelist” setting from its default of 1,000 reduces the amount of information P4V has to retrieve from the server for each file in the changelist. To change it: select the “Preferences…” menu item in P4V, select the “Connection” tab (“Server Data” in P4V 2012.1 and later), set the “Maximum number of files displayed per changelist” field to a lower number, and click “OK” to close the Preferences window.
By limiting how often P4V checks the Perforce server for updates, you can improve the performance of your local P4V client and also reduce the load on the server.
You can update polling frequency from your P4V preferences menu, or by entering the following in the P4 Command Line client:
$ p4 property -a -n P4V.Performance.ServerRefresh -v 60
Note: This P4V variable can be controlled from the server using the p4 property command described in Storing Property Settings in Perforce.
To change the polling frequency in P4V: select the “Preferences…” menu item, select the “Connection” tab (“Server Data” in P4V 2012.1 and later), set the “Check server for updates every NNN minutes” field to a higher number or to zero (to never check for updates), and click “OK” to close the Preferences window. Note: if P4V is set to never check for updates, users need to manually refresh P4V by selecting the “View | Refresh All” menu item.
Configure clients, such as Visual Studio, to “lazy load file state” if possible. While this may reduce the speed with which you receive updates, for most workflow situations you don’t need the updates immediately anyway and this can significantly improve the response time of your client and reduce network congestion.
You can also improve the performance of your Perforce connection by limiting the number of files you need to download to your machine. One way to minimize the total dataset size is to employ specific branching strategies or branch types, like task streams and virtual streams, that minimize metadata stored about the stream or only include a specific subset of files.
A task stream is a lightweight, sparse stream that is quicker to use than a regular stream. Task streams are only intended to live for a short time in one individual’s workspace and do not create the same amount of metadata as regular streams, so therefore don’t require the same amount of data transfer or back-and-forth communication with the server. For more on using task streams, see Working with task streams.
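Creating a task stream is the same as creating any other stream, just with its type set to `task`. The sketch below opens a task stream spec for editing; the stream and parent paths are hypothetical:

```shell
# Create a short-lived task stream off //projA/main.
p4 stream -t task -P //projA/main //projA/alice-fix-login
```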
Virtual streams work similarly to task streams, with some differences in terms of the workflow options available when working with them compared to task streams. Virtual streams are particularly helpful because they allow you to filter files from a stream that you do not need to sync to your client machine, but still allow you to submit changes directly to the parent stream without having to promote your changes back up the stream. For more details on using virtual streams, see Working with virtual streams.
In addition to creating different types of streams, you can speed up the work you need to do on a set of files by creating a branch with the specific subset of files needed for given feature development or bug fix. In this case, you retain all the workflow capabilities of a standard branch, but minimize the amount of data that needs to sync back and forth between your machine and the central repository by selecting a particular set of files to populate your branch. To do this, simply select the specific folder or set of files you would like to branch, right-click, and choose Branch files.
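From the command line, the same idea can be sketched with `p4 populate`, which branches the selected files entirely on the server without syncing anything to your workspace (the depot paths and description below are illustrative):

```shell
# Seed a branch containing only the docs subtree of projA.
p4 populate -d "Docs-only branch for the manual rewrite" \
    //depot/projA/docs/... //depot/projA-docs-branch/docs/...
```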
In addition to branching, you can also manually control the amount of data you need to sync from the central repository to your client machine by limiting the amount of data referenced by certain commands you perform.
For example, you can sync smaller numbers of files at a time by syncing specific files in your depot in separate commands.
p4 sync //depot/projA/...
p4 sync //depot/projB/...
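Per-project syncs can also be combined with parallel file transfers, available in Perforce 2014.1 and later (the thread count below is illustrative, and the server’s net.parallel.max configurable must permit it):

```shell
# Sync each project separately, transferring up to 4 files at once.
p4 sync --parallel=threads=4 //depot/projA/...
p4 sync --parallel=threads=4 //depot/projB/...
```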
In many cases, using a branch-merge development model with Perforce streams can accelerate your development workflow. But locking files is sometimes the simplest, most bulletproof way to avoid conflicts for large binary files. File-locking is definitely a less modern workflow-style and won’t work for every studio, but it can help save you time and headache by eliminating the need to resolve conflicts between binary files later. For more on using locks to avoid multiple resolves, see Preventing multiple resolves by locking files.
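A minimal locking workflow for a large binary asset might look like the sketch below (the depot path is hypothetical):

```shell
# Open the asset for edit and lock it so no one else can submit changes.
p4 edit //depot/projA/art/boss_model.uasset
p4 lock //depot/projA/art/boss_model.uasset

# ...make changes, then submit; submitting releases the lock.
p4 submit -d "Updated boss model"
```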
Assembla is the exclusive provider of on-demand Perforce Cloud—the GitHub for Perforce Helix Core—and the leading provider of managed, single-tenant Perforce cloud hosting. Whether your team is just getting started with Helix Core version control or already has a Perforce server with hundreds of thousands of files and terabytes of data, Assembla’s Perforce Cloud hosting solutions are tuned for high performance out of the box.
To request migration assistance, get a quote, or ask us about anything else related to Perforce hosting, please reach out to our Assembla Customer Success Team at email@example.com, or start a chat with us at https://assembla.com/perforce.