Perforce Helix Core Edge vs Forwarding Replicas

Jonathan Lowe | Last updated on November 9, 2023

There’s a reason Perforce Helix Core is used by 19 of the top 20 AAA game studios around the globe. In an 8K world filled with experiences that are more immersive, hyper-realistic, and visually stunning than ever before, Perforce Helix Core continues to be the only version control system that reliably handles large asset files such as game art, textures, and levels.

However, ever-increasing file sizes are amplifying one of the core challenges of 21st-century game development: the difficulty of collaborating on large files from remote/home offices or with teammates located around the globe. To meet those requirements, solutions like Helix Core Edge and Forwarding Replicas were developed.

Distributed development has become the new normal of game development, as demonstrated by the growth of conferences like the External Development Summit (XDS), the success of specialized art, animation, and visual effects studios like Lakshya Digital, and new models for production pipelines that accommodate more dynamic relationships between studios, publishers, and external partners. As we head into the new decade, the problem of handling large files across long distances is only going to intensify.

Distributed Versioning in Perforce Helix Core

Luckily, Perforce Helix Core comes with a number of features designed to address the problem of multi-site replication. Perforce launched federated services as early as version 2008.1 to promote “better performance for users at remote sites, reduce bandwidth requirements for installations separated by low bandwidth and/or high latency connections, and reduce the load on central servers.”

Through successive updates, the Perforce team has introduced more and more specialized server types and wrap-around services to facilitate easier collaboration across sites. At the time of this writing, Helix Core supports various types of replicated servers, such as:

  • P4 proxies
  • Forwarding replicas
  • Commit-edge servers
  • Read-only replicas
  • Build servers
  • Standby replicas (P4D 2018.2 and later)
  • Forwarding standby servers (P4D 2018.2 and later)
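
Each of these roles is declared in the Services field of a server spec on the master. As a minimal illustration (the server name below is a placeholder), registering a build server looks roughly like this:

    p4 server build-replica-1     # opens the server spec in your editor

    # In the spec, set the role via the Services field:
    #   ServerID:  build-replica-1
    #   Services:  build-server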

In this post, we will cover the top five pros and cons of two of the most popular replica types used to build multi-site distributed Perforce topologies: commit-edge server pairs and forwarding replicas.

Perforce Commit-Edge Server Pairs

Commit-edge architecture was first released in Perforce 2013.2. Unlike more traditional replication, the distinguishing feature of commit-edge architecture is that certain data manipulated on an edge server never needs to be replicated upstream to the commit server. By handling most commands locally, without needing to ping the central commit server, edge servers offer the best overall performance of any distributed Helix Core server type (as long as they are geographically located close to the teams connecting to them).
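
As a rough sketch of how such a pair is wired together (server IDs, hostnames, and ports below are placeholders; see the Helix Core multi-site deployment guide for the full procedure):

    # On the master: declare both roles via server specs
    p4 server commit-1        # spec sets Services: commit-server
    p4 server edge-tokyo      # spec sets Services: edge-server

    # Point the edge server at the commit server and give it pull threads
    p4 configure set edge-tokyo#P4TARGET=commit.example.com:1666
    p4 configure set "edge-tokyo#startup.1=pull -i 1"      # replicate metadata
    p4 configure set "edge-tokyo#startup.2=pull -u -i 1"   # replicate archive files
    p4 configure set "edge-tokyo#startup.3=pull -u -i 1"

    # On the edge host, after seeding it from a commit-server checkpoint,
    # stamp the server ID and start p4d:
    p4d -r /p4/edge -xD edge-tokyo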

Pros of Commit-Edge Server Pairs

  1. Best overall performance of all distributed Helix Core topologies.
  2. Substantially reduce the workload on the commit server by handling all read operations and workspace-centric write operations, such as syncing, checking out, merging, resolving, and reverting files.
  3. Edge-to-edge chaining allows fine-grained access control and places server resources close to teams across a wide geographic range, while minimizing both the load on any single server and the data transfer between servers.
  4. Relatively easy to convert a master/forwarding-replica pair to a commit-edge pair (commit servers generally require lower levels of machine provisioning than master servers, while edge servers generally require higher levels of machine resources than forwarding replicas).
  5. Can be used as a Build Server where write commands are part of the build process.

Cons of Commit-Edge Server Pairs

  1. More complex setup and maintenance than forwarding replicas (e.g. edge servers require their own HA/DR plan, separate from the commit server).
  2. Higher level of machine provisioning and resources required, potentially increasing cost across the system.
  3. More complex user management, as users are not automatically created on edge servers.
  4. Involve a more complex workflow, as many settings and operations are unique to a specific edge server and are not shared with the rest of the Helix Core system (e.g. shelved changes on an edge server are not available on any other servers in the topology unless specifically promoted; see the example after this list).
  5. Edge servers cannot be used as a warm standby in the case of commit server failure.
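
As noted in the fourth point above, shelves are edge-local by default; a shelf created on an edge server can be made visible to the rest of the topology by promoting it to the commit server. A minimal example (the change number is illustrative):

    p4 shelve -p -c 1234    # -p promotes the shelved change to the commit server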

Perforce Forwarding Replicas

Although basic replicas were available in Perforce Helix Core as far back as version 2009.2, forwarding replicas were not introduced until version 2012.1. Forwarding replicas contain both versioned files and file metadata, which allows them to service common read-only commands without introducing extra latency or load on the master (target) server. When a command submitted to a forwarding replica would change file contents or metadata, the operation is simply forwarded to the master server for completion, and the response is automatically relayed back through the replica.
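
A minimal configuration sketch for such a replica (the server ID, host, and port are placeholders; the configurables shown are the standard ones for a forwarding replica):

    # On the master: create a server spec with Services: forwarding-replica,
    # then configure the replica's target and pull threads
    p4 server fwd-replica-1
    p4 configure set fwd-replica-1#P4TARGET=master.example.com:1666
    p4 configure set "fwd-replica-1#startup.1=pull -i 1"      # metadata
    p4 configure set "fwd-replica-1#startup.2=pull -u -i 1"   # versioned files
    p4 configure set fwd-replica-1#db.replication=readonly
    p4 configure set fwd-replica-1#lbr.replication=readonly
    p4 configure set fwd-replica-1#rpl.forward.all=1          # forward write commands to the master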

Forwarding replica with master server architecture diagram from Helix Core Server Administrator Guide: Multi-Site Deployment (2019.2).

Pros of Forwarding Replicas

  1. Reduce load on the master server and improve performance for many of the Perforce commands most sensitive to latency, such as p4 sync and p4 resolve.
  2. Enable offline checkpoints, preventing target server downtime for checkpoints and backups (see the sketch after this list).
  3. Can be used as a warm standby; do not require a separate HA/DR plan from the master.
  4. Can be daisy-chained together for better-localized performance similar to edge-to-edge chaining in commit-edge servers.
  5. Easier to configure and manage compared to a commit-edge server topology.
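
On the offline-checkpoint point above: because the replica holds a full copy of the metadata, a checkpoint can be taken against the replica's database while the master keeps serving users. A rough sketch (paths are placeholders):

    # On the replica host: dump a checkpoint without touching the master
    p4d -r /p4/replica -jd /p4/backups/replica.ckp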

Cons of Forwarding Replicas

  1. Cannot process any write operations.
  2. Do not speed up commands that attempt to change metadata or versioned file contents (must wait on the target server to process and relay changes back before registering successful completion of any write operations).
  3. Higher overall latency when compared to edge servers.
  4. Require higher machine provisioning for the target master server than a commit server needs in a commit-edge topology.
  5. Incur more data transfer between forwarding-replica/master pairs than between commit-edge pairs.

Ultimately, deciding which solution best fits your team’s requirements is a balancing act between the unique characteristics of your team’s project, workflow, and resources. Most of the studios on distributed Assembla Perforce Single Tenant Cloud solutions have found that managed forwarding replicas in the cloud deliver good performance, although the Assembla P4 DevOps team has also set up commit-edge topologies for teams with more sophisticated personnel footprints, workflows, or requirements.

Whether you choose to implement a commit-edge or forwarding replica system for your distributed Perforce development, both solutions provide significant benefits and performance improvements for globally-distributed teams.

Have any questions or tips you’d give teams interested in setting up a distributed Perforce system based on your experience? Leave a comment below!

And if you would like advice about which solution might best fit your team’s particular needs, or want to learn more about Assembla’s managed cloud Perforce solutions, please don’t hesitate to reach out to support@assembla.com!
