When I started using Unreal I had experience with several in-house engines as well as Unity. There are a number of concepts that are common to almost all engines and, once you understand them, you come to expect them and rely on them.

I couldn’t find a good resource that would break down how Unreal implements these common concepts. Often, people talk about how to do simple things (like getting a character up on screen) or describe the parts Unreal consists of, without explaining how they typically get used.

What I want to do here is write a high-level view of how I’m using Unreal’s architecture. It’s important to play nice with the engine (forcing a square peg into a round hole is a bad idea) while building an architecture that will scale to the scope of your game. This article forgets the trees and focuses on the forest.

Starting from the Top

There is a big difference between a tech demo and a game.

If your project doesn’t have multiple screens, if it doesn’t have a way to save game state, if it doesn’t have many different game modes, if it doesn’t have loading screens or streaming, then it’s a tech demo and you have a lot of work ahead of you.

Almost every game has an object (often a Singleton) that represents the entire game. This object persists across screens and modes and serves as the connective tissue for the entire experience.

In Unreal the easiest way to fulfill this role is to subclass UGameInstance. This object is created early when the game boots, and the function Init() gets called. It will survive through level changes and can be easily accessed from the World object.
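As an engine-agnostic sketch of the pattern (the class names here are illustrative, not Unreal API): a single object is created at boot, `Init()` is called once, and every "world" merely borrows a pointer to it, so its state survives level changes.

```cpp
#include <cassert>
#include <string>

// Illustrative stand-in for a game-wide object such as a UGameInstance
// subclass: created once at boot, initialized, and kept alive across
// level changes.
class MyGameInstance {
public:
    void Init() { bInitialized = true; }       // called once at startup
    void Shutdown() { bInitialized = false; }  // called once at exit
    bool IsInitialized() const { return bInitialized; }

    std::string CurrentLevel;  // example of state that survives level transitions
private:
    bool bInitialized = false;
};

// A "world" only borrows the instance; it never owns it, so the
// instance outlives any individual level.
struct World {
    MyGameInstance* GameInstance = nullptr;
};
```

The key design point is the direction of ownership: levels come and go, but each of them can reach the one long-lived object through the World.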

Note that the simple examples that come with Unreal don’t do this. They just set up a Game Mode (which we’ll discuss later) because they are tech demos.

After you create your Game Instance, you want to be able to access it easily. A nice way to do this is to create a static function in C++ and expose it to Blueprints as a Pure function:

UFUNCTION(BlueprintPure, Category="Game", meta=(WorldContext="WorldContextObject"))
static UMyGameInstance* GetMyGameInstance(const UObject* WorldContextObject);

UMyGameInstance* UMyGameInstance::GetMyGameInstance(const UObject* WorldContextObject)
{
	UGameInstance* gameInstance = UGameplayStatics::GetGameInstance(WorldContextObject);
	return Cast<UMyGameInstance>(gameInstance);
}

This will give you the same kind of interface Unreal provides for its own built-in accessors.

Here’s some of the functionality I have in the Game Instance (and therefore accessible to the entire game):

  • Load levels, including holding information the level will need
  • Keep user’s game state (including saving and restoring)
  • Control screen transitions and history
  • Expose configuration from DataTables and Data-only Blueprints
  • Remember whether a Gamepad is being used or not
  • Collect telemetry for data analysis and replays
  • Manage shared Render Targets (thumbnails used throughout the game)
  • Provide deterministic random numbers for procedural generation
  • Whatever else is convenient
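For the deterministic-random-numbers bullet, here is a minimal sketch using the C++ standard library (Unreal's own FRandomStream serves the same role); the class name is illustrative:

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// A seeded generator owned by the game-wide object: the same seed
// always yields the same sequence, so procedural content (and replays)
// can be reproduced exactly.
class DeterministicRng {
public:
    explicit DeterministicRng(uint32_t seed) : engine(seed) {}

    // Returns a value in [lo, hi], advancing the deterministic sequence.
    int RangeInclusive(int lo, int hi) {
        return std::uniform_int_distribution<int>(lo, hi)(engine);
    }

private:
    std::mt19937 engine;  // fully determined by the seed
};
```

Storing the seed in the save file is then enough to regenerate the same procedural content later.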

Not all of this functionality lives on the one Game Instance class. You can easily create individual classes that you then access through the Game Instance pointer.
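A sketch of that split (all class names are illustrative): the game-wide object creates the individual managers during `Init()` and simply exposes them, so callers go through one well-known pointer.

```cpp
#include <cassert>
#include <memory>

// Illustrative sub-managers: each owns one area of responsibility.
class SaveManager {
public:
    int SlotCount() const { return 3; }  // hypothetical example query
};

class TelemetryManager {
public:
    void Record(const char* /*event*/) {}  // stub for illustration
};

// The game-wide object merely creates and exposes the managers.
class MyGameInstance {
public:
    void Init() {
        Saves = std::make_unique<SaveManager>();
        Telemetry = std::make_unique<TelemetryManager>();
    }

    std::unique_ptr<SaveManager> Saves;
    std::unique_ptr<TelemetryManager> Telemetry;
};
```

This keeps the Game Instance itself thin: it is the access point, not the place where every feature is implemented.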

One note about the Game Instance in Unreal is that it’s not a true Singleton in the typical C++ sense of a static object. In fact, if you run in PIE (play-in-editor) mode, there will be one per PIE instance. Keep this in mind, as you will need a trail to reach it from UObjects that don’t live in the World: either pass a World context as a parameter in a function call, or establish a relationship with an Actor or Widget that has access to the World.

Another implication of how Game Instances work is that if you end up storing something in a static variable, it will survive across PIE runs, which is probably not what you want. If it must be static, set it during Init() and reset it during Shutdown(), but expect that multiple PIE instances will still share it.
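The reset discipline looks roughly like this (names illustrative): give the static a known state in `Init()` and clear it again in `Shutdown()`, so a previous run's leftovers never leak into the next one.

```cpp
#include <cassert>

// Sketch of the reset pattern for unavoidable statics.
class MyGameInstance {
public:
    void Init() { RunCounter = 0; }      // deliberate reset on every boot
    void Shutdown() { RunCounter = 0; }  // and again on teardown

    // Shared across ALL instances, including every PIE instance!
    static int RunCounter;
};

int MyGameInstance::RunCounter = 0;
```

The comment on `RunCounter` is the important part: even with the reset, concurrent PIE instances still see the same variable, so statics remain a last resort.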

Managing Screens

Now that we have a way to control the flow from one screen to the next, the question is: what’s in a screen? To answer that question let’s talk about how Unreal does UI (user interface).

Unreal’s preferred way to do UI is UMG (Unreal Motion Graphics), a wrapper around its native system, Slate. UMG provides visual tools to structure your UI screens, and if you’re using Unreal Editor to create UI, you’re using UMG.

A big difference between Unreal and Unity relates to how UI is structured. In Unity everything is a GameObject, whether it’s a character running around on screen or a UI button. On the other hand, while both the character and the button will be UObjects in Unreal, the first one will be an AActor while the second one will be a UWidget. AActors and UWidgets effectively live in two different contexts (actors in the level, widgets in the viewport), which means you can layer any UI over any world. One disadvantage is that passing information between the two contexts can be cumbersome.

For my purposes, I consider a screen to be a UI Blueprint (derived from UUserWidget) that includes all the functionality of the screen and controls the widgets contained within. For instance, if I needed an inventory screen I’d derive a C++ class from UUserWidget (e.g. UInventoryScreen), add references to its parts using BindWidget, and write code to populate it and handle events like selecting one of the items.

With this in mind, let’s consider two scenarios and see how we could approach them:

  1. The UI covers the entire screen, and the gameplay world is only visible during a mission. A sensible structure would be to have a single empty level for all the UI and swap screens by changing the UI Blueprint that sits on the viewport. Note you can still present 3D assets using Render to Texture.
  2. The UI lives in a 3D world or lets the player see the 3D world through it. If switching screens means going from one 3D environment to another, you’ll need to have multiple levels (unless you want to use sublevels), and changing screens will typically involve changing both the level and the UI Blueprint on the viewport.

The second approach is more flexible, but will lead to longer waits when going from screen to screen, so if having the UI feel very fast and responsive is a priority, consider limiting the amount of level switching involved.

We’ll imaginatively call the system responsible for controlling screen flow the UScreenManager. It contains the definitions of all the screens (level name, UI Blueprint) as well as a stack of screens, to support transitions as well as going back. Pushing a screen from code is as simple as passing the id of that screen and some (optional) contextual information. This is roughly what the code looks like:

void UScreenManager::Push(EScreen screen, const FScreenData& data)
{
	ScreenStack.Emplace(screen, data);
	const FScreenDef* screenDef = Find(screen);
	UGameplayStatics::OpenLevel(this, screenDef->LevelName); // this is asynchronous
}

// registered with FCoreUObjectDelegates::PostLoadMapWithWorld.AddUObject(this, &UScreenManager::PostLevelLoad);
void UScreenManager::PostLevelLoad(UWorld* World)
{
	const FScreenDef* screenDef = GetTopScreenDef();
	UUserWidget* screenWidget = UWidgetBlueprintLibrary::Create(World, screenDef->Widget, World->GetFirstPlayerController());
	screenWidget->AddToViewport();
}

Some notes about this code. First, it’s not production-ready: error checking is missing. Second, I haven’t found a robust way to store general information for the screen (FScreenData). A possible approach is to encode it in a string (maybe using JSON) but that can be inefficient, so for my game I add individual pieces of content as needed to that struct.
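To show how the stack supports going back, here is an engine-agnostic sketch of the same idea (the injected `LoadLevel` callback stands in for `UGameplayStatics::OpenLevel`; all names are illustrative, and error handling is again minimal):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Each entry remembers enough to rebuild its screen: which level it
// needs and which UI Blueprint to place on the viewport.
struct ScreenEntry {
    std::string LevelName;
    std::string WidgetName;
};

class ScreenManager {
public:
    // Injected stand-in for UGameplayStatics::OpenLevel.
    std::function<void(const std::string&)> LoadLevel =
        [](const std::string&) {};

    void Push(const ScreenEntry& entry) {
        Stack.push_back(entry);
        LoadLevel(entry.LevelName);
    }

    // Going back pops the current screen and reloads the one beneath it.
    bool Back() {
        if (Stack.size() < 2) return false;  // nothing to go back to
        Stack.pop_back();
        LoadLevel(Stack.back().LevelName);
        return true;
    }

    const ScreenEntry* Top() const {
        return Stack.empty() ? nullptr : &Stack.back();
    }

private:
    std::vector<ScreenEntry> Stack;
};
```

Because the stack keeps the contextual data alongside each entry, "back" restores not just the previous level but the previous screen state as well.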

Also there is a problem we discussed earlier: if we need to pass information from the level to the screen, the only way is for the screen to pull it by doing global searches, which is not ideal. A possible solution is to use the Level Blueprint (in BeginPlay) to create the screen widget, then pass the information as needed. The downside is that it’s not a very good general approach and it requires switching maps to edit the Blueprint.

Finally, because we get notified when the level is finished loading, we can add a loading screen and then remove it when loading completes.
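The loading-screen bracket can be sketched like this (engine-agnostic; the `OnMapLoaded` callback plays the role of `FCoreUObjectDelegates::PostLoadMapWithWorld`, and all names are illustrative):

```cpp
#include <cassert>
#include <functional>

// Minimal overlay: shown when a load starts, hidden when it finishes.
class LoadingScreen {
public:
    void Show() { bVisible = true; }
    void Hide() { bVisible = false; }
    bool IsVisible() const { return bVisible; }

private:
    bool bVisible = false;
};

class LevelLoader {
public:
    LoadingScreen Overlay;
    std::function<void()> OnMapLoaded;  // registered once at startup

    void OpenLevel() {
        Overlay.Show();  // loading begins: cover the screen
        // ... asynchronous level load elided; in Unreal the engine
        // fires the post-load notification when the map is ready ...
        if (OnMapLoaded) OnMapLoaded();
    }
};
```

Registering `Overlay.Hide()` as the post-load handler is all it takes to make the overlay disappear exactly when the new level is ready.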

Show Me the Game

The main piece, and the one most people think about when they consider Unreal, is gameplay. As we said earlier, objects that exist in a 3D world are a type of Actor.

If you come from Unity, it’s theoretically possible to build the game entirely out of Actor Blueprints with the right combination of components, creating new components as necessary. That’s not the Unreal way, however. Unreal provides a Gameplay Framework that will be extremely useful, especially if your game resembles Fortnite or Gears of War.

In a nutshell, the rules for winning or losing the game belong in the Game Mode and Game State objects. Every spaceship, monster, and hero is a Pawn, or even a Character (if you want a mesh, collision, and movement logic). It’s a good idea to build those so they can be driven equally by the player or the AI, which is why Unreal externalizes the logic that controls them into a Player Controller and an AI Controller. There are a lot more details, but that’s the general idea.

Each level has default types for all of these things (you can see them in the World Settings window). One can almost picture having single-player levels and multiplayer levels, with different Game Modes.

Another important consideration, especially if you’re used to Unity, is where the functionality lives. In Unity all the code belongs in Behaviors that attach to generic GameObjects. While this is possible in Unreal as well using Components, you can also subclass AActor (or APawn or ACharacter) and add functionality directly in that subclass. The approach you choose will depend on the type of game you’re making. If you expect a large variety of actors with different combinations of characteristics (movement, health, damage, and so on), then build those pieces as components. If, on the other hand, you only have a few stereotypes, you’ll avoid some overhead by writing that code in a subclass. Note that it’s not possible to use Unreal in a fully data-oriented way like Unity’s DOTS.

I’m working on a tactics game, so I’ve deviated slightly from how a typical action game would be structured. I don’t need any of the fidelity of an action game so I don’t need the built-in movement or physics collision. But the most important difference has to do with being able to run combat simulations outside of the Unreal world. I use this to show predictions to the player so they understand their options, as well as to search for the best course of action for the AI.

The main impact of this difference is that my Pawns don’t know anything about hit points or any of the combat logic. They are simply visual puppets. The real logic lives in a separate UObject (not an Actor) that exists in a separate logic world and is extremely lightweight. I call them UUnit. They can be created, modified and destroyed quickly and frequently during simulations. To be honest, the only reason they are a UObject to begin with is to be able to expose them to Blueprints.

UUnits get registered with an actor that bridges the simulation and the visual world (I call it AGrid). In this way, when the level is unloaded and the actor is destroyed, all UUnits get garbage-collected.
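That separation can be sketched in plain C++ (the names mirror the ones above, but the code is engine-agnostic and illustrative): the unit holds only combat state, the grid owns the units, and simulations operate on cheap value copies without touching any visuals.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Pure simulation state: no mesh, no transform, just combat data.
struct Unit {
    int HitPoints = 10;
    void TakeDamage(int amount) { HitPoints -= amount; }
    bool IsAlive() const { return HitPoints > 0; }
};

// Bridge between simulation and visuals: owns the units, so when the
// bridge goes away the units go with it (the role AGrid plays above).
class Grid {
public:
    Unit* Register() {
        Units.push_back(std::make_unique<Unit>());
        return Units.back().get();
    }

    // "What if" prediction: copy the unit by value, apply the effect,
    // and return the hypothetical result without mutating the original.
    static Unit Simulate(Unit unit, int damage) {
        unit.TakeDamage(damage);
        return unit;
    }

private:
    std::vector<std::unique_ptr<Unit>> Units;
};
```

Because `Simulate` works on copies, the same code path serves both player-facing predictions and the AI's search over possible moves.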

More to Come

This was an introductory overview to how I am structuring my game in Unreal. There’s a lot more to cover, and I’ll be writing about it in future posts, specifically:

  • What is the right division between C++ and Blueprints
  • How to manage input: mouse, keyboard and gamepad
  • Ways of storing configuration data
  • Special considerations for making a multiplayer game