Initial actor replication optimization

We have been doing some networking optimization for our application and found that, according to the network profiler, spawning a single blueprint with a static mesh requires ~200 properties to be replicated. In levels with lots of spawned actors (we have dynamic level generation), clients can disconnect immediately after joining because the initial replication of all those properties overflows the reliable call buffer.

Is it possible to limit what is being sent on that initial call? 200 properties seems like a lot more than a barrel should need.

Further research shows that I should be able to stop individual properties from replicating using the DOREPLIFETIME_ACTIVE_OVERRIDE macro. Now I just need to find a way to enumerate all the properties and toggle them on and off, so I don't have to add 200+ lines of code by hand to see which ones I need and which I don't.
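For reference, this is roughly how the macro is used; AExampleBarrel, ExampleValue, and bShouldSendExampleValue are placeholder names for illustration, not code from our project:

#include "Net/UnrealNetwork.h" // DOREPLIFETIME_* macros

void AExampleBarrel::PreReplication(IRepChangedPropertyTracker& ChangedPropertyTracker)
{
    Super::PreReplication(ChangedPropertyTracker);

    // ExampleValue must already be registered as replicated in GetLifetimeReplicatedProps;
    // this only toggles whether it is considered for replication this frame.
    DOREPLIFETIME_ACTIVE_OVERRIDE(AExampleBarrel, ExampleValue, bShouldSendExampleValue);
}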

Alright, it seems I was reading that area of the profiler incorrectly. An actor with 8 static mesh components is still sending over 200 property updates, though.

I’ve decided to take this one step at a time. ComponentTagOverrides is used to replicate the component tags (since the ComponentTags array is not replicated itself), so I use PreReplication to copy ComponentTags into it and then replicate that:

// In URTileComponent::PreReplication
Super::PreReplication(ChangedPropertyTracker);
ComponentTagOverrides = ComponentTags; // mirror the non-replicated ComponentTags into the replicated array

Then I’m using conditional replication on it so that it is only sent in the initial bunch:

// In URTileComponent::GetLifetimeReplicatedProps
Super::GetLifetimeReplicatedProps(OutLifetimeProps);
DOREPLIFETIME_CONDITION(URTileComponent, ComponentTagOverrides, COND_InitialOnly);
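For context, the header side of this looks roughly like the following (the UPROPERTY specifier is my shorthand; the override declarations are the standard engine signatures):

// URTileComponent.h (sketch)
UPROPERTY(Replicated)
TArray<FName> ComponentTagOverrides; // mirror of the non-replicated ComponentTags array

virtual void PreReplication(IRepChangedPropertyTracker& ChangedPropertyTracker) override;
virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override;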

What I expected was 8 tiles, each with 8 static mesh properties and 8 array values replicated. Instead I get 48 update calls. This is going to take some more digging. Any help would be much appreciated.

Still looking into this, and it seems there is a bottleneck somewhere before the data is sent. We have run some mass tests, spawning 3,000 cube actors. These take a long time to replicate (~6 seconds) but contain very little data; the networking overview reports only 73.5 KB of outgoing bandwidth.
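For reference, the mass test is just a server-side spawn loop along these lines (the game mode and cube actor class names are placeholders for our test classes):

// Hypothetical stress test: the server spawns 3,000 copies of a simple replicated cube actor.
void AStressTestGameMode::SpawnTestCubes()
{
    check(HasAuthority()); // only the server spawns; clients should receive the cubes via replication

    for (int32 Index = 0; Index < 3000; ++Index)
    {
        // Lay the cubes out in a grid so they are easy to eyeball on the client.
        const FVector Location(200.f * (Index % 55), 200.f * (Index / 55), 50.f);
        GetWorld()->SpawnActor<ACubeTestActor>(ACubeTestActor::StaticClass(), Location, FRotator::ZeroRotator);
    }
}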


73.5 KB should download very quickly, which leads me to believe the problem is before the sending stage. We tried making all the actors always relevant to rule out a bottleneck in the relevancy/consideration step, but it didn’t take any longer. We have since tried the 4.20 replication graph, but that takes even longer.
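The "always relevant" test was just forcing relevancy on the tile actor itself, something like this (ARTile is the name I'm using for the tile actor here):

// Sketch of the relevancy test: every tile skips the per-connection relevancy check.
ARTile::ARTile()
{
    bReplicates = true;
    bAlwaysRelevant = true; // considered for every connection regardless of distance or ownership
}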

Right now we are really trying to find where the bottleneck is and how we can make some changes to avoid it.

I’ve managed to track this issue down to line 3878 (or maybe 3872) of NetworkDriver.cpp. It seems my ActorChannel is becoming saturated. The hard part to understand is why the channel is considered saturated.

The log at that point of replication is:

[538]LogNetTraffic:  Maybe Replicate RTile_671
[538]LogNetTraffic: Created channel 11 of type 2
[538]LogNetTraffic: Creating Replicator for RTile_671
[538]LogNetTraffic: - Replicate RTile_671. 3801088
[538]LogNetTraffic: Replicate RTile_671, bNetInitial: 1, bNetOwner: 0
[538]LogNetTraffic: Creating Replicator for StaticMeshComponent0
[538]LogNetTraffic: Verbose: UNetConnection::SendRawBunch. ChIndex: 11. Bits: 2209. PacketId: 11137
[538]LogNetTraffic: Verbose: UNetConnection::SendRawBunch. ChIndex: 11. Bits: 359. PacketId: 11137

[538]LogNetTraffic:  Maybe Replicate RTile_127
[538]LogNetTraffic: Created channel 12 of type 2
[538]LogNetTraffic: Creating Replicator for RTile_127
[538]LogNetTraffic: - Replicate RTile_127. 3801088
[538]LogNetTraffic: Replicate RTile_127, bNetInitial: 1, bNetOwner: 0
[538]LogNetTraffic: Creating Replicator for StaticMeshComponent0
[538]LogNetTraffic: Verbose: UNetConnection::SendRawBunch. ChIndex: 12. Bits: 2209. PacketId: 11138
[538]LogNetTraffic: Verbose: UNetConnection::SendRawBunch. ChIndex: 12. Bits: 362. PacketId: 11138
[538]LogNetTraffic: Verbose: Saturated. RTile_127
[538]LogNetTraffic:  Saturated. Mark RTile_127 NetUpdateTime to be checked for next tick
...

(Some lines have been omitted to keep the comment short.)

It seems that after sending one bunch, the rest are marked as saturated, so the connection won’t send another until the next tick, even though these are tiny bunches. It ends up spending more time considering actors and marking them saturated than it would spend just sending the data: it takes 0.002 seconds to replicate one actor and 0.001 seconds to mark 3-4 actors as saturated. Any idea what this saturation means and how to prevent it?
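My current guess is that this is the per-connection send-rate cap (the connection only allows so many queued bits per tick based on its configured net speed). If that turns out to be it, the related knobs live in DefaultEngine.ini; the values below are only examples to experiment with, not recommended settings:

; DefaultEngine.ini (example values, assuming the bottleneck is the per-connection rate cap)
[/Script/Engine.Player]
ConfiguredInternetSpeed=100000
ConfiguredLanSpeed=100000

[/Script/OnlineSubsystemUtils.IpNetDriver]
MaxClientRate=100000
MaxInternetClientRate=100000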

Hey, did you ever fix this? It looks like I’m struggling with the same issue. I have hundreds of thousands of actors; the total data transferred is pretty small, ~20 KB per second, but actors appear on the client one by one, very slowly.

Are you spawning these actors on the Server?