Movement Modifiers and ServerMove

I have a feature where a player’s movement speed can be modified. When the modifications are applied and removed, I get a client correction from the server. Currently the circumstances that trigger the buff are detected by both the client and the server, but the server is running at a different framerate from the client, so the modification applies differently on each machine and the client and server disagree about how far the client has moved.

I realize that if I just want a single flag, I can add it to the CompressedFlags for ServerMove and it will arrive along with the move; however, I have more than one modification, and they aren’t so binary. What would you suggest is the best way to synchronize the changes in movement state on client and server?

Are you sure this is caused by differences in framerate? I would confirm by clamping the framerates to the same value on client and server for verification (“t.maxfps 30”, etc.). If this really is the case, you could try using more precise substepping settings in movement (perhaps conditionally). This would surprise me, though; we typically don’t see this with the fairly large acceleration values that come in from normal player movement.

I rather suspect a timing difference between when the server and client are triggering the changes. In some of our samples and internal projects, having the client send an RPC when it begins sprinting is still enough to keep client and server in sync (though you lose out on the saved move replay robustness you get from packing it into the compressed move).

When you say the change is triggered by both client and server, can you give more details? Is it something locally simulated or triggered on the client (like collision with a trigger volume, or activation of a client ability), or is it something triggered on the server and replicated to the client? In the latter case, there would be a delay before the client receives the boost, so the server would likely be running ahead of the client at that point. Normally the server is slightly behind the client due to the latency of client input reaching the server, and having the server perform an action before the client does would throw that timing out of sync. It seems like you might want to delay applying the buff on the server until the client timestamp catches up, or until the client has received the buff and starts including it in its movement.

The change is usually triggered by player input on the client and then replicated to the server with an RPC. There are several such changes: some are calculated in parallel on client and server based on pawn position, and some are triggered client-side by players.

Note I have a low server framerate rather than a high one, so clamping wouldn’t help 🙂

So a client might move once, start a sprint, then move again, and these three RPCs (server move, change sprint, server move) are processed at the same time.

It’s also possible that this is being caused by move combining, which we do have active; almost certainly we’d need to have buffs trigger force-no-combine.

Are you testing and seeing this correction-heavy behavior under significant latency/packet loss, or do you get this with no simulated latency? I’d like to get an idea of how severe your issue is (large corrections? tiny? do you have extremely strict position correction in your GameNetworkManager settings?). How much this is influenced by latency/packet loss vs. base logic/framerate differences will change the scope of the investigation necessary.

You should definitely investigate move combining. On FSavedMove_Character there’s a bForceNoCombine bool. I would suggest project-specific code setting that to true whenever you’re in a “movement altered” state. If you don’t do this, moves can end up getting combined where you had one character movement speed originally for the first move and a different speed for the second, and now it’ll get resimulated as having the most recent speed for both time steps. This would affect both “start of sprint” and “end of sprint” moves.
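As a sketch of that combine check in plain C++ rather than engine code: bForceNoCombine mirrors the real FSavedMove_Character flag, but the struct and the ModifierKey field here are illustrative stand-ins for whatever state your project snapshots per saved move.

```cpp
#include <cstdint>

// Hypothetical per-saved-move snapshot; in Unreal this would live on your
// FSavedMove_Character subclass. ModifierKey is an illustrative field that
// changes whenever any speed/gravity modifier toggles.
struct FSavedMoveSketch
{
    uint32_t ModifierKey = 0;
    bool bForceNoCombine = false;
};

// Two moves may only be combined if neither forbids it and no modifier
// changed between them; otherwise replay would use one speed for both steps.
bool CanCombineMoves(const FSavedMoveSketch& NewMove, const FSavedMoveSketch& OldMove)
{
    if (NewMove.bForceNoCombine || OldMove.bForceNoCombine)
    {
        return false;
    }
    return NewMove.ModifierKey == OldMove.ModifierKey;
}
```

In the engine this decision belongs in your FSavedMove_Character::CanCombineWith override (or is forced off by setting bForceNoCombine when capturing the move).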

At a high level, if this is really only visible during latency/packet loss/out-of-order packets, you’re running into a fundamental problem of logically coupling separate streams from client to server. Your “ActivateSprint” RPC has no dependency on or connection to the “ServerMove #123456 time delta 0.2 seconds” RPC. Those can arrive out of order. Even if they arrive in order, you can run into issues. If the ServerMove RPC sent before you activated sprint got dropped or hasn’t arrived yet, the server will simulate the full time from the last ServerMove received to the current ServerMove, meaning that if you swapped to “sprint speed” between those, the server will simulate two moves’ worth of speed boosting, putting the player ahead of where they thought they were (and farther than they should be allowed).

The easiest way to deal with this and related issues is to design the game in a way where we know what’s going to happen to player movement ahead of time. If you can add even a 0.2 second delay to sprint, you can get the communication of “in 200 milliseconds you’ll be moving faster” out of the way, and now the client, which is always ahead of the server in time, can properly predict it.

Failing a solution that avoids the problem in the first place, and not knowing exactly where your problem lies, I’ll just offer up some miscellaneous points:

  • Paragon has abilities and sprints and fast travel modes etc. Building into the design tiny/near-imperceptible delays saves a lot of headache and mis-predictions/corrections. This isn’t applicable to all games and situations. We are very lenient with our corrections to cover most of the remaining issues. The danger is opening yourself up to cheating, but right now we couple leniency with precise analytics so that even if we allow you to fudge things a tiny bit, we remember and can process the data to find players who are doing that more than usual.

  • If you don’t want to be lenient all the time, you could include logic on the server correction functionality to at least be a little forgiving around the starts and stops of different modes/changes.

  • Instead of tracking sprint mode or whatever you’re working on as bools (if you are), store these as client movement time ranges. So instead of a bIsSprinting, it would be a range like “sprinting is active starting at client timestamp 17.23”. The benefit is that when you DO get server corrections and those saved moves end up being replayed, you have a history in your moves/CharacterMovement state of when sprinting was enabled, so you can resimulate them with the right modifications.

  • Even with the lenient corrections/client trust we have on Paragon, we can still run into significant issues when very sudden/significant movement abilities are coupled with significant latency/packet loss. Player A activates a dash ability that sends them 300 units up in a single tick. We send an “ability X activated” RPC, and then on the next tick the ServerMove RPC goes to the server as well. If they arrive in reverse order, the server will simulate the ServerMove and conclude “there’s no way you would have gone straight up 300 units under normal movement conditions, CORRECTION”, THEN receive the dash ability activation RPC, and on the next ServerMove it receives it’ll simulate the dash and cause ANOTHER correction. So the client sees two significant corrections. This is why I called it a fundamental problem: you have certain RPCs being sent to the server (moves) that only make sense assuming other RPCs (ability activation) have already occurred. One way to alleviate this is to encode some data in the ServerMove RPC - say a “movement sync key” - every alteration to movement capabilities made from outside the movement code itself changes a counter/key value on the movement component, and that value is sent with every ServerMove RPC. When the server receives a ServerMove RPC that is ahead of what it knows about (abilities/modes activating), it delays processing that ServerMove for up to a few frames to give a good chance of receiving the other related info. Basically, this provides a time window on the server to receive all the client commands before the server processes them and considers sending corrections back.
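The time-range idea from the bullets above could be sketched like this in plain C++ (standing in for what would live on your CharacterMovementComponent and saved moves; all names here are illustrative, not engine API):

```cpp
#include <vector>

// One activation window for a modifier, keyed on client movement timestamps.
// EndTime < 0 means "still active".
struct FModifierWindow
{
    float StartTime;
    float EndTime;
};

struct FSprintHistory
{
    std::vector<FModifierWindow> Windows;

    void Activate(float ClientTime)
    {
        Windows.push_back(FModifierWindow{ClientTime, -1.f});
    }

    void Deactivate(float ClientTime)
    {
        if (!Windows.empty() && Windows.back().EndTime < 0.f)
        {
            Windows.back().EndTime = ClientTime;
        }
    }

    // Used both for live simulation and when replaying saved moves after a
    // correction, so the replay applies the modifier over the right interval.
    bool IsSprintingAt(float ClientTime) const
    {
        for (const FModifierWindow& W : Windows)
        {
            if (ClientTime >= W.StartTime && (W.EndTime < 0.f || ClientTime < W.EndTime))
            {
                return true;
            }
        }
        return false;
    }
}; 
```

The key property is that the query is a pure function of the timestamp, so a resimulated move at timestamp 18.0 gets the same answer no matter when or where it runs.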
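A rough sketch of the “movement sync key” delay window described in the last bullet, in plain C++ with illustrative names (there is no engine type like this; it's one way such a gate could look):

```cpp
#include <cstdint>

enum class EMoveDecision { Process, Defer, ProcessAnyway };

// Server-side gate: each ServerMove carries the client's current sync key,
// which is bumped whenever an out-of-band RPC (ability activation, etc.)
// alters movement capabilities. If a move arrives with a key the server
// hasn't seen yet, defer it briefly instead of simulating and correcting.
struct FMoveGate
{
    uint32_t ServerSyncKey = 0;  // bumped when the related ability RPC arrives
    int MaxDeferFrames = 3;      // how long to wait for that RPC

    EMoveDecision Decide(uint32_t MoveSyncKey, int FramesDeferred) const
    {
        if (MoveSyncKey <= ServerSyncKey)
        {
            return EMoveDecision::Process;     // all related RPCs already arrived
        }
        if (FramesDeferred < MaxDeferFrames)
        {
            return EMoveDecision::Defer;       // give the ability RPC a chance to land
        }
        return EMoveDecision::ProcessAnyway;   // don't stall movement forever
    }
};
```

The cap on deferral matters: a cheating or buggy client could otherwise send ever-increasing keys and stall its own server-side simulation indefinitely.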

We have 2 situations where I’m seeing significant corrections:

The first is simulated modifiers. These are modifiers applied by both the client and the server independently. For example, we might increase the gravity applied to an object when it’s travelling downwards to alter the trajectory of a fall. In this case we detect the direction of the fall independently on client and server. This causes issues when the framerate is low or during move combining. I’ve loosened the position correction threshold from ~1 cm to 50 cm and turned on “accept client location”, and this has cleaned things up.

We’re mostly seeing very heavy corrections under heavy latency and low server framerate - your example of the double correction might be what is happening here. I wonder if, since we’re accepting the client location but at a different cadence than the client due to framerate and move combining, we send a correction that moves the client, which is then sent back to the server and causes another correction - making the character shudder.

I’ve looped in a couple of people on the game teams for suggestions. Lots of approaches are possible, but we’re trying to narrow them down to a simple few. Let us know if you turn up anything in the meantime.

For the second case, does that involve modifiers as well? I would definitely test turning off move combining (or add logging/breakpoints to see if it’s happening), and if at all possible get a beefy machine or turn off your server tick clamping to at least rule in/out framerate as the reason you’re seeing this. Also, in case you’re not aware, p.NetCorrections 1 will print/display info on the severity of the corrections, p.VisualizeMovement 1 will show a rough in-world representation of acceleration/velocity that can sometimes help pinpoint differences for simulated clients, and you can enable or hook up additional logging on the correction path.

Something critical to realize that isn’t quite “in your face” (and that may or may not have anything to do with the issues you’re seeing, but I wanted to make clear) is how different the “ticks” are between clients and the server for CharacterMovementComponent. On clients, the component gets reliable TickComponent() calls where it does its logic. You know that every tick you’ll be consuming input, simulating, and sending a ServerMove RPC saying “I ticked this long with this acceleration (input) and think I ended up at location X”.

On the server (authority) for a character, they only really do tick/simulation logic when they receive ServerMoves. When receiving a ServerMove with the latest ClientTimestamp, the server says “okay, the ClientTimestamp on this latest move is 0.4 seconds ahead of the last one I received, so I’m going to simulate you from where you’re at now to 0.4 seconds in the future, compare where you end up for me with where you think you are, and then correct if needed”. It’s designed this way so that it naturally handles dropped/out-of-order packets.

A client ticking at 10 Hz, as an easy example, will send ServerMove RPCs with timestamps 11.0, 11.1, 11.2, 11.3. The server receives 11.0 and then 11.1, and since there’s 0.1 seconds between them, it’ll tick for 0.1 seconds. This is great because you’re very likely to get the same result. Now say 11.2 gets dropped and the server receives 11.3. Now it’ll do a 0.2 second tick. If the client had one gravity value on the first of those ticks and a different one on the second, and your gravity-modifying logic isn’t built into saved moves or isn’t timestamp-based, you’ll be doing a 0.2 second tick with the wrong gravity value for half of it, and with imperfect acceleration as well. This usually isn’t significant enough to make a difference, but with heavy latency/packet loss/re-ordering the server can end up diverging significantly.

I also believe that by default we don’t alter substep timing (how long individual “ticks” are for PhysWalking/PhysFalling loops) based on that total tick time, so if you have very fine requirements for “something will be different for 0.3 seconds only” you’ll want to look into making sure those line up.

Again, the strategy there is to structure your movement modifiers in a way that is friendly to this reality - instead of having a GetGravity() function based on a CharacterMovementComponent property (which can fall apart during move replay on clients, or when the server’s receipt of ServerMoves is all over the place due to bad network conditions), have a GetGravity(float TimeStamp) function that both client and server movement code can call to get a highly accurate simulation.
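A minimal sketch of that timestamp-keyed lookup, in plain C++ with illustrative names (the real thing would live on or alongside your CharacterMovementComponent):

```cpp
#include <vector>

// A recorded change: from TimeStamp onward, this gravity scale applies.
struct FGravityChange
{
    float TimeStamp;
    float GravityScale;
};

struct FGravityTrack
{
    std::vector<FGravityChange> Changes; // assumed sorted by TimeStamp

    // GetGravity(TimeStamp): returns the scale active at that client timestamp.
    // Because it depends only on the timestamp and the recorded history, the
    // client's live tick, the client's replay after a correction, and the
    // server's combined ticks all agree on which gravity covers which interval.
    float GetGravity(float TimeStamp, float DefaultScale = 1.f) const
    {
        float Scale = DefaultScale;
        for (const FGravityChange& C : Changes)
        {
            if (C.TimeStamp <= TimeStamp)
            {
                Scale = C.GravityScale;
            }
            else
            {
                break;
            }
        }
        return Scale;
    }
};
```

For a combined 0.2 second server tick you would then substep it and query GetGravity per substep, rather than sampling once at the start.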

This might all have nothing to do with the issue you’re seeing, but I did want to clarify it in case you have your modification code not being aware of some of it.

I would definitely print out the different tick times for the physics moves on clients and the server so you can see if there is divergence at that level. Even if the server is running at 10 FPS and the client at 60 FPS, I believe the server will still tick the character at whatever rate it receives ServerMoves (so 60 FPS) - though I haven’t tested this recently.