AI Perception Working Memory

Hello!

We have been diving into the new AI features and have a couple questions regarding perception.

What is the preferred method for retrieving perceived information in a scalable way during runtime?

Is there any way to query into an agent’s perceived working memory to get specific facts, rather than iterating over all perceived actors?

Currently, it looks as though adding a perception component to an AIController lets us retrieve the currently perceived actors, but only as an array of base actors. So when it comes to actually using this perceived information to govern AI decisions, it can easily become difficult to work with as the AI becomes more complex.

The pattern that I keep coming across from other users usually involves iterating over the array of perceived actors and casting each one to whatever we’re looking for until a cast succeeds. But this may create quite a bit of overhead as we start to perceive more and more different types of things; in particular, retrieving all objects of a given type requires iterating over all N perceived actors of every type, casting each one while we build a collection of what we’re really hoping to find. Is this the intended use? If so, we’re looking at exploring two options moving forward.
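For context, the iterate-and-cast pattern I mean looks roughly like this. This is a simplified standalone sketch using plain C++ (`dynamic_cast` standing in for UE's `Cast<>`, and hypothetical `Actor`/`AmmoPickup` types standing in for real actor classes), not actual engine API:

```cpp
#include <cassert>
#include <vector>

// Stand-in actor hierarchy (hypothetical; real code would use AActor subclasses).
struct Actor { virtual ~Actor() = default; };
struct Character : Actor {};
struct AmmoPickup : Actor {};

// The pattern in question: walk every perceived actor and attempt a cast,
// keeping only the type we are after. This is O(N) over all perceived
// actors of all types, every time we ask.
std::vector<AmmoPickup*> FindPerceivedAmmo(const std::vector<Actor*>& Perceived)
{
    std::vector<AmmoPickup*> Result;
    for (Actor* A : Perceived)
    {
        if (auto* Ammo = dynamic_cast<AmmoPickup*>(A))
        {
            Result.push_back(Ammo);
        }
    }
    return Result;
}
```

The cost is paid per query and scales with the total number of perceived actors, not the number of matches, which is exactly the overhead described above.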


1) Create our own working memory wrapper around perception.
At the moment an actor is sensed, we can ask all our questions about it right then and there, then store that information in a different way, using tags and enumerations to classify facts of a certain type. We would then set up a larger set of data structures to store them, such that we can ask for a given fact and get it back quickly, without the additional iteration and casting overhead each time.
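A minimal sketch of what I have in mind, in plain standalone C++ (the `EFactType` enum, `Fact` struct, and `WorkingMemory` class are all hypothetical names of ours, not engine types; in practice `RecordFact` would be called from the perception-updated delegate):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical fact categories; real code might use gameplay tags instead.
enum class EFactType : uint8_t { Enemy, Ammo, Cover };

struct Fact
{
    void* Source = nullptr;    // the perceived actor (opaque in this sketch)
    float LastSensedTime = 0.f;
};

// Minimal working-memory store: classify each stimulus once, when it
// arrives, then answer "give me all facts of type X" without any
// iteration over unrelated actors and without casting.
class WorkingMemory
{
public:
    // Called once per stimulus, at the moment the actor is sensed.
    void RecordFact(EFactType Type, void* Source, float Time)
    {
        FactsByType[Type].push_back({Source, Time});
    }

    // Lookup of the matching bucket only; no per-actor casts.
    const std::vector<Fact>& GetFacts(EFactType Type) const
    {
        static const std::vector<Fact> Empty;
        auto It = FactsByType.find(Type);
        return It != FactsByType.end() ? It->second : Empty;
    }

private:
    std::map<EFactType, std::vector<Fact>> FactsByType;
};
```

The point of the design is to pay the classification cost once per stimulus rather than once per query.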


2) Utilize multiple perception components
It looks like we can put as many perception components as we want on an AIController, even of the same type. So for a simple example: What if I want my character to “see” other characters and also “see” ammo, but without storing them in the same array, so we don’t have to sift through it all each time we look for either type?

Well, it looks like I can just derive from the AISense_Sight class and make separate AISense_Sight_Character and AISense_Sight_Ammo senses, then set up two separate perception components on the controller, each configured to sense one of them. Now I can trust that if I access the component that senses only ammo, the actors in there must be only the ammo actors that I can see.

This seems a bit more usable, but it will also balloon quickly if we want to see, hear, and touch many different types. It is very likely that any given AI agent would end up with 10+ perception components, just for the purpose of separating the perceived-actor containers. My concern is mostly whether we’d be abusing the system by taking this approach, and whether it would stay efficient behind the scenes.


We’re currently leaning toward a combination of the two options. But first, I wanted to ask whether there are already systems in place that achieve this. Perhaps I’m just missing some of the features, since it’s all so new. I’d also like to confirm that we wouldn’t be taking an approach that is inefficient with regard to the underlying implementation of the AI Perception system.

Thanks for any and all help. Cheers!

Hey Joe,

Our AI perception system is a generic solution and as such has noticeable shortcomings when one wants to use it extensively without modifications. I’d suggest approach 1), but it might require you to make some of the AIPerceptionComponent’s functions virtual (if you do, let me know which ones; we might want to make them virtual out of the box). The multiple-component solution would work as well, since the perception system uses the components, not the component owners, as “listeners”, but I agree it could get messy quickly.

Let me know if you run into issues with whichever approach you pick.

Cheers,

–mieszko

Thanks Mieszko!

Since my initial post, I’ve been going through the source code to better understand how the perception system currently works. I believe we should be able to make a nice Working Memory system utilizing what is currently implemented.


There is something I noticed that may be a bug, or at least is worth bringing to your attention.

There is a blueprint function called GetPerceivedActors which is one of the few gateways from blueprints into the current state of perception. However, I noticed an inconsistency regarding the age of a perceived actor via a given sense.

If I call GetPerceivedActors but do not pass a specific SenseToUse, then age appears to be respected: I get back all actors that are perceived by any sense config. However, if I call the same function with a specific sight sense, or a derived sight sense, it only returns the actors that are perceived at that exact moment. Even if I give the sense config an age of zero (which should cause the agent to remember that actor forever once perceived the first time), the function will still not return the perceived actor when I pass that specific sense.

Upon reviewing the code for that function, it makes sense why this is happening.

Due to short-circuit evaluation, if no sense is given, the whole actor array is copied to the out array. If a sense is given, then IsSenseRegistered(SenseID) is called for each actor in the container. That function in turn performs a set of checks to make sure the actor is perceived by that specific sense, but one call in particular doesn’t respect age: LastSensedStimuli[Sense].WasSuccessfullySensed().

With regard to sight, that function only returns true if the actor is currently sensed, not if the actor was sensed previously and hasn’t yet expired from memory due to its age. Perhaps this is intended behavior, but I would guess that it’s not; if it were, the age value in the sense config wouldn’t have much meaning. Either way, if it is intended, there may need to be an additional blueprint function that performs the same check but also considers actors that were once perceived and are still remembered.
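To illustrate the distinction, here is a simplified standalone sketch of the two checks. The `Stimulus` struct and both function names are illustrative stand-ins of mine (the real type would be the engine's per-sense stimulus record), not engine code:

```cpp
#include <cassert>

// Simplified stand-in for a per-sense stimulus record; fields and names
// here are hypothetical, chosen only to illustrate the two checks.
struct Stimulus
{
    bool  bEverSensed = false;      // has this sense ever succeeded for the actor?
    bool  bCurrentlySensed = false; // roughly what WasSuccessfullySensed() reports for sight
    float Age = 0.f;                // seconds since the last successful sensing
};

// The check the current code effectively performs: true only while the
// actor is actively sensed right now.
bool IsCurrentlyPerceived(const Stimulus& S)
{
    return S.bCurrentlySensed;
}

// The age-aware check being asked for: also true for actors sensed
// earlier that have not yet expired. A MaxAge of zero is treated as
// "remember forever", matching the sense-config behavior described above.
bool IsStillRemembered(const Stimulus& S, float MaxAge)
{
    if (!S.bEverSensed)
    {
        return false;
    }
    return S.bCurrentlySensed || MaxAge <= 0.f || S.Age < MaxAge;
}
```

With only the first check in the query path, an actor that left line of sight drops out of the results immediately, regardless of the configured age, which matches the behavior observed above.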

Please let me know if you need any further clarification on this.


Anyway, keep up the awesome work! And I’ll be sure to keep you posted on the Working Memory system as we develop it. Cheers!

The behavior when you pass in a specific sense is indeed intended, but the naming could use a readability touch :slight_smile: And it should be consistent with the “no sense specified” path - that one is a bug. So I’ll deprecate GetPerceivedActors and introduce two new functions, GetCurrentlyPerceivedActors and GetKnownPerceivedActors (not too happy with the latter name; if you have naming suggestions I’d love to hear them! :D)

Thanks for the quick reply!

I can definitely see how both functions will prove useful. As for naming… I can’t think of anything that feels perfect for the purpose at the moment. For the sake of brainstorming, here are a few suggestions for the latter that might spark something:

GetRememberedActors

GetPreviouslyPerceivedActors

RecallPerceivedActors

GetPerceivedActorsThatAreNotCurrentlyPerceivedButHaveNotYetExpiredDueToAge

OK, ignore that last one. :wink: