Hello!
We have been diving into the new AI features and have a couple questions regarding perception.
What is the preferred method for retrieving perceived information in a scalable way during runtime?
Is there any way to query into an agent’s perceived working memory to get specific facts, rather than iterating over all perceived actors?
Currently, it looks as though adding a perception component to an AIController lets us retrieve the currently perceived actors, but only as an array of base AActor pointers. So when it comes to actually using that perceived information to govern AI decisions, the data quickly becomes difficult to work with as the AI grows more complex.
The pattern I keep coming across from other users involves iterating over the array of perceived actors and casting each one to whatever we're looking for until a cast succeeds. But this can create quite a bit of overhead as we start to perceive more and more different types of things; in particular, if we want all perceived objects of a given type, it means a full pass over every perceived actor of every type, casting each one while we build up the collection we're actually after. Is this the intended use? If so, we're looking at exploring two options moving forward.
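For concreteness, here is roughly the pattern I mean (a minimal sketch; AAmmoPickup stands in for one of our own actor classes and is not part of the engine):

```cpp
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISense_Sight.h"
#include "AmmoPickup.h" // our own (hypothetical) actor class

TArray<AAmmoPickup*> FindPerceivedAmmo(const UAIPerceptionComponent& Perception)
{
    TArray<AActor*> PerceivedActors;
    Perception.GetCurrentlyPerceivedActors(UAISense_Sight::StaticClass(), PerceivedActors);

    // Every perceived actor of every type gets visited and cast here,
    // even though we only care about ammo pickups.
    TArray<AAmmoPickup*> Ammo;
    for (AActor* Actor : PerceivedActors)
    {
        if (AAmmoPickup* Pickup = Cast<AAmmoPickup>(Actor))
        {
            Ammo.Add(Pickup);
        }
    }
    return Ammo;
}
```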
1) Create our own working memory wrapper around perception.
At the moment an actor is sensed, we can ask all our questions about it right then and there, then store that information in a different form, using tags and enumerations to classify facts of a given type. We would then set up a larger set of data structures to hold them, such that we can ask for a given fact and get it back quickly, without the additional iteration and casting overhead each time.
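A minimal sketch of what we have in mind, assuming we classify actors with actor tags at sense time; the fact keys, the PerceivedFacts map, and the "Ammo"/"Character" tags are all our own placeholders, not engine API:

```cpp
// --- MyAIController.h (trimmed: includes and .generated.h omitted) ---
UCLASS()
class AMyAIController : public AAIController
{
    GENERATED_BODY()

public:
    AMyAIController();

protected:
    virtual void BeginPlay() override;

    UFUNCTION()
    void HandlePerceptionUpdated(AActor* Actor, FAIStimulus Stimulus);

    // "Working memory": perceived actors bucketed by a fact key assigned at sense time.
    TMap<FName, TSet<TWeakObjectPtr<AActor>>> PerceivedFacts;
};

// --- MyAIController.cpp ---
AMyAIController::AMyAIController()
{
    // Single perception component; sense configs omitted here.
    PerceptionComponent = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("Perception"));
}

void AMyAIController::BeginPlay()
{
    Super::BeginPlay();
    PerceptionComponent->OnTargetPerceptionUpdated.AddDynamic(this, &AMyAIController::HandlePerceptionUpdated);
}

void AMyAIController::HandlePerceptionUpdated(AActor* Actor, FAIStimulus Stimulus)
{
    // Classify once, the moment the actor is sensed, instead of casting on every query.
    const FName FactKey = Actor->ActorHasTag(TEXT("Ammo")) ? FName("Ammo") : FName("Character");

    if (Stimulus.WasSuccessfullySensed())
    {
        PerceivedFacts.FindOrAdd(FactKey).Add(Actor);
    }
    else if (TSet<TWeakObjectPtr<AActor>>* Bucket = PerceivedFacts.Find(FactKey))
    {
        Bucket->Remove(Actor);
    }
}
```

With that in place, "what ammo do I currently know about?" becomes a single map lookup rather than an iterate-and-cast pass.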
2) Utilize multiple perception components
It looks like we can put as many perception components as we want on an AIController, even of the same type. So for a simple example: What if I want my character to “see” other characters and also “see” ammo, but without storing them in the same array, so we don’t have to sift through it all each time we look for either type?
Well, it looks like I can derive from the AISense_Sight class and make separate AISense_Sight_Character and AISense_Sight_Ammo senses, then set up two separate perception components on the controller, each configured to sense just one of them. Now I can trust that if I access the component that senses only ammo, everything it returns will be an ammo actor that I can see.
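Roughly what we're picturing, as a sketch only: the sight subclasses, component names, and GatherVisibleAmmo helper are all our own, and we're assuming the Implementation field on UAISenseConfig_Sight is the intended way to point a config at a custom sight class.

```cpp
// --- Sense subclasses (trimmed: includes and .generated.h omitted) ---
UCLASS()
class UAISense_Sight_Character : public UAISense_Sight
{
    GENERATED_BODY()
};

UCLASS()
class UAISense_Sight_Ammo : public UAISense_Sight
{
    GENERATED_BODY()
};

// --- In the controller: two perception components, one sight config each ---
// CharacterPerception / AmmoPerception are UAIPerceptionComponent* members declared in the header.
AMySplitPerceptionController::AMySplitPerceptionController()
{
    CharacterPerception = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("CharacterPerception"));
    AmmoPerception      = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("AmmoPerception"));

    UAISenseConfig_Sight* CharacterSight = CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("CharacterSight"));
    CharacterSight->Implementation = UAISense_Sight_Character::StaticClass();
    CharacterPerception->ConfigureSense(*CharacterSight);
    CharacterPerception->SetDominantSense(UAISense_Sight_Character::StaticClass());

    UAISenseConfig_Sight* AmmoSight = CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("AmmoSight"));
    AmmoSight->Implementation = UAISense_Sight_Ammo::StaticClass();
    AmmoPerception->ConfigureSense(*AmmoSight);
    AmmoPerception->SetDominantSense(UAISense_Sight_Ammo::StaticClass());
}

// Everything this returns should already be an ammo actor; no casting sweep needed.
void AMySplitPerceptionController::GatherVisibleAmmo(TArray<AActor*>& OutAmmo) const
{
    AmmoPerception->GetCurrentlyPerceivedActors(UAISense_Sight_Ammo::StaticClass(), OutAmmo);
}
```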
This seems a bit more usable, but it will also balloon quite quickly if we want to see, hear, and touch many different types; it's very likely that any given AI agent would end up with 10+ perception components, just for the purpose of separating the perceived actor containers. My concern there is mostly whether or not we'd be abusing the system by taking this approach, and whether it would remain efficient behind the scenes.
We’re currently leaning toward a combination of the two options. But first, I wanted to ask whether there are already systems in place that achieve this; perhaps I’m just missing some of the features, since the system is so new. I’d also like to confirm that we wouldn’t be taking an inefficient approach with regard to the underlying implementation of the AI Perception system.
Thanks for any and all help. Cheers!