Trying to understand UE4 AI

If someone could “confirm/deny/explain if wrong” these concepts I’ll be thankful. I got the AI videos yesterday to begin implementing my game AI, and before “using” the system I need to reason about it and understand “what’s what”, or, as your AI wording puts it, the “Context” of each thing and the “Sequencer” to use this information.

While your documentation does cover the “usage” of the system on the Editor/Tool side, to build on top of it I need to translate your concepts into “basic concepts I could explain to a child”, so after theorizing a bit I would like to check whether “I got it” and ask some questions.

Please remember, all these points are “Questions”; you can read them all with an implied “Is this…?”. :wink:

A) The Beginning… AI PERCEPTION

Basically the eyes and ears of our AI; these are our AI’s senses and stimuli. For example, the AI is just seeing a lot of shapes, shades and colors and hearing a lot of sounds, but not “reasoning” about anything at all.

QUESTIONS ABOUT THE TOPIC:

1) Why is it placed on the AIController and not on the Pawn or any other “incarnation” Actor?

Considering the AIController as a “Soul/Consciousness”, the ears and eyes should be on the body, not on the “immaterial” spirit. While for some kinds of games an Overmind could “initially” be a problem to build, this Overmind could maybe “filter” gameplay/strategic info to be processed by each unit, while giving all the units the “same reasoning about the world”.

E.g. everyone knows the elevator is broken: this stage of “reasoning/conclusion” is “shared” and ticked/processed once instead of “per unit”. While each unit must still “see” the elevator to take it into consideration, they have all “already interpreted that it’s a broken elevator since the first AI-controlled eyes saw it”.

I thought about making a “shared” perception for my game so that ALL AIs make decisions based on the same premises; this could shorten the “perception and reasoning” stage (see the sketch after these questions).
Case 1: If one AI has already “seen and identified” a rock, it can tell all other AIs/Units there is a rock at some place.
Case 2: Since I have more “cameras and microphones” in a building, I can use smaller, cheaper ranges.

2) In the two videos I did not notice the usage of GENERATORS, but I think they are used when we have no “body” at all, so they are a way to get stimuli without “ears or eyes”. Is AI PERCEPTION some kind of “local/incarnated” Generator? If so, and if these theories make some sense, I think that in the docs the definition of “Contexts” as a kind of Generator just confuses a noob user (like me). While the statement is technically precise, I would like to suggest, “just for didactic sake”, limiting the Generators’ role to “what kind of stimulus the AI is sensing” and leaving the “reasoning, filtering, meaning” to Contexts.
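(Just to make the “shared perception” idea above concrete, here is a rough C++ sketch of the kind of team-knowledge object I have in mind. Everything here, including UTeamKnowledge and ReportSighting, is made up for the example and is not part of the engine’s AI framework.)

```cpp
#include "CoreMinimal.h"
#include "UObject/Object.h"
#include "GameFramework/Actor.h"
#include "TeamKnowledge.generated.h"

UCLASS(BlueprintType)
class UTeamKnowledge : public UObject
{
    GENERATED_BODY()

public:
    // Called from a unit's perception callback the first time it notices something.
    void ReportSighting(AActor* SeenActor, const FVector& Location)
    {
        // Case 1: once one unit has "seen and identified" the rock/tree/broken
        // elevator, every other unit can read the conclusion instead of
        // perceiving and interpreting it again.
        KnownActors.Add(SeenActor, Location);
    }

    bool IsKnown(AActor* Actor) const
    {
        return KnownActors.Contains(Actor);
    }

    bool GetLastKnownLocation(AActor* Actor, FVector& OutLocation) const
    {
        if (const FVector* Found = KnownActors.Find(Actor))
        {
            OutLocation = *Found;
            return true;
        }
        return false;
    }

private:
    // Last known location of everything any unit on the team has perceived.
    TMap<TWeakObjectPtr<AActor>, FVector> KnownActors;
};
```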

B) BLACKBOARD KEYS

These are (remember, I’m still in the “Are these…?” area) what our AI needs to understand, or “the things” relevant to letting it play: things it needs to reason about/identify in the “world” (things it needs to interact with) or rules (Am I hungry? Harmed?) to make decisions upon. For example, a Woodcutter AI needs to know that it has to look for “something” called a tree, and also whether it can carry more lumber on its shoulder, whether it has an axe, and so on…

QUESTIONS ABOUT THE TOPIC:

1) What’s the best approach (performance-wise only; I know it could depend on other aspects of the game)?
A “common set of things” (a sort of single Blackboard for all) or a multiple-Blackboard approach filtering what each kind of unit needs to know?
E.g. the “Miner” unit uses a Blackboard Key set that has “STONE” but not “TREE”, while the “Woodcutter” has “TREE” but not “STONE”… Or maybe both units know “about the existence of both things” while the Miner will never “reason about or deal” with a tree.

C) THE EQS CONTEXT

This is the process/rules used by the AI to “interpret” the signs/stimuli it has received from AI PERCEPTION and conclude whether what it’s seeing (or hearing, whatever) is something it’s looking for (a Blackboard Key) or not. Or to identify some condition just by checking itself (the load on the Woodcutter’s shoulders).

QUESTIONS ABOUT THE TOPIC:

1) Another “what performs better” question:

While all units could “identify” the same stimulus as some thing or situation, e.g. “A Tree”, each unit can have a different “usage” for it, so the “Context for reasoning” will be the same but the usages differ: Woodcutter = Resource, Woodpecker = Food or Home, talking about the same tree.

This could lead me to have 2 Behaviour Trees, an Identify Trees > Cut for the Woodcutter and an Identify Trees > Eat for the Woodpecker, or a single one for both with a branch to deal with trees depending on the Querier type.

D) NAV MESH "dynamic holes"

I went crazy over the Dynamic Obstacles feature; how can I get that same optimization on SkeletalMesh Actors?

I think that’s all… Thank you!

Hello!

A) Basically, yes, this is correct. The AI Perception component is just a bunch of information about what the character sees and hears around it.

  1. It’s placed in the AI Controller because that is the thing that drives the character. It works more like the character’s brain than anything else. It’s also where you would run the behavior tree to do things based on what the AI character sees and hears. You could say the Behavior Tree is kind of like the neurons firing in the character’s brain and dictating its decision making.
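To make that concrete, here is a minimal, hedged sketch of an AI Controller that owns the perception component (the “eyes and ears”) and runs a behavior tree when it possesses a pawn. The class and property names (AWoodcutterAIController, BehaviorTreeAsset) are placeholders, and OnPossess is the override name in later UE4 versions (earlier ones use Possess).

```cpp
#include "AIController.h"
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Sight.h"
#include "BehaviorTree/BehaviorTree.h"
#include "WoodcutterAIController.generated.h"

UCLASS()
class AWoodcutterAIController : public AAIController
{
    GENERATED_BODY()

public:
    AWoodcutterAIController()
    {
        // The "eyes and ears" live on the controller (the "soul"), not the pawn.
        PerceptionComp = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("Perception"));

        UAISenseConfig_Sight* SightConfig =
            CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("SightConfig"));
        SightConfig->SightRadius = 1500.f;
        SightConfig->LoseSightRadius = 2000.f;
        SightConfig->PeripheralVisionAngleDegrees = 90.f;

        PerceptionComp->ConfigureSense(*SightConfig);
        PerceptionComp->SetDominantSense(SightConfig->GetSenseImplementation());
    }

protected:
    virtual void OnPossess(APawn* InPawn) override
    {
        Super::OnPossess(InPawn);

        // The behavior tree is the "neurons firing": it makes decisions based on
        // what the perception component reports (usually via blackboard keys).
        if (BehaviorTreeAsset)
        {
            RunBehaviorTree(BehaviorTreeAsset);
        }
    }

    UPROPERTY(VisibleAnywhere)
    UAIPerceptionComponent* PerceptionComp;

    // Assigned in the editor on the controller's Blueprint subclass.
    UPROPERTY(EditDefaultsOnly)
    UBehaviorTree* BehaviorTreeAsset;
};
```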

Generators are related to the Environment Query System. All they do is “generate” a set of points for you. These can be potential locations for your character to walk to, or objects to pick up, and so on. The part where the AI Perception component comes in is in narrowing down those generated points (if that is your wish) to the best one, whether it be one that is directly in front of the character (in line of sight) or close to an object for the character to hide behind, etc. Contexts would be where this becomes important, since they are what help you narrow down your results the most.
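As a hedged illustration of the generator/context split: a generator (e.g. “points on a grid” or “actors of a class”) produces the candidates, and a custom context like the sketch below supplies the anchor those candidates are scored against. The class name UEnvQueryContext_CurrentTarget and the “TargetActor” blackboard key are assumptions for this example, and it assumes the query was run with the pawn as the querier.

```cpp
#include "EnvironmentQuery/EnvQueryContext.h"
#include "EnvironmentQuery/EnvQueryTypes.h"
#include "EnvironmentQuery/Items/EnvQueryItemType_Actor.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "GameFramework/Pawn.h"
#include "AIController.h"
#include "EnvQueryContext_CurrentTarget.generated.h"

UCLASS()
class UEnvQueryContext_CurrentTarget : public UEnvQueryContext
{
    GENERATED_BODY()

public:
    virtual void ProvideContext(FEnvQueryInstance& QueryInstance,
                                FEnvQueryContextData& ContextData) const override
    {
        // Assumes the query owner is the pawn running the behavior tree.
        APawn* QuerierPawn = Cast<APawn>(QueryInstance.Owner.Get());
        if (!QuerierPawn)
        {
            return;
        }

        AAIController* Controller = Cast<AAIController>(QuerierPawn->GetController());
        if (!Controller || !Controller->GetBlackboardComponent())
        {
            return;
        }

        // Read the actor the behavior tree has stored in the blackboard
        // ("TargetActor" is an assumed key name) and hand it to the query,
        // so tests can measure distance, visibility, etc. relative to it.
        AActor* Target = Cast<AActor>(
            Controller->GetBlackboardComponent()->GetValueAsObject(TEXT("TargetActor")));
        if (Target)
        {
            UEnvQueryItemType_Actor::SetContextHelper(ContextData, Target);
        }
    }
};
```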

B) Yes, this is essentially correct. Blackboard keys are variables that the behavior tree keeps track of and updates depending on what your character’s tasks are and where those variables/keys are set (a small sketch follows below).
  1. I can’t answer this, as I would like to know the same thing.
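To make the “keys are variables” part concrete, here is a small, hedged sketch of a behavior-tree task that reads and writes blackboard keys. “TargetTree” and “CarriedLumber” are invented key names for the woodcutter example, not anything defined by the engine.

```cpp
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BehaviorTreeComponent.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "BTTask_ChopTree.generated.h"

UCLASS()
class UBTTask_ChopTree : public UBTTaskNode
{
    GENERATED_BODY()

public:
    virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp,
                                            uint8* NodeMemory) override
    {
        UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
        if (!Blackboard)
        {
            return EBTNodeResult::Failed;
        }

        // Read a key the rest of the tree has filled in (e.g. via EQS or perception).
        AActor* Tree = Cast<AActor>(Blackboard->GetValueAsObject(TEXT("TargetTree")));
        if (!Tree)
        {
            return EBTNodeResult::Failed;
        }

        // Update another key so decorators elsewhere in the tree can react,
        // e.g. "am I carrying too much lumber to keep chopping?".
        const int32 Carried = Blackboard->GetValueAsInt(TEXT("CarriedLumber"));
        Blackboard->SetValueAsInt(TEXT("CarriedLumber"), Carried + 1);

        return EBTNodeResult::Succeeded;
    }
};
```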

C) This is also essentially correct. These are basically tests that you are doing to see which results are better than others. Again, narrowing down your AI character’s options.

  1. Personally, I think it makes more sense to have separate behavior trees for AI characters that are going to be behaving differently toward certain stimuli, just to make things simpler and easier for yourself. If I’m not mistaken, having multiple characters following separate behavior trees (and therefore separate AI Controllers) would not cost you much more than having multiple characters following the same behavior tree.
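One common alternative, if you do want separate trees without duplicating controller code, is to let each pawn type point at its own behavior tree asset and have a single controller run whatever the possessed pawn provides. A hedged sketch, assuming a hypothetical AUnitCharacter base class:

```cpp
#include "GameFramework/Character.h"
#include "BehaviorTree/BehaviorTree.h"
#include "UnitCharacter.generated.h"

UCLASS()
class AUnitCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Assigned per unit type in the editor: the Woodcutter blueprint points at
    // its "chop trees" tree, the Woodpecker blueprint at its "eat/nest" tree.
    UPROPERTY(EditDefaultsOnly, Category = "AI")
    UBehaviorTree* BehaviorTree = nullptr;
};
```

The controller’s OnPossess can then cast the pawn to this class and call RunBehaviorTree with the pawn’s BehaviorTree property, so the “Woodcutter vs Woodpecker” split lives in the pawn assets rather than in separate controller classes.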

D) I’m currently researching the Detour Crowd AI Controller to see how I would accomplish the same thing. Currently there is one way it can be done in Blueprints, though I’m not sure yet how. There is also a way to do it in code, which is explained in more detail. You can find something about it here: http://unreal-ai-tutorial.info/index.php?start=5
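For the code route, the engine already ships ADetourCrowdAIController, which you can simply use as your controller class. If you want the same behaviour on your own controller, the usual pattern (sketched below; the class name ACrowdAIController is a placeholder) is to swap the path-following component in the constructor:

```cpp
#include "AIController.h"
#include "Navigation/CrowdFollowingComponent.h"
#include "CrowdAIController.generated.h"

UCLASS()
class ACrowdAIController : public AAIController
{
    GENERATED_BODY()

public:
    // Replace the default path-following component with the crowd-aware one,
    // which is essentially what ADetourCrowdAIController does internally.
    ACrowdAIController(const FObjectInitializer& ObjectInitializer)
        : Super(ObjectInitializer.SetDefaultSubobjectClass<UCrowdFollowingComponent>(
              TEXT("PathFollowingComponent")))
    {
    }
};
```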

I hope this has helped you a little!

Thanks Psoih, I’m still thinking about ways to “fit” the tool to my ideas or my ideas to the tool. My first AI experiments were in the UDK era; I did nothing complex, just attack-on-sight behaviours.
Now with more tools (and less laziness) I’m thinking of expanding the strategy side of my mobs a bit.

About D, I think it’s a bit simpler than what you are trying to do. I just have some trees that fall (skeletal animation instead of physics), so some preset would help.
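(For the falling-tree case, one hedged option is to give the tree actor a UNavModifierComponent with a null nav area, so a navmesh with Runtime Generation set to Dynamic carves a “hole” under the trunk. Names like AFallingTree are made up for this sketch, header paths and the NavigationSystem module dependency vary between engine versions.)

```cpp
#include "GameFramework/Actor.h"
#include "Components/SkeletalMeshComponent.h"
#include "NavModifierComponent.h"
#include "NavAreas/NavArea_Null.h"
#include "FallingTree.generated.h"

UCLASS()
class AFallingTree : public AActor
{
    GENERATED_BODY()

public:
    AFallingTree()
    {
        Mesh = CreateDefaultSubobject<USkeletalMeshComponent>(TEXT("Mesh"));
        RootComponent = Mesh;

        // Marks the actor's bounds as a NavArea_Null region, i.e. not walkable.
        // The RecastNavMesh in the level must use dynamic runtime generation
        // for the hole to show up while playing.
        NavModifier = CreateDefaultSubobject<UNavModifierComponent>(TEXT("NavModifier"));
        NavModifier->AreaClass = UNavArea_Null::StaticClass();
    }

protected:
    UPROPERTY(VisibleAnywhere)
    USkeletalMeshComponent* Mesh;

    UPROPERTY(VisibleAnywhere)
    UNavModifierComponent* NavModifier;
};
```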