If someone could “confirm/deny/explain if wrong” these concepts I’ll be thankful. I got the AI videos yesterday to begin implementing my game AI, and before “using” the system I need to reason about it and try to understand “what’s what”, or, as your AI wording puts it, the “Context” of each thing and the “Sequencer” that uses this info.
While your documentation does expose the “usage” of the system on the Editor/Tool side, to build on top of this I need to translate your concepts into “basic concepts I could explain to a child”, so after theorizing a bit I would like to check whether “I got it” and ask some questions.
Please remember, all these points are “questions”; you can read every one of them with an implicit “Is this…?” attached.
A) The Beginning… AI PERCEPTION
Basically the eyes and ears of our AI: its senses and stimuli. As an example, the AI is just seeing a lot of shapes, shades and colors, and hearing a lot of sounds, but not “reasoning” about anything at all.
QUESTIONS ABOUT THE TOPIC:
1) Why is it placed on the AIController and not on the Pawn or any other “incarnation” Actor?
Considering the AIController as a “Soul/Consciousness”, the ears and eyes should be on the body, not on the “immaterial” spirit. While generating an Overmind could “initially” be a problem for some kinds of games, this Overmind could maybe “filter” the gameplay/strategic-advantage info to be processed by each unit, while giving all the units the “same world reasoning”.
E.g. all units know the elevator is broken: this “reasoning/conclusion” stage is “shared” and ticked/processed once, not per unit. While every unit must still “see” the elevator to take it into consideration, they have all “already interpreted it as a broken elevator since the first AI-controlled eyes saw it”.
I thought about making a “shared” perception for my game, so ALL AIs make decisions based on the same premises; this could shorten the “perception and reasoning” stage.
Case 1: if one AI has already “seen and identified” a rock, it can tell all the other AIs/units that there is a rock at some place.
Case 2: since I have more “cameras and microphones” spread around a building, I can use smaller and cheaper perception ranges.
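Just to make the idea concrete, here is how I picture that shared perception: a minimal plain-C++ sketch (all names here are mine, not an engine API), where the first unit to interpret something pays the reasoning cost once and every other unit just reads the conclusion:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical "Overmind": one shared fact store instead of per-unit perception.
// SharedPerceptionBoard, ReportFact, Query are my own names, not an engine API.
struct Fact {
    std::string what;   // e.g. "BrokenElevator"
    float x, y;         // where it was perceived
};

class SharedPerceptionBoard {
public:
    // A unit that perceived and *interpreted* something reports it once...
    void ReportFact(const Fact& f) {
        if (known_.count(f.what) == 0) {  // interpret only on first sighting
            known_[f.what] = f;
            ++interpretations_;           // reasoning cost paid once, not per unit
        }
    }
    // ...and every other unit just reads the shared conclusion.
    const Fact* Query(const std::string& what) const {
        auto it = known_.find(what);
        return it == known_.end() ? nullptr : &it->second;
    }
    int InterpretationCount() const { return interpretations_; }
private:
    std::unordered_map<std::string, Fact> known_;
    int interpretations_ = 0;
};
```

So even if ten units “see” the broken elevator, the interpretation runs once; is that roughly the optimization this architecture is meant to allow (or prevent)?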
- In the two videos I didn’t notice any usage of GENERATORS, but I think they are used when we don’t have a “body” at all, so they are a way to get stimuli without “ears or eyes”. Is AI PERCEPTION some kind of “local/incarnated” Generator? If it is, and if these theories make some sense, then I think the docs’ definition of “Contexts” as being a kind of Generator just confuses a noob user (like me). While the statement is technically precise, I would suggest, “just for didactic sake”, restricting the Generators’ role to “what kind of stimulus the AI is sensing” and leaving the “reasoning, filtering, meaning” to Contexts.
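To illustrate the didactic split I’m suggesting, a tiny plain-C++ sketch (my own names, not the EQS API): the “generator” side only produces raw stimuli, and the “context” side applies the meaning on top of them:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of the suggested split (plain C++, not the EQS API).
struct Stimulus {
    std::string kind;   // "sound", "sight", ...
    float loudness;
};

// Generator side: "what kind of stimulus the AI is sensing" -- no reasoning here.
std::vector<Stimulus> GenerateStimuli() {
    return { {"sound", 0.9f}, {"sight", 0.2f} };
}

// Context side: "reasoning, filtering, meaning" applied to the raw stimuli.
bool IsThreatening(const Stimulus& s) {
    return s.kind == "sound" && s.loudness > 0.5f;  // loud sounds mean danger
}
```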
B) BLACKBOARD KEYS
These are (remember, I’m still in the “Are these…?” area) what our AI needs to understand, or “the things” relevant to let it play: what it needs to reason about/identify in the “world” (things it needs to interact with), or rules (Am I hungry? Harmed?) to make decisions upon. As an example, a Woodcutter AI needs to know that it has to look for “something” called a tree, and also needs to know whether it can carry more lumber on its shoulder, whether it has an axe, and so on…
QUESTIONS ABOUT THE TOPIC:
1) What’s the best approach (performance-wise only; I know it could depend on other aspects of the game)?
A “common set of things” (a sort of single Blackboard for all) or a multiple-Blackboard approach that filters what each kind of unit needs to know?
E.g. the “Miner” unit uses a Blackboard key set that has “STONE” but not “TREE”, while the “Woodcutter” has “TREE” but not “STONE”… Or maybe both units know “about the existence of both things”, even if the Miner will never “reason about or deal with” a tree.
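To make the comparison concrete, a plain-C++ sketch of the two options (my own names, not the Blackboard asset API):

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical sketch: two ways to define what a unit "knows about".
using KeySet = std::set<std::string>;

// Option 1: one shared key set for everyone (simpler, but each unit carries
// keys it will never reason about).
const KeySet SharedKeys = {"TREE", "STONE", "IS_HUNGRY", "HAS_TOOL"};

// Option 2: per-role key sets (smaller per unit, more sets to maintain).
const KeySet MinerKeys      = {"STONE", "IS_HUNGRY", "HAS_TOOL"};
const KeySet WoodcutterKeys = {"TREE",  "IS_HUNGRY", "HAS_TOOL"};

bool Cares(const KeySet& keys, const std::string& thing) {
    return keys.count(thing) != 0;
}
```

Is the memory/iteration cost of a few unused keys per unit (option 1) actually relevant, or is this a micro-optimization I shouldn’t worry about?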
C) THE EQS CONTEXT
This is the process/rules the AI uses to “interpret” the signs/stimuli it has received from AI PERCEPTION, in order to conclude that what it’s seeing (or hearing, whatever) is, or is not, something it’s looking for (a Blackboard key). Or to identify some condition just by checking itself (the load on the Woodcutter’s shoulders).
QUESTIONS ABOUT THE TOPIC:
1) Another “what’s better for performance” question:
While all units could “identify” the same stimulus as the same thing or situation, e.g. “a tree”, each unit can make a different “usage” of it. So the “Context for reasoning” will be the same, but we can have different usages: Woodcutter = resource, Woodpecker = food or home, both talking about the same tree.
This could lead me to have two Behavior Trees, an “Identify Trees > Cut” one for the Woodcutter and an “Identify Trees > Eat” one for the Woodpecker, or a single one for both with a branch that deals with trees depending on the querier type.
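A plain-C++ sketch of the single-tree-plus-branch idea (my own names; nothing here is a real Behavior Tree node): the identification is shared, and only the follow-up action branches on the querier:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of "one behavior, branch on querier type".
enum class Role { Woodcutter, Woodpecker };

// Shared "context": both roles interpret the same stimulus as a tree.
bool IdentifyTree(const std::string& stimulus) {
    return stimulus == "tree";
}

// The branch: same identified tree, different usage per querier.
std::string UseTree(Role querier) {
    switch (querier) {
        case Role::Woodcutter: return "Cut";  // tree = resource
        case Role::Woodpecker: return "Eat";  // tree = food/home
    }
    return "Ignore";
}
```

Is there any performance (or maintenance) reason to prefer one of the two layouts, or is it purely a matter of taste?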
D) NAV MESH "dynamic holes"
I went crazy over that Dynamic Obstacles thing. How can I get that same optimization on SkeletalMesh Actors?
I think that’s all… Thank you!