[AI] Questions about AI Perception
I currently implement my own visibility system, but AI Perception seems to be widely used now, so I'd like to understand how it works and compare the pros and cons. I'm only interested in the visibility part.
If someone can help me understand it, that would be great. Some high-level explanation of the process would do:
1) How is visibility calculated? I'm assuming it's distance culling, then angle culling, then a raycast, but I could be outdated on those methods, so please enlighten me.
2) Is visibility calculated every tick for all actors? Does the system support priority queuing or asynchronous processing?
3) Does it use any form of potentially-visible-actor set, and if so, how?
4) Share some thoughts on its performance. Is it feasible for a 30-vs-30 AI encounter in an FPS?
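For reference, the cheap-to-expensive gating sequence described in question 1 can be sketched in plain C++. The vector types and the `raycastClear` callback are simplified stand-ins for illustration, not engine API:

```cpp
#include <cmath>

// Minimal sketch of the classic sight-check pipeline: distance cull,
// then angle (FOV) cull, then a line-of-sight raycast, ordered so the
// cheapest tests reject targets before the expensive trace runs.
struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Len(Vec3 v)         { return std::sqrt(Dot(v, v)); }
static Vec3  Norm(Vec3 v)        { float l = Len(v); return {v.x / l, v.y / l, v.z / l}; }

// Returns true when `target` passes all three gates as seen from an
// observer at `eye` facing unit vector `forward`. `raycastClear` stands
// in for the physics trace (true = nothing blocking the line).
template <typename RaycastFn>
bool CanSee(Vec3 eye, Vec3 forward, Vec3 target,
            float maxDist, float halfAngleCos, RaycastFn raycastClear)
{
    Vec3 toTarget = Sub(target, eye);
    float dist = Len(toTarget);
    if (dist > maxDist)                                // 1) distance cull (cheapest)
        return false;
    if (Dot(Norm(toTarget), forward) < halfAngleCos)   // 2) angle cull (dot vs. cos of half-FOV)
        return false;
    return raycastClear(eye, target);                  // 3) raycast last (most expensive)
}
```

Note the angle test compares a dot product against the cosine of the half field-of-view angle (e.g. 0.5 for a 120° total cone), which avoids any trig per check.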
asked Nov 12 '15 at 03:12 AM in Blueprint Scripting
Visibility checks are done via AISense_Sight.
1) Each update, a queue of sight queries is processed against enemy targets. For each query, it first does a broad-phase dot-product check on the direction from the sight listener toward the sight source. If that passes, it calls CanBeSeenFrom on the sight target if that interface is implemented; otherwise it performs a line trace against ECC_WorldStatic to determine whether the target is visible. It only performs a certain number of traces each update, and it weights query importance based on distance to the target.
2) Every frame, with a cap on traces; not asynchronous.
3) No.
4) I implement CanBeSeenFrom myself and do two additional traces plus some extra dot checks for other strength/importance values. Line traces are not that expensive; CPU has never been a problem in our project. At most we have around 20 AI looking for 1 to 2 players.
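The budgeted queue from point 1 can be sketched in plain C++. The importance formula (inverse distance) and the names here are illustrative assumptions, not the engine's actual scoring, but the shape is the same: sort pending queries by importance and trace only as many as the per-update budget allows, carrying the rest over to the next frame:

```cpp
#include <algorithm>
#include <vector>

// Sketch of a trace-budgeted sight-query queue: each update, the most
// important queries get a raycast; the remainder wait for a later frame.
struct SightQuery {
    int   targetId;
    float distance;  // observer-to-target distance
    // Assumed scoring: closer targets are more important.
    float Importance() const { return 1.0f / (1.0f + distance); }
};

// Processes one update. Returns the target ids traced, most important
// first; queries over budget remain in `queue` for the next update.
std::vector<int> RunUpdate(std::vector<SightQuery>& queue, int maxTracesPerUpdate)
{
    std::sort(queue.begin(), queue.end(),
              [](const SightQuery& a, const SightQuery& b) {
                  return a.Importance() > b.Importance();
              });
    int budget = std::min<int>(maxTracesPerUpdate, static_cast<int>(queue.size()));
    std::vector<int> traced;
    for (int i = 0; i < budget; ++i)
        traced.push_back(queue[i].targetId);  // the actual line trace would run here
    queue.erase(queue.begin(), queue.begin() + budget);  // leftovers carry over
    return traced;
}
```

This is why a hard per-update trace cap keeps cost flat even as the number of AI/target pairs grows; the trade-off is added latency before distant targets are noticed.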
answered Feb 06 '16 at 01:38 AM