Hi, just starting with Sentry. After reading through most of the Sentry threads here, I have come to some conclusions. Since this is my first impression, please correct me if I'm wrong.
- Upon trigger, BI sends exactly 1 downsized image to Sentry (in the "cloud") for analysis. The transport is encrypted (it would be outrageous if it were not), but I strongly suspect Sentry decrypts the image server-side to run person detection.
Question 1: Or has BI already done the first steps of the inference, thereby "obfuscating" the image a bit?
- One MUST configure BI correctly for Sentry to work, and I haven't found a "how to set up" list. The following is a compilation of what I saw in various posts. AFAIK, the crucial settings are:
- enable "motion sensor"
- set the correct object size and contrast. This depends on your camera setup (angle, distance, ...). Remark: it is a shame BI does not allow perspective controls, as some others do; they would help with high vantage points.
- set the correct "min. duration". This determines the image that is sent to Sentry. This setting is tricky and depends heavily on the expected speed of the subject to be detected. Remark: the problem is that you do not know the speed of the subject in advance. I think this is the weak point; Sentry should get more than 1 image.
Question 2: Is there any chance you can send more than 1 image per alert? Like 1 per frame (as others do)? I think that would improve detection quality a lot.
- do not highlight motion (otherwise Sentry gets the "enhanced" image). Remark: that is a bit of a shame; BI could send a non-"enhanced" image to Sentry while keeping the rectangle and motion highlighting in the alert clips.
- set object detection. "Object travels" in particular serves almost the same purpose as "minimum duration" (but is better suited, IMHO), along with "Reset detector".
- set Zones and hotspot: helps exclude unwanted zones.
- and the other settings in the "motion sensor" screen.
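To make the "min. duration" remark above concrete, here is a back-of-envelope sketch of how the setting relates to subject speed. All numbers (scene width, walking/driving speeds, the fraction-of-path trigger) are my own illustrative assumptions, not values from any BI documentation:

```python
# Rough estimate of a "min. duration" value from expected subject speed.
# Illustrative assumptions only -- not taken from Blue Iris documentation.

def min_duration_seconds(fov_width_m: float, subject_speed_mps: float,
                         fraction_of_path: float = 0.25) -> float:
    """Time (s) a subject needs to cross a fraction of the visible scene.

    fov_width_m       -- scene width covered at the subject's distance
    subject_speed_mps -- expected subject speed in metres per second
    fraction_of_path  -- how much of the scene must be crossed to trigger
    """
    return fov_width_m * fraction_of_path / subject_speed_mps

# A walking person (~1.4 m/s) crossing a quarter of a 10 m wide scene:
walking = min_duration_seconds(10.0, 1.4)
# A car at 30 km/h (~8.3 m/s) in the same scene:
driving = min_duration_seconds(10.0, 30 / 3.6)
print(f"walking: {walking:.2f} s, driving: {driving:.2f} s")
```

The spread between the two results (roughly 1.8 s vs 0.3 s for the same scene) is exactly why one fixed duration cannot fit both pedestrians and vehicles.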
Question 3: Is this all? Or are there other settings to be considered?
- Am I right in my assumptions? If not, please correct me. I just want to understand.
- See the 3 questions above.
- I know the answer ("no"), but I'll still try: is there any chance Sentry could run at the edge (i.e. locally)? Some of your competitors allow that. With GPU acceleration and the EdgeTPU being rather accessible, and the models getting better and better (I personally find the quality of object classification via MobileNetV2 on an EdgeTPU rather good, considering the few watts it takes), it is not out of reach for the typical BI user.
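To illustrate why local inference is within reach: the heavy lifting is the model itself, and the glue code around it is tiny. Below is a minimal, hardware-free sketch of the post-processing step such an edge pipeline needs. The uint8 scores and the label map are made up for illustration (a real setup would feed a MobileNetV2 output tensor from, e.g., the TensorFlow Lite runtime with an EdgeTPU delegate):

```python
# Sketch of the classification post-processing an edge "person detector"
# would run on a MobileNetV2-style quantized output tensor.
# LABELS and the score values are hypothetical, for illustration only.

LABELS = ["background", "person", "vehicle", "animal"]

def top_label(scores, threshold=0.5):
    """Dequantize uint8 scores to [0, 1] and return (label, score) if the
    best class clears the confidence threshold, else None."""
    probs = [s / 255.0 for s in scores]
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None
    return LABELS[best], probs[best]

print(top_label([10, 230, 40, 5]))   # strong "person" score
print(top_label([60, 70, 50, 40]))   # nothing confident -> None
```

Everything camera-specific (min. duration, zones, object size) would stay in BI; only the image and this kind of decision would move from the cloud to the local box.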