5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,208
Reaction score
1,472
Location
USA
So I figured I would take the plunge and delete DeepStack from my main BI so I could get a good idea of how well SenseAI works. Well, to be honest, it works great. Sure, it is detecting a few objects that I don't want, but it filters them fine to what you select. The detection times are a little longer, but that is expected using the default setup and no custom models. I'm running a lot of cams with AI analysis enabled and will be checking times for the next few days. At this point I see no need to go back to DeepStack.

I played around with the CodeProject SenseAI explorer... not sure exactly what it's for, but I found it works very well for removing backgrounds from images. Here is an example:

[Screenshots: step 1, result, download]
 

lane777smith

Getting the hang of it
Joined
May 11, 2022
Messages
148
Reaction score
76
Location
Texas
Tinman said: (post quoted above)
I did the same thing, and so far SenseAI seems to run quite well and use fewer resources. I wouldn't say it's any quicker or slower than DeepStack, but so far so good.
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
274
Location
Sydney
For those who prefer Docker, installation is a single command from PowerShell.
I prefer Docker because SenseAI pulls in a ton of dated dependencies, and for stability I like to keep Windows as clean as possible.
From a PowerShell prompt, run:
docker run -d -p 5000:5000 --name 'SenseAI-Server' --restart always -v C:\ProgramData\CodeProject\SenseAI:/usr/share/CodeProject/SenseAI codeproject/senseai-server
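Once it's running, you can sanity-check it before pointing Blue Iris at it. A rough sketch, assuming SenseAI answers the DeepStack-style /v1/vision/detection endpoint on port 5000 (swap test.jpg for any image you have handy):

Code:
# curl.exe ships with recent Windows and supports multipart uploads;
# on Linux/WSL plain curl works the same way
curl.exe -s -F "image=@test.jpg" http://localhost:5000/v1/vision/detection
If the container is healthy you should get a JSON response with a predictions list back, and the SenseAI dashboard should also load at http://localhost:5000 in a browser.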

hope this helps
 

netmax

Getting the hang of it
Joined
Sep 6, 2015
Messages
30
Reaction score
25
I'm running my DeepStack in a separate VM, so it was pretty straightforward to spin up another VM running SenseAI. With the "Override server" checkbox it's extremely easy to switch between the two per camera by just checking the box and entering the IP of the SenseAI server (the default is DeepStack). So I've played around a bit.

Some questions arose:

1) Is this "just" a wrapped DeepStack?
While pulling the docker image, it was pulling "deepstack", and in the docker logs the settings all mention "DeepStack":

Code:
------------------------------------------------------------------
Setting Environment variables for Scene Classification
------------------------------------------------------------------
ERRLOG_APIKEY    = ed359c3a-8a77-4f23-8db3-d3eb5fac23d9
PORT             = 5000
APPDIR           = /app/AnalysisLayer/DeepStack/intelligencelayer
CUDA_MODE        = False
DATA_DIR         = /usr/share/CodeProject/SenseAI
MODE             = MEDIUM
MODELS_DIR       = /app/AnalysisLayer/DeepStack/assets
PROFILE          = desktop_cpu
TEMP_PATH        = /app/AnalysisLayer/DeepStack/tempstore
VIRTUAL_ENV      = /app/AnalysisLayer/bin/%PYTHON_RUNTIME%/venv
VISION-SCENE     = True

2) Performance test
Is there any way to get the exact analysis time per picture from SenseAI? (For now I can only time it from the client side; see the sketch below.) In DeepStack you can just look at the docker logs:

Code:
[GIN] 2022/06/22 - 14:19:37 | 200 |  743.530125ms |       10.32.1.8 | POST     "/v1/vision/detection"
[GIN] 2022/06/22 - 14:19:37 | 200 |   810.86594ms |       10.32.1.8 | POST     "/v1/vision/detection"
[GIN] 2022/06/22 - 14:19:37 | 200 |  568.694013ms |       10.32.1.8 | POST     "/v1/vision/detection"
[GIN] 2022/06/22 - 14:19:38 | 200 |  578.653844ms |       10.32.1.8 | POST     "/v1/vision/detection"
[GIN] 2022/06/22 - 14:19:38 | 200 |  536.494765ms |       10.32.1.8 | POST     "/v1/vision/detection"
[GIN] 2022/06/22 - 14:19:39 | 200 |  617.679457ms |       10.32.1.8 | POST     "/v1/vision/detection"
In the SenseAI logging window from the UI I just see strange entries like this:

Code:
14:43:00: detection.py: Getting Command from detection_queue took 0.00398 seconds
... which is absolutely not the image processing time. No way it's 4 ms on a CPU-based VM. And sometimes this line mentions 8 or 9 seconds... nope, my server is not that slow.
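In the meantime, the best I can do is measure the round trip from the client side. A rough sketch with curl, assuming SenseAI answers the same DeepStack-style /v1/vision/detection endpoint (replace the host and snapshot.jpg with your own):

Code:
# fire the same image at the server a few times and print the total time per request
for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w "%{time_total}s\n" \
    -F "image=@snapshot.jpg" http://senseai-vm:5000/v1/vision/detection
done
This includes network and queueing time, not just the pure inference time, but at least it gives numbers that are comparable between the two VMs.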


3. Server load
Both VMs have the same configuration: max 15 vCPU (Dell R720, dual E5-2650L v2, ESXi 6.7U3) and 8 GB RAM, running on the same host and the same datastore.

Comparing a recorded clip in BI with "Test & Tuning" --> "Analyze with AI" turned on - once with DeepStack, once with SenseAI - SenseAI seems to detect more objects during playback. By eye I can't see a big difference in how often the objects are updated. But again, is there a way to get some performance data in BI itself here?

However, doing exactly this with the same clip, DeepStack drives the overall CPU load in the VM to 18%, while SenseAI goes to 38%, with most of the load on a "dotnet" process.

As I have no way of knowing:
  • how many pictures have been sent for analysis by BI
  • what was the exact processing time per picture

... the open question is: is SenseAI slower in general, needing double the resources for the same work, or does it just multithread better and process more pictures in the same time because a single analysis takes less time with the same number of CPU cores?


4. Where to set the mode (low, medium, high) or other environment variables?
In the DeepStack VM, I can configure the mode and other things with environment variables like this in my docker-compose file:

Code:
    environment:
      - TZ=${TZ}
      - PUID=${PUID}
      - PGID=${PGID}
      - VISION-SCENE=True
      - VISION-FACE=True
      - VISION-DETECTION=True
      - MODE=Medium
      - THREADCOUNT=10
There doesn't seem to be anything documented like this for SenseAI. Where can I configure such things, especially the mode?
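The only clue I can see is that the container prints a MODE variable at startup (see the environment dump above), so maybe it can be overridden the usual docker way. This is just a guess I haven't verified, and the host path below is a placeholder:

Code:
# MODE appears in the startup log; whether the container actually honours an
# externally set value is unverified
docker run -d -p 5000:5000 -e MODE=High --restart always \
  -v /opt/codeproject/senseai:/usr/share/CodeProject/SenseAI \
  --name senseai-server codeproject/senseai-server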

Maybe @ChrisMaunder can answer some of these?
 

netmax

Getting the hang of it
Joined
Sep 6, 2015
Messages
30
Reaction score
25
OK, it is much slower in real life on the same server with the same resources. So not production-ready right now.

Two typical alarms on my front cam (a person walking by):

DeepStack:

[Screenshots: DeepStack alert analysis times]

And after switching to SenseAI on the same cam:
[Screenshots: SenseAI alert analysis times]

That is an average increase of about 250 ms (or 50%) per picture, so it's not an option for me right now.

I'll keep my SenseAI VM up and running since the project is still in beta and a work in progress, and I'll give the next versions a try. But without a massive performance boost it has no chance against DeepStack here.
 

spammenotinoz

Getting comfortable
Joined
Apr 4, 2019
Messages
345
Reaction score
274
Location
Sydney
netmax said: (post quoted above)
Do you want reliability or efficiency?
That's great feedback, and I concur that DeepStack is more efficient on a per-image basis, but I found there is no way I can risk going back to DeepStack, and with minor tuning my overall resource usage is about the same. DeepStack is very much beta as well, and there was a long absence of support.
My 2c using SenseAI:
  • Far more accurate object detection (when I review footage, DeepStack misses quite a lot of events and often gets things wrong; it could be the models versus DeepStack itself, but as a user I just want it to work)
  • Has not yet suffered from the "Not Responding" bug or the "No Object Found" bug (which I usually mitigate with daily DeepStack restarts and by running multiple docker instances on different ports)
  • Due to the higher detection rates, I don't need to feed it as many images (a huge reduction in system load)
  • Doesn't suffer from the DeepStack pauses; reviewing the BI logs, there are periods where DeepStack takes a long time to respond or wake up.
So yes, on a per-image basis SenseAI appears to use more resources, but even with my obscene number of cameras it's running fine with plenty of headroom. During my daily reviews the increased accuracy is noticeable. In test runs they both detect the same objects, but in the wild DeepStack just has these "nothing found" moments; feed it the same image later and it finds the objects.

PS: I have been very happy with DeepStack; I didn't really notice the "nothing founds" until I ran both in parallel on the same camera (clones). Another reminder of why to always record continuously.
 

Dave Lonsdale

Pulling my weight
Joined
Dec 3, 2015
Messages
456
Reaction score
195
Location
Congleton Edge, UK
I found DS really easy to use from the moment it was introduced, until maybe a couple of weeks ago when everything started to report "nothing found". I haven't been able to find out what I did wrong, so I decided to try installing SenseAI instead. It reported "AI: not responding" until I changed the port to 5000. Now it seems to work really well, with processing times of around 250 ms.

I have several queries:
1. I changed the hardware decode on all cameras from "Nvidia NVDEC" to "No", but the Nvidia T600 still shows 20% in Task Manager/Performance. With my very limited knowledge I assumed it would be 0%. What don't I understand? (I'm using RDP, if that makes any difference.)
2. MikeLud1 said in post #63 that, even if "use main stream if available" is checked, the resolution is reduced to 640x480. Again, with my very basic understanding, if this is correct, what's the benefit of checking the main stream option, and wouldn't that, for example, screw up face detection? (I don't have it checked.)
3. DS has a minimum confidence threshold of 40%. What is it when using SenseAI? (I prefer a few false positives to a few missed objects.)
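For what it's worth, with DeepStack the threshold can also be passed per request as a min_confidence field, for example (a hypothetical test from a shell; I don't know whether SenseAI honours the same field or what its default is):

Code:
# 0.4 = 40% minimum confidence, matching the DeepStack default mentioned above
curl -s -F "image=@test.jpg" -F "min_confidence=0.4" http://localhost:5000/v1/vision/detection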
 

Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,208
Reaction score
1,472
Location
USA
My times have improved a lot using combined.pt. Note: I am running this on my demo PC, which is not the machine from my post above, so it's not exactly a true comparison until I put the newer version on the other BI install. I just did a walk around the house to get some more timings, and it seems to work fine.
 


Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,208
Reaction score
1,472
Location
USA
Well, I am not sure if it is using combined.pt or not. I was just taking a stab in the dark and probably not doing it right, but after a while the custom detection stops. I'll post my screenshots of what I tried, but I'm done for now and will wait for the experts :)

[Screenshots of the settings tried]
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,690
Location
New Jersey
In the "custom models" box on the AI configuration you need to specify the model name and not the path. For example it should be "combined" for the combined model.
 

Tinman

Known around here
Joined
Nov 2, 2015
Messages
1,208
Reaction score
1,472
Location
USA
In the "custom models" box on the AI configuration you need to specify the model name and not the path. For example it should be "combined" for the combined model.
Yep, I tried that first, but I would get no detections at all, and Custom Objects would stop and go orange as well. By putting in the path it works, but I think it's just using the default model, so it appears something is not right.
 

sebastiantombs

Known around here
Joined
Dec 28, 2019
Messages
11,511
Reaction score
27,690
Location
New Jersey
I'd say check the spelling of the model in the models directory just to be safe. It should work properly with the specific model name. If not, there's another little twist with SenseAI. I'm still using DS until SenseAI gets a GPU version going.
 

MikeLud1

IPCT Contributor
Joined
Apr 5, 2017
Messages
2,141
Reaction score
4,118
Location
Brooklyn, NY
I did some testing and the custom model module does work. You have to copy the custom models to the folder below, and in Blue Iris you also need to set the custom model path to the same folder (see the settings below). There is an issue if you use the general.pt model: there is a line of code (see below) that changes it to the yolov5m.pt model. To make the general.pt model work I had to rename it to general1.pt. All of my other custom models should work fine.

[Screenshot]

Folder
[Screenshot of the custom models folder]

Settings
[Screenshot of the Blue Iris custom model setting]

Code
Code:
                    if model_name == "general":
                        model_name = "yolov5m"
 

stratfordwill

Getting the hang of it
Joined
Jun 29, 2014
Messages
27
Reaction score
58
So I've been testing. Thanks for pointing out the weird general/general1 thing; it had me pulling my hair out trying to figure out why it wasn't using the model. Detection seems faster, but...

While testing/tuning I still get boxes drawn for suitcase, TV, and potted plant. It indicates that it's using the general1 model, but it's clearly still detecting things other than person and vehicle. Not sure what's happening.

Anyway, thank you Mike for all the work you're putting into the custom models.
 